Regarding splitting of dates

Hi, I want to split a date range entered by the user on a monthly basis. If there is any way of doing it, please let me know.
Suppose I have a starting date of 15.06.2001 and an end date of 31.05.2002,
then the output that I want is
15.06.2001 to 30.06.2001
01.07.2001 to 31.07.2001
01.08.2001 to 31.08.2001
01.09.2001 to 30.09.2001
01.10.2001 to 31.10.2001
01.11.2001 to 30.11.2001
01.12.2001 to 31.12.2001
01.01.2002 to 31.01.2002
01.02.2002 to 28.02.2002
01.03.2002 to 31.03.2002
01.04.2002 to 30.04.2002
01.05.2002 to 31.05.2002
Any code or function module to split the date range into this kind of format will do.
Thanks and regards,
Ashish

Please try the following code.
*& Report  ZDATES                                                      *
report  zdates.
parameters: p_begda type sydatum,
            p_endda type sydatum.
types:
      begin of typ_dates,
       begda type sydatum,
       endda type sydatum,
      end of typ_dates.
data: t_dates type standard table of typ_dates,
      rec_dates like line of t_dates.
do.
* the first interval starts on the date entered by the user
  if sy-index eq 1.
    rec_dates-begda = p_begda.
  endif.
  call function 'RP_LAST_DAY_OF_MONTHS'
    exporting
      day_in            = rec_dates-begda
    importing
      last_day_of_month = rec_dates-endda.
* clamp the last interval to the entered end date and stop
  if rec_dates-endda >= p_endda.
    rec_dates-endda = p_endda.
    append rec_dates to t_dates.
    exit.
  endif.
  append rec_dates to t_dates.
* the next interval starts on the first day of the following month
  rec_dates-begda = rec_dates-endda + 1.
enddo.
loop at t_dates into rec_dates.
  write: / rec_dates-begda,rec_dates-endda.
endloop.
~Suresh
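
If the HR function module RP_LAST_DAY_OF_MONTHS is not available in your system, the last day of a month can also be derived with plain date arithmetic. A minimal sketch (lv_date and lv_last are hypothetical helper variables; rec_dates comes from the report above):
data: lv_date type sydatum,
      lv_last type sydatum.
lv_date = rec_dates-begda.     "any date inside the month
lv_last = lv_date.
lv_last+6(2) = '01'.           "go to the 1st of that month
lv_last = lv_last + 32.        "always lands in the following month
lv_last+6(2) = '01'.           "1st of the following month
lv_last = lv_last - 1.         "last day of the original month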

Similar Messages

  • Split month, date and year

    How do I split the month, date and year from a date field? Is there any standard function module?

    You can do this.
    data: year(4) type c,
          month(2) type c,
          day(2) type c.
    year = sy-datum(4).
    month = sy-datum+4(2).
    day = sy-datum+6(2).
    write:/ year.
    write:/ month.
    write:/ day.
    Regards,
    Rich Heilman

  • How to Split the Date Column in OBIEE

    Hi,
    We have a date column named To_Date, with format MM/DD/YY HH:MM:SS.
    How do I split the date into YEAR, MONTH and DAY as new columns?
    Kindly help with this.
    Regards.,
    CHR
    Edited by: 867932 on Nov 23, 2011 10:18 PM

    Hi User,
    All 3 functions can be written in the RPD too. In the BMM layer, duplicate the date column -> go to the Column Mapping tab -> Expression Builder -> Select Functions -> Calendar Date/Time Functions -> select the DayOfMonth function. Your logical column's formula will then look like:
    DayOfMonth(YourDateColumn)
    The Month and Year functions can be used in the same way for the month and year columns.
    Rgds,
    Dpka

  • FM for splitting the date

    Hi Everyone,
    I need an FM to split the date into day, month and year.
    Can anyone help me out?
    Points will be rewarded.
    Regards,
    Syed

    Hi Syed,
    When something can be done with a few lines of code, you can simply write the code. Since sy-datum is stored in YYYYMMDD format anyhow, you can split it with a small piece of code like this:
    data :
      w_day type i,
      w_month type i,
      w_year type i.
    w_day = sy-datum+6(2).
    w_month = sy-datum+4(2).
    w_year = sy-datum(4).
    write :/ 'Day    :  ', w_day,
           /  'Month :  ', w_month,
           /  'Year    :  ', w_year.
    Regards,
    Swapna.
    Edited by: NagaSwapna Thota on Jul 8, 2008 11:07 AM

  • Problem in splitting uploaded data at the delimiter

    Hi All,
    I am uploading a TAB-separated file from the PC using GUI_UPLOAD. The file is uploaded correctly.
    In the internal table the data from the file appears as below:
    '56110020#40#39.00#10353#DE10Z370###Credit card spend for month January #################'
    For the tabs in the text file, the internal table shows '#'.
    Now I am using the SPLIT command to split the data into different fields (all are char type) at the separator, TAB or '#' in this case.
    SPLIT it_tab-record AT p_sep INTO fld1 fld2 fld3.
    But the data is NOT getting split at '#', and the result I am getting is: fld1 = 56110020#40#39.00 and no values in fld2 and fld3.
    I am working in ECC 6.0. We have upgraded from 4.6C to ECC 6.0. In 4.6C the same program works fine.
    Please help.
    regards,
    Gaurav

    Can you do it this way?
    PURC002|0000026071|_013890010310||00020|DE|AT|EINA|38 + 0,03 MTZ ab 15.05.04
    PURC002|0000026071|_013890010310||00020|DE|AT|EINA|38/Stk. + MTZ EUR 0,08 = EUR 0,46
    IF p_local EQ gc_x.
        CALL FUNCTION 'GUI_UPLOAD'
          EXPORTING
            filename                = gv_file
            filetype                = gc_filetype
          TABLES
            data_tab                = gt_text_file
          EXCEPTIONS
            file_open_error         = 1
            file_read_error         = 2
            no_batch                = 3
            gui_refuse_filetransfer = 4
            invalid_type            = 5
            no_authority            = 6
            unknown_error           = 7
            bad_data_format         = 8
            header_not_allowed      = 9
            separator_not_allowed   = 10
            header_too_long         = 11
            unknown_dp_error        = 12
            access_denied           = 13
            dp_out_of_memory        = 14
            disk_full               = 15
            dp_timeout              = 16
            OTHERS                  = 17.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                  WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
    *If the Application Server is selected on the selection screen
      ELSEIF p_server EQ gc_x.
    *   Read the data from the Application Server
        OPEN DATASET gv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT
                                WITH SMART LINEFEED.
      ENDIF.
    *If the Presentation Server is selected on the selection screen
      IF p_local EQ gc_x.
        LOOP AT gt_text_file INTO wa_text_file.
          CLEAR wa_text_table.
          SPLIT wa_text_file-line AT gc_comma  INTO   wa_text_table-objectid
                                                      wa_text_table-lifnr
                                                      wa_text_table-matnr
                                                      wa_text_table-ebelp
                                                      wa_text_table-tdname
                                                      wa_text_table-spras
                                                      wa_text_table-textid
                                                      wa_text_table-object
                                                      wa_text_table-line.
          APPEND wa_text_table TO gt_text_table.
        ENDLOOP.
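
    The root cause in the question above, though, is most likely the Unicode upgrade: in ECC 6.0 the tab character is only displayed as '#', so splitting at the literal '#' finds nothing. A minimal sketch that splits at the real tab character (fld1, fld2 and fld3 are the fields from the question; lv_line is a hypothetical helper variable):
    DATA lv_line TYPE string.
    lv_line = it_tab-record.
    "split at the actual TAB character, not at the '#' shown in the debugger
    SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab
          INTO fld1 fld2 fld3.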

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
    I am told that airplane mode is sufficient to keep the phone from roaming, but does this also apply to any background data usage by applications and such?
    If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, free of any extra roaming charges?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible. Otherwise, no.
    Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon. So if you are using it on a plane, in a hotel, at an internet cafe etc. that charges for wifi rather than offering it free, those charges apply. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on its cellular network (such as using a foreign SIM in a global phone), so these charges, if any, will not show up in the Verizon bill app. Having the phone in airplane mode prevents all cellular data traffic, so you should be fine.

  • Help needed with SQL to split huge data into two excel or text files.

    Hi,
    I have a table which has around 1,850,000 records. I want to split the data into two sets and export the data to .txt or Excel files.
    How can I break up the records using rownum or anything similar?
    select * from tablename where rownum<940000 fetched some records,
    but
    when I ran select * from tablename where rownum>940000 it did not fetch any records.
    Please guide me.

    when i gave select * from tablename where rownum>940000 it was not fetching records.
    rownum is assigned to a row only after the row satisfies the WHERE clause, so the first candidate row always gets rownum = 1 and a condition like rownum > 940000 can never be true. Try this instead:
    select * from (select tablename.*, rownum rn from tablename) where rn > 940000

  • How to export the text edit data to excel file without splitting the data in excel file?

    How do I export text-edit data to an Excel file without the data being split across rows in the Excel file?
    I have a requirement in SAP HR where the appraiser can add comments in the given area and export them to an Excel file. Currently the file is exported, but the comments get split into different rows.
    I want the entire comment to fit in one row.
    Please help.
    Thank you

    Hi,
    If your text-edit value is stored in the variable 'lv_string',
    then before exporting the value to Excel you have to remove CL_ABAP_CHAR_UTILITIES=>NEWLINE,
    that is, the '#' shown in the variable lv_string.
    For that, use code something like this:
    REPLACE ALL OCCURRENCES OF CL_ABAP_CHAR_UTILITIES=>NEWLINE IN lv_string WITH space.
    I think this will do the trick.
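
    Depending on how the export itself is built (for example a tab-delimited download), tab characters inside the comment can have the same effect as line breaks; a minimal sketch, assuming the text sits in lv_string, that strips both:
    DATA lv_string TYPE string.
    "remove line breaks so the comment stays in a single row
    REPLACE ALL OCCURRENCES OF cl_abap_char_utilities=>newline IN lv_string WITH space.
    "remove tabs so the comment also stays in a single cell of a tab-delimited file
    REPLACE ALL OCCURRENCES OF cl_abap_char_utilities=>horizontal_tab IN lv_string WITH space.
    CONDENSE lv_string.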

  • Regarding Payment Terms date in MIR0 Transaction

    Hi,
    This is regarding the payment terms date in the MIRO transaction.
    Actual scenario: pay on the 25th of the month following the delivery, or, if the 25th of the month following the delivery has already been exceeded, pay 10 days from the invoice date.
    Help is required with immediate attention.
    Brief description, example-wise:
    Case 1:
    We have a delivery in hand for which the billing date is the 25th of the following month. Briefly, if you raise a delivery in this month (on any day of the month, i.e. the 1st to the 30th), the payment must be made on the 25th of the very next month.
    Case 2:
    The invoice is generated and it must become due after 10 days.
    If the payment date (after 10 days) is earlier than the 25th of April, then prefer the 25th.
    If the payment date (after 10 days) is later than the 25th of April, then prefer the later date (> 25th April).
    How can this logic be included in the MIRO transaction? If any user exit is available for this, please let me know.

    You can do that in the transaction "Manage Business Partners" / BBPMAININT. Add the purchasing org and you will be able to complete this.
    Regards
    Kathirvel

  • How to split the data based on one column

    Dear All,
    I have the table data like this.
    type  quantity  revenue_mny  count  country
    a     10        10           2      India
    a     20        12           3      India
    b     30        15           1      India
    a     35        20           2      US
    b     20        10           1      US
    b     60        15           1      US
    I would like to split the data based on the type column.
    For each country, for type 'a', get the sum of revenue, count and quantity, and the same for 'b',
    and everything should come out in one row per country.
    The output should look like:
    country  revenue_mny(for a)  quantity(for a)  count(for a)  revenue_mny(for b)  quantity(for b)  count(for b)
    India    22                  30               5             15                  30               1
    US       20                  35               2             25                  80               2
    I tried the query below; it is not splitting the data for each country into one row.
    select country,
    sum(case when type='a' then revenue_mny else 0 end) revenue_mny_a,
    sum(case when type='b' then revenue_mny else 0 end) revenue_mny_b,
    sum(case when type='a' then quantity else 0 end) quantity_a,
    sum(case when type='b' then quantity else 0 end) quantity_b
    from test
    group by country
    Please, I need your help.

    Like this?
    with t as (
    select 'a' type, 10 quantity, 10 revenue_mny, 2 cnt, 'India' country from dual union all
    select 'a', 20, 12, 3, 'India' from dual union all
    select 'b', 30, 15, 1, 'India' from dual union all
    select 'a', 35, 20, 2, 'US' from dual union all
    select 'b', 20, 10, 1, 'US' from dual union all
    select 'b', 60, 15, 1, 'US' from dual
    )
    select country,
    sum(case when type='a' then revenue_mny else 0 end ) revenue_mny_a,
    sum(case when type='a' then quantity else 0 end) quantity_a,
    sum(case when type='a' then cnt else 0 end) cnt_a,
    sum(case when type='b' then revenue_mny else 0 end ) revenue_mny_b,
    sum(case when type='b' then quantity else 0 end) quantity_b ,
    sum(case when type='b' then cnt else 0 end) cnt_b
    from t
    group by country;
    Result:
    COUNTRY  REVENUE_MNY_A  QUANTITY_A  CNT_A  REVENUE_MNY_B  QUANTITY_B  CNT_B
    India    22             30          5      15             30          1
    US       20             35          2      25             80          2
    Or you can do it with a decode instead of a case. The result will be the same:
    with t as (
    select 'a' type, 10 quantity, 10 revenue_mny, 2 cnt, 'India' country from dual union all
    select 'a', 20, 12, 3, 'India' from dual union all
    select 'b', 30, 15, 1, 'India' from dual union all
    select 'a', 35, 20, 2, 'US' from dual union all
    select 'b', 20, 10, 1, 'US' from dual union all
    select 'b', 60, 15, 1, 'US' from dual
    )
    select country,
    sum(decode(type,'a',revenue_mny,0)) revenue_mny_a,
    sum(decode(type,'a',quantity,0)) quantity_a,
    sum(decode(type,'a',cnt,0)) cnt_a,
    sum(decode(type,'b',revenue_mny,0)) revenue_mny_b,
    sum(decode(type,'b',quantity,0)) quantity_b,
    sum(decode(type,'b',cnt,0)) cnt_b
    from t
    group by country;
    (I changed the table name from TEST to T and the column name from COUNT to CNT, because you should not use reserved words as table or column names.)
    Edited by: hm on 09.10.2012 06:17

  • Split Spry.Data and Spry.Data.Region from SpryData.js?

    I don't like Spry.Data.Region or anything like Spry.*.Region because it makes it hard to customize things using JavaScript.
    But Spry.Data with the Observer/Notify feature is a great one. In fact, using SpryData.js, I can write my own "render html object" more easily.
    SpryData.js is big (127 KB); even when compressed it is still 65 KB. That is why I need to split Spry.Data and Spry.Data.Region out of SpryData.js, so whoever wants to use only Spry.Data, like me, gets less traffic.
    I have tried removing all the Spry.Data.Region content from SpryData.js. Surprise: some code in Spry.Data calls into Spry.Data.Region, so this design has a problem.
    Can anyone help with splitting them?

    Found this in the archives:
    <div spry:if="{ds_RowCount} == 0" class="trigger" style="display:none"></div>
    Basically a hack to point the tooltip to a hidden div so no errors are thrown.

  • How to Split Column Data Based on Identifier

    Hi,
    I am using an Oracle database.
    I have data in this format in my column: 1234~2345~3456~4567.
    I need a query to split the data in the column based on the identifier '~', so that I can pick out the value after the second occurrence of the identifier.
    May I know how I can achieve this?
    Edited by: 962987 on Oct 3, 2012 12:11 PM

    Hi,
    Welcome to the forum!
    Whenever you have a question, please post CREATE TABLE and INSERT statements for a little sample data, and the results you want from that data. For example, given this data
    CREATE TABLE     table_x
    (       my_column     VARCHAR2 (40)
    );
    INSERT INTO table_x (my_column) VALUES ('1234~2345~3456~4567');
    INSERT INTO table_x (my_column) VALUES ('just~2 parts');
    I think you're asking for these results:
    PART_3     MY_COLUMN
    3456       1234~2345~3456~4567
               just~2 parts
    I assume that, if the string doesn't contain at least 2 '~'s, you want NULL returned. It's a good idea to explain what you want for special cases like that, and include examples in your sample data and results.
    Not all versions of Oracle are exactly the same. In fact, they are all different. If you want the best solution that works on your version, then say what version that is.
    The following query will work in Oracle 10.1 and higher:
    SELECT  REGEXP_SUBSTR ( my_column
                    , '[^~]+'
                    , 1
                    , 3     -- 3rd occurrence (after 2nd delimiter)
                    )     AS part_3
    ,     my_column          -- if wanted
    FROM    table_x
    ;
    See the forum FAQ {message:id=9360002}
    Edited by: Frank Kulash on Oct 3, 2012 3:24 PM
    Added sample data and results.

  • Split TempDB Data file into multiple files

    Hey,
    I have been seeing TempDB contention in memory on our SQL Server 2012 Enterprise Edition with SP2, and I need to split the TempDB data file into multiple files.
    Could someone please help me verify the following information:
    1] We are on SQL Server 2012 Enterprise Edition with Service Pack 2, but under CAL licensing for SQL Server 2012 Enterprise Edition we are limited to 20 logical processors instead of 40. Our SQL Server is configured on NUMA nodes, and with this limitation SQL Server uses only 2 NUMA nodes in production. There are 10 logical CPUs evenly assigned to each NUMA node. Microsoft recommends that if SQL Server is configured on NUMA nodes and we have 2 NUMA nodes, then we may add two data files at a time. Please let me know: should I add two TempDB data files at a time?
    2] We have the TempDB data and log files both on the same drive of the SQL Server. When I split TempDB into two data files, I can keep them on the same drive. What is your recommendation: should I create the TempDB data files on the same drive or on separate disks?
    3] What would be the back-out plan for splitting TempDB into multiple files? Please let me know if someone has a better back-out plan than this:
                1] Run a script that recreates the tempdb database with a single file
    2] Restart the SQL Server service in order to apply the change
    Your help will be appreciated.
    Thanks,
    Daizy

    Tom, I am seeing TempDB contention on the production server when there is a heavy load on SQL Server. We are also experiencing overall system slowness. Please look at the PAGELATCH wait statistics on our server and advise:
    wait_type      waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
    PAGELATCH_UP   2680948              3609142       10500             508214
    PAGELATCH_SH   1142213              1338451       8609              324538
    PAGELATCH_NL   0                    0             0                 0
    PAGELATCH_KP   0                    0             0                 0
    PAGELATCH_EX   44852435             7798192       9886              6108374
    PAGELATCH_DT   0                    0             0                 0
    Thanks ,
    Daizy

  • Adding Multiple files to many Sharepoint Sites based on the Sites Code (Splitting the data using a code)

    Hi,
    I'm pretty new to SharePoint and wanted to ask a question to see if it can actually do what we need it to do.
    We have a top site within a collection and under it 400 sites for different sections of the business (external). Lots of teams create documents for all these sites and use the code of the site in each document, so we know which business site it is going to. For example, 67890_BusinessReportMay.docx: we know this document should be heading to site 67890.
    We have lots of services sending data to these 400 business sites. If we were to handle them manually it would take most of our time to go to each site and upload the documents; even if we could do all of a site's documents in one multiple upload, we would still have to go to each site to do it.
    Is there an easy answer to this? At present I have folders to match each of the 400 SharePoint sites and I use an Integration Services package to move all the documents into these 400 matching folders. I do this using the code at the start of each file name. If a file doesn't have a code it is put into an error folder and reported on.
    Then I want something to automatically go through each of these folders and load the files into the SharePoint sites' document libraries.
    OR
    Say you have 600 files, all going to the 500 sites on SharePoint, in one folder. The files contain metadata with the site's code. Using this, can you split the files and automatically pull them across to the SharePoint sites without having to go into each SharePoint site?
    I'm thinking that you can't do this and I'm hoping I'm wrong. Any help would be greatly appreciated.
    Debbie

    Hi Debbie,
    According to your description, my understanding is that you want to move the documents to different sites based on the code in the name of the files.
    I recommend creating a workflow and setting it to start when an item is created to achieve this goal.
    As there is no built-in action which can move documents across sites, we need to use the custom activity 'Copy List Item Extended', which needs to be downloaded from
    http://spdactivities.codeplex.com/.
    Please follow the steps in the link below to deploy the custom activity to SharePoint:
    http://social.technet.microsoft.com/Forums/en-US/82609160-152f-461a-9a19-0e996f5a309b/sharepoint-2010-archiving-the-files-from-one-document-library-to-another-document-library-in
    After that, we can create a workflow to meet the need.
    Here are the detailed steps to create the workflow:
    1. Create a column of type single line of text (called NO, for example) in the library where the documents are uploaded, and enter the code manually in this column when uploading the documents. (We could also use a workflow to extract the code from the name of the documents if the codes are always 5 digits.)
    2. Create a workflow associated with the library and set the workflow to start when an item is created.
    3. Select "If current item field equals value" and change the settings to "If Current Item: NO equals 67890".
    4. Select "Copy List Item Extended" and change the settings to "Copy item in Current Item to list at site 67890 URL".
    5. Select "Delete Item" and change the settings to "Delete Item in Current Item".
    6. Repeat steps 3 to 5 with other conditions to copy the documents to the other sites and delete the current documents.
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Regarding Upload of data into SAP from Application server using DATA SETS

    Hi all,
    I have a problem when uploading data from the application server [.txt file] into SAP using OPEN DATASET, READ DATASET and CLOSE DATASET; it is now going to a dump. The file splits the fields using a tab delimiter.
    During uploading, some junk values are coming in with '#', so it goes to a dump and gives the following type of error:
    Runtime Errors         CONVT_NO_NUMBER
    Exception              CX_SY_CONVERSION_NO_NUMBER
    Unable to interpret "#0#0#0#0#0#0#0#" as a number.
    Can anyone solve the above issue, as I need it urgently?
    Thanks in advance.
    Thanks & Regards,
    Rayeezuddin.

    Hi Heilman,
    Thanks for the reply and for the effort you are putting into solving my issue.
    I had done the same thing you posted prior to your reply, but I am still getting a dump.
    FORM f_get_legacy_data .
      DATA: l_tab type xstring,
            l_tab1(1) type c,
            s type x.
      move '23' to l_tab.
      move l_tab to l_tab1.
    * OPEN DATASET v_pfile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      OPEN DATASET v_pfile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc <> 0.
       MESSAGE text-207 TYPE c_e.
    ELSE.
      DO.
        CLEAR wa_input1.
    *  READ DATASET v_pfile INTO wa_input1.
        READ DATASET v_pfile INTO wa_read.
        IF sy-subrc EQ 0.
          move wa_read to i_txt-txt.
          append i_txt.
        ELSE.
          EXIT.
        ENDIF.
      ENDDO.
      s = '09'.
      loop at i_txt.
    *  move i_txt-txt+10(1) to l_tab1.
    *  move '#' to l_tab1.
    *  split i_txt-txt at s into wa_input1-vbeln wa_input1-posnr
        split i_txt-txt at '#' into wa_input1-vbeln wa_input1-posnr
                                       wa_input1-per0bal wa_input1-per1val
                                       wa_input1-per2val wa_input1-per3val
                                       wa_input1-per4val wa_input1-per5val
                                       wa_input1-per6val wa_input1-per7val
                                       wa_input1-per8val wa_input1-per9val
                                       wa_input1-per10val wa_input1-per11val
                                       wa_input1-per12val.
        APPEND wa_input1 TO i_input1.
        CLEAR wa_input1.
      endloop.
    ENDIF.
      CLOSE DATASET v_pfile.
      IF i_input1[] IS INITIAL.
    * If there is no data in the legacy file or if the structure of the
    * legacy data does not match with that of internal table error message
    * need to be displayed.
        MESSAGE text-211 TYPE c_e.
    *&--begin of change--
      ELSE.
        CLEAR: wa_input, wa_input1.
        LOOP AT i_input1 INTO wa_input1.
    *     MOVE wa_input1 TO wa_input.
          MOVE: wa_input1-vbeln TO wa_input-vbeln,
                wa_input1-posnr TO wa_input-posnr,
                wa_input1-per0bal TO wa_input-per0bal,
                wa_input1-per1val TO wa_input-per1val,
                wa_input1-per2val TO wa_input-per2val,
                wa_input1-per3val TO wa_input-per3val,
                wa_input1-per4val TO wa_input-per4val,
                wa_input1-per5val TO wa_input-per5val,
                wa_input1-per6val TO wa_input-per6val,
                wa_input1-per7val TO wa_input-per7val,
                wa_input1-per8val TO wa_input-per8val,
                wa_input1-per9val TO wa_input-per9val,
                wa_input1-per10val TO wa_input-per10val,
                wa_input1-per11val TO wa_input-per11val,
                wa_input1-per12val TO wa_input-per12val.
          APPEND wa_input TO i_input.
          CLEAR: wa_input, wa_input1.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    " GET_LEGACY_DATA
    When I give the input as
    Directory  /pw/data/erp/D5S/fi/up
    Name: Backlog1616_D1S.txt
    BKCOPO1           BKSOI1        1000.00                100.00                 -200.00                0                      0                      0                      0                      0                      0                      0
    BKSOPO2           BKSOI2        2222.22                0                      300                    0                      0                      0                      0                      0                      0                      0
    BKSOPO3           BKSOI3        -3000                  400                    0                      0                      0                      0                      0                      0                      0                      0
    BKSOPO4                         4000.55                500                    600                    0                      0                      0                      0                      0                      0                      0
    0040000000        000010        -100                   -110                   -110                   0                      0                      -600                   0                      0                      0                      0
    0040000001        000010        -110                   -110                   0                      0                      0                      -610                   0                      0                      0                      0
    I am getting i_input internal table populated as follows at the end of that subroutine.
    After appending [APPEND wa_input TO i_input].
    BKCOPO1#BK|000000|            0.00 |            0.00 |            0.00    |
    BKSOPO2#BK|000000|            0.00 |            0.00 |            0.00    |
    BKCOPO3#BK|000000|            0.00 |            0.00 |            0.00    |
    BKCOPO4##4|000000|            0.00 |            0.00 |            0.00    |
    0040000000|000000|            0.00 |            0.00 |            0.00    |
    0040000001|000000|            0.00 |            0.00 |            0.00    |
    And the output shows erroneous records: 6
    No entries inserted.
    Can you solve this issue?
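
    As in the GUI_UPLOAD thread above, the '#' characters are only the display form of tab characters, so on a Unicode system the split has to be made at the real tab. A minimal sketch, assuming the declarations from the posted form (v_pfile, wa_input1 and i_input1 come from the question; lv_line, lv_vbeln and lv_posnr are hypothetical helpers, and the remaining columns are handled the same way):
    DATA: lv_line  TYPE string,
          lv_vbeln TYPE string,
          lv_posnr TYPE string.
    OPEN DATASET v_pfile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET v_pfile INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        "split at the actual TAB character instead of the literal '#'
        SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab
              INTO lv_vbeln lv_posnr.          "remaining columns omitted here
        wa_input1-vbeln = lv_vbeln.
        wa_input1-posnr = lv_posnr.
        APPEND wa_input1 TO i_input1.
        CLEAR wa_input1.
      ENDDO.
      CLOSE DATASET v_pfile.
    ENDIF.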
