Adding leading zeros before data is loaded into the DSO

Hi,
In the field PROD_ID below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character. If the leading zeros are missing, activation of the DSO fails and I have to add them manually in the PSA table. I want to add the leading zeros, if they are missing, before the data is loaded into the DSO. The total character length is 40, so e.g. if the value is 1502 there should be 36 zeros in front of it, and if the value is 265721 there should be 34 zeros. Only two kinds of values arrive, with length 4 or 6, so 34 or 36 zeros are always needed in front of them when the zeros are missing.
Can we use the function module CONVERSION_EXIT_ALPHA_INPUT? Since this is a CHAR field I'm not sure how to use it in that case. Do I need to convert it to an integer first?
Can someone please give me sample code? We're using a BW 3.5 data flow to load data into the DSO. Please give sample code and tell me where the code needs to go, in the rule type or in the start routine.

Hi,
Can you check at the InfoObject level which conversion routine it uses?
Use transaction RSD1, enter your InfoObject and display it.
At the DataSource level you can also see whether the external/internal format is maintained.
If your InfoObject uses the ALPHA conversion routine, it will get the leading zeros automatically.
Also check how the data is coming from the source, e.g. with RSA3.
If you are receiving this issue only for some records, then you need to check those records.
Thanks
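For reference, here is a minimal sketch of how the padding could be done in a BW 3.5 transfer-rule start routine with CONVERSION_EXIT_ALPHA_INPUT. The table name DATAPAK and the field name PROD_ID are assumptions; replace them with the names from your actual transfer structure.

* Minimal sketch (BW 3.5 start routine). DATAPAK and PROD_ID are
* assumed names - use the ones from your own transfer structure.
FIELD-SYMBOLS: <ls_rec> LIKE LINE OF datapak.

LOOP AT datapak ASSIGNING <ls_rec>.
* pad only purely numeric values; ALPHA_INPUT fills the 40-character
* field up to its full length with leading zeros
  IF <ls_rec>-prod_id CO ' 0123456789'.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = <ls_rec>-prod_id
      IMPORTING
        output = <ls_rec>-prod_id.   " '1502' -> 36 zeros + '1502'
  ENDIF.
ENDLOOP.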

Similar Messages

  • Truncation of leading zeros when downloading into Excel - OLE objects

    Hi,
    Can anyone help me with this?
    I am using OLE objects to download data into an Excel sheet. Data with leading zeros is getting truncated in Excel.
    Ex: the report output shows the plant number as 0002, but when I download to Excel the plant value becomes 2.
    I would like to have it as 0002 in Excel.
    I have declared WERKS as CHAR of length 4. I am using OLE objects ("OLE2_OBJECT") for downloading into the Excel sheet and cannot use any other FMs to download to Excel, as I am modifying this program, not creating it.
    Thanks in advance.
    K.Nirmala

    Hi Nirmala,
    While downloading to the Excel sheet, you need to change the number format of the cell from General to Text; then the leading zeros won't get deleted. For that you need to set the property of the cell. Please check this sample code:
    INCLUDE OLE2INCL.
    tables : zobrent.
    data : it_kna1 type table of zobrent with header line.
    * handles for OLE objects
    DATA: H_EXCEL TYPE OLE2_OBJECT,        " Excel object
          H_MAPL TYPE OLE2_OBJECT,         " list of workbooks
          H_MAP TYPE OLE2_OBJECT,          " workbook
          H_ZL TYPE OLE2_OBJECT,           " cell
          H_F TYPE OLE2_OBJECT.            " font
    DATA  H TYPE I.
    DATA: cell1 TYPE ole2_object.
    *&   Event START-OF-SELECTION
    START-OF-SELECTION.
      select * from zobrent into table it_kna1
               where zopanid = '10001'
                and zo_brent = '050'.
    * start Excel
      CREATE OBJECT H_EXCEL 'EXCEL.APPLICATION'.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_EXCEL  'Visible' = 1.
    * get list of workbooks, initially empty
      CALL METHOD OF H_EXCEL 'Workbooks' = H_MAPL.
      PERFORM ERR_HDL.
    * add a new workbook
      CALL METHOD OF H_MAPL 'Add' = H_MAP.
      PERFORM ERR_HDL.
    * output column headings to active Excel sheet
      PERFORM FILL_CELL USING 1 1 1 'EDate'.
      PERFORM FILL_CELL USING 1 2 1 'Brent'.
      PERFORM FILL_CELL USING 1 3 1 'Zopanid'.
      PERFORM FILL_CELL USING 1 4 1 'Contract Type'.
      PERFORM FILL_CELL USING 1 5 1 'Price Type'.
      PERFORM FILL_CELL USING 1 6 1 'Installation Type'.
      PERFORM FILL_CELL USING 1 7 1 'Volume'.
      PERFORM FILL_CELL USING 1 8 1 'AQ'.
      PERFORM FILL_CELL USING 1 9 1 '00000123'.
      LOOP AT IT_KNA1.
    * copy values to active EXCEL sheet
        H = SY-TABIX + 1.
        PERFORM FILL_CELL USING H 1 0 IT_KNA1-zo_effdat.
        PERFORM FILL_CELL USING H 2 0 IT_KNA1-zo_brent.
        PERFORM FILL_CELL USING H 3 0 IT_KNA1-zopanid.
      ENDLOOP.
      CALL METHOD OF h_excel 'Cells' = cell1
        EXPORTING
          #1 = 1
          #2 = 1.
      FREE OBJECT H_EXCEL.
      PERFORM ERR_HDL.
      if sy-subrc eq 0.
       write : / 'year'(001).
      endif.
    *&      Form  FILL_CELL
    * sets cell at coordinates i,j to value val, boldtype bold
    FORM FILL_CELL USING I J BOLD VAL.
      CALL METHOD OF H_EXCEL 'Cells' = H_ZL EXPORTING #1 = I #2 = J.
      PERFORM ERR_HDL.
      GET PROPERTY OF H_ZL 'Font' = H_F.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_F 'Bold' = BOLD .
      PERFORM ERR_HDL.
    ***Changing the format of the cell from General to Text
      SET PROPERTY OF H_ZL 'NumberFormat' = '@'.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_ZL 'Value' = VAL .
      PERFORM ERR_HDL.
    ENDFORM.
    *&      Form  ERR_HDL
    FORM ERR_HDL.
    IF SY-SUBRC <> 0.
      WRITE: / 'Fehler bei OLE-Automation:'(010), SY-SUBRC.
      STOP.
    ENDIF.
    ENDFORM.                    " ERR_HDL
    Just paste this code into a sample program and try it.
    Please reward if you found it helpful.

  • Data loading from DSO to Cube

    Hi,
    I have a question.
    In the book TBW10 I read about the data load from a DSO to an InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct value of 30."
    My question: the cube already has the value 10, so if we are sending 10, -10 and 30 (the delta), shouldn't the total be 40 instead of 30?
    Can someone please explain this to me?
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value in the after image, 30, will be added.
    So it will be 10 - 10 + 30 = 30.
    Thank you.
    Regards,
    Vinod

  • Partial data loading from dso to cube

    Dear All,
    I am loading data from a DSO to the cube through a DTP. The problem is that some records get added to the cube through data packet 1 (around 50 crore records), while through data packet 2 no records get added.
    It is a full load from DSO to cube.
    I have tried deleting the request and executing the DTP again, but the same number of records gets added to the cube through data packet 1, and after that no records are added; the request stays in yellow status only.
    Please suggest.

    Nidhuk,
    The data load transfers package by package. It sounds like yours got stuck in the second package or something similar. Suggest you check the package size and try increasing it to a higher number to see if anything changes. 50 records per package is kind of low; your load should not spread out into too many packages.
    Regards, Jen

  • How to delete the decimal point and add leading zeros....

    Hi,
    I have a requirement in a report, i.e.
            Present value: 44567.98
            Expected value: 0000004456798
    How do I remove the decimal point from the present value and add the leading zeros? I tried with CONVERSION_EXIT... but it is not giving the expected result. Please help.
    Thanks in advance.
    Regards,
    Kumar.

    Hi,
    Use SPLIT and CONCATENATE:
    Eg:  SPLIT l_v AT '.' INTO l_v1 l_v2.
         CONCATENATE l_v1 l_v2 INTO l_v.
    For adding leading zeros use FM CONVERSION_EXIT_ALPHA_INPUT.
    Eg:
    data: tknum type vttk-tknum value '99156'.
    call function 'CONVERSION_EXIT_ALPHA_INPUT'
         exporting
              input  = tknum
         importing
              output = tknum.
    Sri
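    Putting the two steps together, here is a small sketch for the exact values from the question (the variable names are illustrative, and the 13-character length matches the expected output 0000004456798):

    * remove the decimal point, then pad to the 13-character target length
    DATA: l_val(13)  TYPE c VALUE '44567.98',
          l_int(13)  TYPE c,
          l_frac(13) TYPE c.

    SPLIT l_val AT '.' INTO l_int l_frac.        " '44567' and '98'
    CONCATENATE l_int l_frac INTO l_val.         " '4456798'

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'  " pads to the field length
      EXPORTING
        input  = l_val
      IMPORTING
        output = l_val.                          " '0000004456798'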

  • Adding Leading Zero's to a variable

    Hi friends,
    1. I want to add leading zeros to a field.
    Example: LIFNR. If its value in a variable is 16987, I want to convert it to the standard format (0000016987).
    2. I want to convert a date to the system's format. In the development and quality servers it is in different formats and I'm facing problems while uploading data using BDC. Also, how can I identify which format it is in, in production?
    How do I do this?

    Hi,
    Use these function modules for your problems:
    (1) For adding leading zeros (SAP internal format):
    CONVERSION_EXIT_XXXXX_INPUT
    (2) For changing a date into the system's internal format:
    CONVERSION_EXIT_XXXXX_INPUT
    You can find the conversion routine for the corresponding domain of each field as follows:
    Domain -> Definition -> Output Characteristics -> Conversion routine.
    Double-click on it and it will show the corresponding conversion function modules.
    Please add your rewards.
    With regards,
    Rajesh
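    As a concrete illustration of point (1), the ALPHA routine behind LIFNR can be called directly (values taken from the question). For point (2), CONVERT_DATE_TO_INTERNAL is one commonly used option that interprets the external date according to the user's date format settings, which also covers the "different format per system" concern; the sample date below assumes a DD.MM.YYYY user format.

    DATA: lv_lifnr TYPE lifnr VALUE '16987'.

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_lifnr
      IMPORTING
        output = lv_lifnr.               " '0000016987'

    * external date (user's own format) -> internal YYYYMMDD for the BDC
    DATA: lv_date_ext(10) TYPE c VALUE '31.12.2010',
          lv_date_int     TYPE sy-datum.

    CALL FUNCTION 'CONVERT_DATE_TO_INTERNAL'
      EXPORTING
        date_external            = lv_date_ext
      IMPORTING
        date_internal            = lv_date_int
      EXCEPTIONS
        date_external_is_invalid = 1
        OTHERS                   = 2.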

  • How to make data loaded into cube NOT ready for reporting

    Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded into a cube is available for reporting. Now, if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written only against the cube, not against the aggregate, so you only know at runtime whether a query hits a particular aggregate. That means that whether a query gets its data from the aggregate or from the cube, it should ultimately return the same data in both cases. If a request is added to the cube but not to the aggregate, then these two objects contain different data. The system takes the safer route of not making the un-rolled-up data visible at all, rather than serving inconsistent data.
    Hope this helps...

  • Leading zeros-master data

    Hi All,
    We have a characteristic 'Material' (custom master data InfoObject) of length 10, conversion routine ALPHA, data type CHAR, output length 10.
    The requirement is that the material number should be displayed as a 10-digit value in the BEx report.
    But for some materials we don't have 10 digits, for example material 123 (this 123 should be shown as 0000000123 in the query output).
    When I display the InfoCube contents or the master data contents I can see 123 as 0000000123, but with F4 it shows 123 as 123, and in the query output also as '123'.
    1. How is the data actually stored in the InfoCube and master data tables (123 or 0000000123)?
    2. One solution to display the material as 10 characters in the report is a transfer/update routine that adds the leading zeros.
    Is there any other way to achieve this without using a routine? Kindly let me know.
    Correct me if my assumption above is wrong.
    Thanks

    Hi Murali,
    The ALPHA conversion is used to convert data between an internal and an external view. The internal view is how the data is physically stored in the database. The external view is, for example, how the data is displayed in reports.
    In some places the ALPHA conversion (if defined for the characteristic) is applied automatically and you cannot change that, which means that in these cases you can only see the external format. BEx reporting is one such place, and the F4 help is another.
    In some places within the BW system (not the end-user view) you have the option to see the data in the internal format. One of these places is the cube content display, where you have a flag "Do not use any conversion". If this flag is set you will see the data in the internal format, but this is NOT the format a user will see in reporting.
    In your case the internal format is the value '0000000123' and the external format is '123'.
    Hope that helps
    Regards
    Adios
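    For what it's worth, the two directions of the ALPHA routine can be tried in a small test program; this is only a sketch, with a 10-character field mirroring the InfoObject from the question:

    DATA: lv_matnr(10) TYPE c VALUE '123'.

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'    " external -> internal
      EXPORTING
        input  = lv_matnr
      IMPORTING
        output = lv_matnr.                         " '0000000123' (stored)

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'   " internal -> external
      EXPORTING
        input  = lv_matnr
      IMPORTING
        output = lv_matnr.                         " '123' (displayed)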

  • @ adding leading zeros to a number

    Hi,
    How do I go about adding leading zeros to an 8-digit number?
    e.g. DATA: number(8) TYPE n.
         number = 16.
    How do I go about converting this to '00000016'?
    Note that the value in the variable number would be read from a file (inbound program).

    Hi,
      Use the function module CONVERSION_EXIT_ALPHA_INPUT:
    data: tknum type vttk-tknum value '99156'.
    call function 'CONVERSION_EXIT_ALPHA_INPUT'
         exporting
              input  = tknum
         importing
              output = tknum.
    Regards
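    A small aside that may help here (just a sketch, with illustrative variable names): a field of type N is right-aligned and zero-padded automatically on assignment, so the function module is mainly needed when the value arrives in a CHAR field, e.g. after reading it from the file.

    DATA: number(8) TYPE n.
    number = 16.                 " NUMC: already stored as '00000016'

    * value read from the file into a CHAR field instead
    DATA: lv_char(8) TYPE c VALUE '16'.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_char
      IMPORTING
        output = lv_char.        " '00000016'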

  • Aggregating data loaded into different hierarchy levels

    I have a problem when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here must be 30,00 not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, which are then added as children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February, for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation.

  • Selective Deletion Before Data Load

    Hi Experts - I need to do a data load into an Oracle data warehouse. Before loading the data, I need to do some selective deletion from the target table.
    In the source dataset I have a date column from which I get a max and a min date. I need to delete the data from the target lying between this min and max date.
    Any idea how to do this selective deletion?
    Thanks
    R

    Create a workflow, and declare two local variables, $DateMin and $DateMax, of either date or datetime datatypes, as appropriate.  Create a script:
    $DateMin = sql('DS','select min([datetime field]) from [incoming table]');
    $DateMax = sql('DS','select max([datetime field]) from [incoming table]');
    Add a dataflow to your workflow, and connect it downstream of the script.  Add two parameters to the dataflow -- let's say you call them $P_DateMin and $P_DateMax. Back in your workflow, in the "Calls" tab of the Variables & Parameters window, set the mapping of the two dataflow input parameters to your two local workflow variables.
    In your dataflow: perform a selection of the primary key (the column(s) which constitute the pk) of your target table, filtering on your two input parameter values ($P_DateMin and $P_DateMax). If you want to be on the safe side in terms of preventing blocking issues, send these records into a Data Transfer transform (file preferred, but up to you). Then, downstream from the Data Transfer transform, send the records into a Map Operation transform, mapping 'Normal' to 'Delete'. Then, simply send them into your target table.
    You could, of course, just write a SQL script to delete the records, but those are to be avoided as breaking lineage & impact chains.
    If all your date or datetime stamp fields on your target table are "whole" dates, with no time portion, and you have a smallish number of dates between your min. and max. dates, and you have a large number of records to delete between those dates, and your target table has an index on the date stamp column, then another approach would be to generate records, one per day, using a Date Generation transform, still making use of your two dataflow parameters. You'd declare the date field so generated to be the (false) primary key, map the records to deletes w/ the Map Operation transform, and then send them into your target, with the "Use input keys" option selected.

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using rules files, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools; there are also many other ways to achieve the same.
    - Krish

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
    I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps/documents or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
    In order to load into SAP you have the following options
    1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc., ) You can generate and send IDocs as messages to the SAP Target using BODS.
    2. Use LSMW programs to load into SAP Target. These programs will require input files generated in specific layouts generated using BODS.
    3. Direct Input - The direct input method is to write ABAP programs targeting specific tables. This approach is very complex, so a lot of thought needs to go into it.
    The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.,
    However, the data load into SAP needs to be object specific. So targeting merely the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned is related to articles. These tables will hold sales document data for already created articles. So if you want to specifically target these tables, then you may need to prepare an LSMW program for the purpose.
    To answer your question on whether it is possible to load objects like Materials, customers, vendors etc using BODS, it is yes you can.
    Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
    The list is endless.........
    In order to achieve this, you will need to get the functional design consultants to provide ETL mapping from the legacy data to the IDoc target schema and fields (better to also have the technical table names and fields). You should then prepare the data after putting it through the standard check-table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for loading into SAP using LSMW programs or generate IDoc messages to the target SAP system.
    If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, like RFC connectivity, authorizations etc., needed for BODS to successfully send IDocs into the SAP target.
    Do let me know if you need more info on any specific queries or issues you may encounter.
    kind regards
    Raghu

  • Leading zero before decimal point

    Hi,
    I am trying to format a number so that there is a leading zero before the decimal point.
    I am trying
    select to_char(.75,'999,999.99') from dual;
    But this gives me only .75 as output.
    I want 0.75
    Is there any easy way to get it (I guess it would be possible with all sorts of decode and lpad combinations, but I want to avoid that! I'm sure there must be some easy way!)

    this?
    SQL> SELECT TO_CHAR('.75','990.99') FROM dual;
    TO_CHAR
       0.75
    SQL> ed
    Wrote file afiedt.buf
      1* SELECT TO_CHAR('.75','999.99') FROM dual
    SQL> /
    TO_CHAR
        .75
      1* SELECT TO_CHAR('.75','099.99') FROM dual
    SQL> /
    TO_CHAR
    000.75

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts,
    I created a job with a Validation transform. Data that passes the validation is loaded into the Pass table and data that fails is loaded into the Failed table.
    My requirement is: if data was loaded into the Failed table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me on this error.
    Thanks in advance
    PrasannaKumar

    Hi Dirk Venken,
    I got the solution. The mistake I made was that the query was not correct for MySQL.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Error query:
    sql('MySQL', 'delete table world.customer_salesfact_details')
    Thanks for your concern
    PrasannaKumar
