Delimiter in the Source Data

Hi All,
We are using BODS 3.2 on linux.
How should we handle the case where the delimiter character is present in the source data?
For example:
We have a CSV file as the source. The file has a comma ',' as part of the data in one of the columns. When we execute the job it throws this error:
"A row delimiter was seen for row number <1> while processing column number <n> in file"
How do we handle this (a column delimiter present in the data)?

Hello
I am facing the same issue; the client doesn't actually know much about their data.
How should we handle a delimiter that is present in the source data (flat file) through BODS?
We are on SAP BW 7.4 & BODS 4.2.
Any solution for this? It is very urgent.
Thanks in advance
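One common way to handle this, assuming the file can be produced with a text qualifier, is to enclose the affected fields in double quotes and set the text delimiter to " in the BODS file format definition. A sketch of the idea, with an invented column layout:
    1001,"Smith, John",New York     <- the comma inside the quoted value is treated as data
    1001,Smith, John,New York       <- unquoted, the same row parses into one column too many
If the file cannot be changed, another option sometimes used is to read each line as a single wide column and split it in a Query transform (for example with word_ext or substr), but that only works when the extra delimiters follow a predictable pattern.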

Similar Messages

  • User View is not reflecting the source data - Transparent Partition

We have transparent partition cubes. We recently added new fiscal year details to the cube (the user view as well as the source data cube). We loaded the data to the source data cube. From the user view we tried to retrieve data and it shows 0's, but the data is available in the source data cube. Could anyone please provide information on what might be the issue?
    Thanks!

    Hi-
If you haven't added the new member in the partition area, then Madhvaneni's advice is what you should follow, because if you haven't added the member, the target can't read the source.
If you have already added the new member in the partition area and the data still won't show up, it is sometimes worth re-saving the partition and seeing the outcome.
    -Will

  • How to handle duplicate Primary Key entries in the Source data

    This is my first experience with ODI.
    I receive Source data from the customer that includes a one letter designation, ACTION_CODE, in each record of data as to the disposition of the record:
    ‘R’ represents Re-issue in which case I’m to modify the corresponding Target record based on the Primary Key.
    ‘N’ represents an Insert in which case I’m to insert a new record into the Target.
    ‘D’ represents a delete in which case I’m to delete the record with the corresponding Primary Key from the Target.
    The Source data comes in an XML file and the Target is an Oracle DB.
    I have chosen the IKM Oracle Incremental Update (MERGE) Knowledge Module.
I filter ACTION_CODE to collect only the records that are ‘N’ or ‘R’, and I exclude ACTION_CODE from the mapping. But since the same source set may contain an ‘N’ and an ‘R’ with the same primary key, I receive primary key errors.
    Should I alter CKM to not check for duplicates in the Source?
    Is there a better way?

    Ganesh,
Identifying duplicates is a logical activity; more or less it needs manual intervention to judge whether two records really refer to the same thing. A few unique parameters like telephone, pincode, SSN, passport number etc. can be used as filters for searching the records. Currently there is no automatic method to identify duplicates. In MDM 5.5 SP04, which is the next release, there will be an auto de-duplication facility based on thresholds and the matching criteria that you set up.
I hope I have answered your query clearly. If you have any further queries you can reply here.
    Regards
    Veera
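    Back to the original ODI question: one hedged option is to stage the source and keep a single row per primary key before the IKM merge runs, letting 'R' win over 'N' when both arrive in the same set. The table and column names below (STG_ORDERS, ORDER_ID, ACTION_CODE) are invented for illustration:
    -- keep one row per key; prefer the re-issue ('R') over the insert ('N')
    SELECT *
    FROM  (SELECT s.*,
                  ROW_NUMBER() OVER (PARTITION BY order_id
                                     ORDER BY CASE action_code WHEN 'R' THEN 1 ELSE 2 END) AS rn
           FROM   stg_orders s
           WHERE  action_code IN ('N', 'R')) t
    WHERE  rn = 1
    Since the MERGE knowledge module inserts a key that is new and updates one that already exists, it usually matters little whether the surviving row was flagged 'N' or 'R'.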

  • How To Create the source data in the destination part of Shuttle in APEX?

    Hi,
I created a shuttle in a region. I can specify the source list values for the left part of the shuttle, but how can I pre-populate data in the right part (the destination field)?
Right now the destination part is always empty when the page is first opened.
Any idea? Or can a shuttle in APEX only set the source data in the left part?
    Edited by: PPMonkey on Jun 18, 2009 10:10 PM

    Re: How do you populate right side of Shuttle control
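    One hedged way to pre-populate the right-hand (destination) side: a shuttle item stores its value as a colon-delimited string, so giving the item a default value or a before-header computation that returns such a string should seed the destination list. A sketch with invented names (P1_EMP_SHUTTLE, EMP, DEPTNO), assuming a database release with LISTAGG; on older releases a PL/SQL loop or APEX_UTIL.TABLE_TO_STRING can build the same string:
    -- Default value / computation for P1_EMP_SHUTTLE; the query must return the shuttle's return values
    SELECT LISTAGG(empno, ':') WITHIN GROUP (ORDER BY empno)
    FROM   emp
    WHERE  deptno = :P1_DEPTNO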

  • How does ODI reduce the source data to be processed?

By using ODI we can get highly efficient integration by reducing the volume of source data processed in the flow. How does ODI reduce the source data to be processed?

    Hi ramana,
    See the example.
DO.
  " Read the next line from the (already opened) file.
  READ DATASET s_filename INTO i_temp.
  IF sy-subrc <> 0.
    " End of file reached - leave the loop.
    EXIT.
  ELSE.
    " Move the line into the final structure and collect it.
    MOVE i_temp TO i_finaltab.
    APPEND i_finaltab.
    CLEAR i_finaltab.
  ENDIF.
ENDDO.
In the above code we are moving the data from the file into I_TEMP and from I_TEMP into I_FINALTAB.
Before moving it to I_FINALTAB, use
TRANSLATE i_temp TO UPPER CASE.
Then move it to I_FINALTAB.
    Pls. reward if useful.
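    To tie this back to the original ODI question: the usual ways ODI reduces the volume of source data processed are filters defined on the source datastores (executed on the source server, so only matching rows travel to the staging area) and journalized (CDC) loads that move only changed rows. A hedged example of such a filter expression, with an invented datastore and column:
    -- filter placed on the source datastore, execution location set to the source
    SRC_SALES.LAST_UPDATE_DATE >= SYSDATE - 1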

  • [APEX 3] Requested source data of the report has been modified

    Hello APEX-Friends,
I have a common problem but the situation is a bit different here. Many of you might know the "invalid set of rows requested, the source data of the report has been modified" problem. Often it occurs on submit. That means: you have a report, you select rows, you do things, you submit the page, and everything blows up.
    This is because you enter some values into fields the report depends on and so you modify your report parameters and the source data changes.
    But:
In my case I have a dynamically created report that blows up before any submit occurs or any value changes.
    My query is a union of two selects. Both query different views. Those views use a date field as parameter and some compare functions.
I read the field with a wrapper I wrote around the APEX V() function, declared as deterministic. My date compare function is also declared deterministic (I doubt this makes any difference, as it is probably only relevant for the optimizer, but as long as I don't know exactly what APEX evaluates, I play it safe).
    I ensured, that the date field is set by default with the current date (and that works, because my interactive report initially displays correct data from the current date).
    So everything is deterministic and the query must return same results on subsequent calls, but APEX still throws this "source data has changed" error and I am to 99.99% sure, that this cannot be true.
    And now the awesome thing about this:
If I change the value of the date field, a piece of JavaScript performs a submit. The page is reloaded (without resetting pagination!) and everything works fine. I can leave the page, re-enter, do things, and everything works well.
But if I log into the application, move directly to the broken report and try to use the pagination without editing fields or submitting the page, the error occurs.
Do you have any idea what's happening there? I could work around this by submitting the page the first time it is entered, to trigger this "mystery submit" that gets everything working. But I would like to understand this issue and have a clean solution.
    Thanks in advance,
    Mike aka UniversE

    Okay, I found a solution, but I do not understand it - it might be a design flaw in APEX.
    I mentioned the date field that is used in the query. I also mentioned that it is set with the current date by default. I did not mention how.
    There are some possibilities in APEX to do so.
    1. Default-Setting in the element properties
    2. Static assignment if no value is in session cache
    3. Computation before header
    I did the first and second.
    BUT:
    An interactive report seems to work as follows. A query is executed to get all rows of the report. Then a second query is executed to get the rows that shall be displayed. And the order is screwed up, I think.
    1. The first report query to get all rows
    2. The elements are loaded and set to default values
    3. The second report query to get the display rows
And that's the reason why nothing worked. Since I added a computation before header, the date field is set before the report queries are executed, and everything works fine now.
But I think it's a design flaw. Either both queries should be executed before the regions are rendered or both afterwards, but not split, as field values might change while elements are loaded.
    Greetings,
    UniversE
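    For reference, a hedged sketch of the before-header computation described above (the item name P1_REPORT_DATE and the date format are assumptions, not taken from the original post):
    Computation Point : Before Header
    Computed Item     : P1_REPORT_DATE
    Type              : PL/SQL Expression
    Computation       : TO_CHAR(SYSDATE, 'DD.MM.YYYY')
    Condition         : only when P1_REPORT_DATE is still null, so a value already chosen by the user is not overwritten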

  • Row should get added in the target as soon as the data in the source table changes

    I have done the following:
    * The source table is part of the CDC process.
    * I have started the journal on the source table.
    Whenever I change the data in the source, I expect the target to get a new row added with a new sequence number as the surrogate key. I find that even though the source data changes, the new row does not get added.
    Could someone point out to me why is the new row not getting added?

    Step 1 - Sequence Number
create a sequence in your RDBMS, for example:
CREATE SEQUENCE SEQUENCE_NAME
  MINVALUE 1
  MAXVALUE 99999
  START WITH 1
  INCREMENT BY 1;
You can use the above sequence in your mapping as
schema_name.sequence_name.nextval, executed on the target.
Next, select only the Insert option for the sequence column.
Click on the source datastore, and in the Properties panel you will find an option called "Journalized Data Only". Whenever this interface runs, only the journalized data gets transferred.
The other way to see the journalized data from the source side is to right-click the source datastore under the journalized model, go to "Changed Data Capture" and then to "Journal Data ...".
Now you can see only the journalized data.
As CDC creates a trigger at the source, whenever there is a change in the source it gets captured at the target when you run the above interface with the "Journalized Data Only" option.
I hope I am clear and elaborate now.
    Thanks

  • Using sqlldr when source data column is 4000 chars

    I'm trying to load some data using sqlldr.
    The table looks like this:
    col1 number(10) primary key
    col2 varchar2(100)
    col3 varchar2(4000)
    col4 varchar2(10)
    col5 varchar2(1)
    ... and some more columns ...
    For current purposes, I only need to load columns col1 through col3. The other columns will be NULL.
    The source text data looks like this (tab-delimited) ...
    col1-text<<<TAB>>>col2-text<<<TAB>>>col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    END-OF-RECORD
    There's nothing special about the source data for col1 and col2.
    But the data for col3 is (usually) much longer than 4000 chars, so I just need to truncate it to fit varchar2(4000), right?
    The control file looks like this ...
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE my_table
    FIELDS TERMINATED BY "\t"
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000) "substr(:col3,1,4000)"
    I made the column 3 specification char(10000) to allow sqlldr to read text longer than 4000 chars.
    And the subsequent directive is meant to truncate it to 4000 chars (to fit in the table column).
    But I get this error ...
    Record 1: Rejected - Error on table COL3.
    ORA-01461: can bind a LONG value only for insert into a LONG column
    The only solution I found was ugly.
    I changed the control file to this ...
    col3 char(4000) "substr(:col3,1,4000)"
    And then I hand-edited (truncated) the source data for column 3 to be shorter than 4000 chars.
    Painful and tedious!
    Is there a way around this difficulty?
    Note: I cannot use a CLOB for col3. There's no option to change the app, so col3 must remain varchar2(4000).

You can load the data into a staging table with a CLOB column, then insert into your target table using SUBSTR, as demonstrated below. I have truncated the data display to save space.
    -- load.dat:
    1     col2-text     col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
END-OF-RECORD
-- test.ctl:
    LOAD DATA
    INFILE 'load.dat' "str 'END-OF-RECORD'"
    TRUNCATE
    INTO TABLE staging
    FIELDS TERMINATED BY X'09'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    col1 "trim(:col1)",
    col2 "trim(:col2)",
    col3 char(10000)
    SCOTT@orcl_11gR2> create table staging
      2    (col1 varchar2(10),
      3       col2 varchar2(100),
      4       col3 clob)
      5  /
    Table created.
    SCOTT@orcl_11gR2> host sqlldr scott/tiger control=test.ctl log=test.log
    SCOTT@orcl_11gR2> select * from staging
      2  /
    COL1
    COL2
    COL3
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.
    SCOTT@orcl_11gR2> create table my_table
      2    (col1 varchar2(10) primary key,
      3       col2 varchar2(100),
      4       col3 varchar2(4000),
      5       col4 varchar2(10),
      6       col5 varchar2(1))
      7  /
    Table created.
    SCOTT@orcl_11gR2> insert into my_table (col1, col2, col3)
      2  select col1, col2, substr (col3, 1, 4000) from staging
      3  /
    1 row created.
    SCOTT@orcl_11gR2> select * from my_table
      2  /
    COL1
    COL2
    COL3
    COL4       C
    1
    col2-text
    col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    more-col3-text
    XYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
    1 row selected.

  • Creation of DME medium: FZ205 "There is no source data found"

    We are executing payment runs using F110 and then creating data medium - a file to send to the bank.
In the variant for the program I am entering C:\. However, when several users execute payment runs at the same time, the data medium is not created and I get the error message that the source data cannot be found.
    Can anyone help me with this issue - should I leave the file name as blank?
    Thanks
    Liz

    Hello,
    In order to avoid FZ205 please review your selection parameters and F1 help for the print program when creating the file:
    1. If you are taking the Output to file system:
    If required, the file can be written to the file system. The created file can be copied to a PC using data medium exchange management. You should be looking for downloaded files here, since the data carrier is not managed within the SAP system, but is already stored in the file system by the payment medium program. The file name should be defined by the user. You should make sure that existing files with the same name have already been processed, because they will be overwritten.
Note: If a file cannot be found using the data medium exchange management, the reason could be that the directory that was written to at the start of the payment medium program (in background processing, for example) cannot be read online.
    You should then select a directory which can be recorded and read by several different computers. Due to the problems described above and the resulting lack of data security, we advise against writing to the file system. This method is only beneficial if the data carrier file is taken from the file system by an external program, to be transferred to the bank.
    2. If you are taking Output into TemSe:
    If required, the file created can be stored within the SAP System(store in the TemSe and not in the file system),thus protecting it from unauthorized external access. You can download the file into the user's file system via the DME manager. The name of the file to be created during the download can be determined when running the payment medium program: the contents of the
    file name parameter are stored in the management data and defaulted when running the download.
Please check the corresponding files in the DME administration and check whether the output medium 'file system' has been chosen, that is, output medium '0'. In order to use TemSe you have to use output medium '1'. Furthermore, check whether PC file paths, like c:\filename.DAT, were used instead of application server file names; transaction FDTA has difficulty finding these files, especially when two application servers are used.
To avoid problems with the files, SAP recommends using TemSe with output medium '1', or the file system with output medium '0'. TemSe is always the better option.
    I hope this helps.
    Best regards,
    Suresh Jayanthi.

  • Best approach to delete records that are not in the source table anymore.

I have a situation where I need to remove records from dimensions that are not in the source data anymore. Right now we are not maintaining history, i.e. not using SCD, but we are planning to for the next release. If we did that it would be easy to figure out the latest records. The load is nightly, and records are updated and new ones added.
The approach that I am considering is to join the dimension tables to the sources on keys and delete whatever doesn't join. However, is there perhaps some function in OWB that would allow this to be done automatically on import, so it is also in place for the future?
    Thanks!

Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean that it wasn't used historically, and so it may have foreign key constraints on it in your database. If this is the case, a short-term solution would be to add an expiry_date field to the dimension and update the load to set this value when the record disappears, rather than to delete it.
To do that, use the target dimension as a source table, outer join it to the actual source table on the natural key, and have your update set expiry_date = nvl(expiry_date, sysdate) (i.e. set it to sysdate only if the record has not already been expired) on all records where the outer join fails.
    Further consideration: what do you do if the record is re-inserted into the source table? create a new dimension key? Or remove the expiry date?
    But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix that and republish historical cubes? Without the historical data, you lose the ability to do things like that.
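    A hedged sketch of the expiry_date update described above, using NOT EXISTS as the equivalent of "where the outer join fails"; the table and column names (DIM_CUSTOMER, SRC_CUSTOMER, NATURAL_KEY, EXPIRY_DATE) are invented for illustration:
    UPDATE dim_customer d
       SET d.expiry_date = NVL(d.expiry_date, SYSDATE)   -- keep an existing expiry date
     WHERE NOT EXISTS (SELECT 1
                         FROM src_customer s
                        WHERE s.natural_key = d.natural_key)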

  • How to deal with such Unicode source data in BI 7.0?

I encountered an error when activating DSO data. It turned out that the source data contains Unicode in the HTML representation style. For example, the source character string is:
    ABCDEFG& #65288;XYZ  (I added a space in between & and # so that it won't be interpreted to Unicode in SDN by web browser)
    After some analysis, I see it's actually the Unicode string
    ABCDEFG&#65288;XYZ
Please notice the wide left parenthesis. It's the actual character from the HTML &#xxx; style above. To compare, here is the Unicode parenthesis '&#65288;' and here is the ASCII one '('. You can see they are different.
    My question is: as I have trouble loading the &#... string, I think I should translate the string to actual Unicode character (like '&#65288;' in this case). But how can I achieve this?
    Thanks!
    Message was edited by:
            Tom Jerry

I found that this is called a "numeric character reference", or NCR, in HTML terms. So the question is how to convert a string with NCRs back to Unicode. Thanks.
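    If the cleansing can happen in an Oracle staging layer (an assumption; in a pure BW load the same logic would go into a transfer-rule or transformation routine), here is a hedged PL/SQL sketch that rewrites decimal NCRs into the real characters. The function name is invented, and the six-argument REGEXP_SUBSTR needs Oracle 11g or later:
    CREATE OR REPLACE FUNCTION ncr_to_unicode (p_in IN NVARCHAR2)
      RETURN NVARCHAR2
    IS
      l_out  NVARCHAR2(4000) := p_in;
      l_code VARCHAR2(10);
    BEGIN
      LOOP
        -- numeric part of the first remaining &#nnnnn; reference
        l_code := REGEXP_SUBSTR(l_out, '&#([0-9]+);', 1, 1, NULL, 1);
        EXIT WHEN l_code IS NULL;
        -- CHR(n USING NCHAR_CS) maps a BMP code point such as 65288 (the wide
        -- left parenthesis) to the corresponding national-charset character
        l_out := REPLACE(l_out, '&#' || l_code || ';',
                         CHR(TO_NUMBER(l_code) USING NCHAR_CS));
      END LOOP;
      RETURN l_out;
    END;
    /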

  • Error during data load due to special characters in source data

    Hi Experts,
    We are trying to load Billing data into the BW using the billing item datasource. It seems that there are some special characters in the source data. When the record with these characters is encountered, the request turns red and the package is not loaded even into the PSA. The error we get in the monitor is something like
    'RECORD 5028: Contents from field ****  cannot be converted into type CURR',
    where the field **** is a key figure of type currency. We managed to identify the said record in RSA3 on the source system and found that one of the fields contains some invalid (special) characters that show up as squares in RSA3. The data in the rest of the fields, including the fields mentioned in the error  looks correct.
Our source system is a non-Unicode system whereas the BW system is Unicode enabled. I figure that the data in the rest of the fields is getting misaligned due to the presence of the invalid characters in the above field. This was confirmed when we unassigned the field with the special characters from the transfer rules and removed the source field from the transfer structure. After doing this the data was loaded successfully and the request turned green.
    Can anyone suggest a way to either filter out such invalid characters from the source data or make some settings in the BW systems such that the special characters are no longer invalid in the BW system? We cannot write code in the transfer rules because the data package does not even come into the PSA. Is there any other method to solve this problem?
    Regards,
    Ted

Hi Ted,
I was wondering: whether the system is Unicode or non-Unicode should not matter for the amount and currency fields, as currencies are defined by SAP and the currency code part is plain English, at least the 3-character code.
Could this be because of some inconsistency in the data?
I would like to know which currency had the special characters in that particular record.
    Hope that helps.
    Regards
    Mr Kapadia

  • FDM validation report with source data gives 0

    Hi All,
    I'm building a validation report to report on the source data (so using ~ instead of |).
    I've created a logic group to sum up the amounts to a member TOTAL.
    When i use the validation editor lookup screen i can browse for Entity, Account (TOTAL), UD1, UD2 and UD3.
    The category, Period and Year are left blank.
When trying to test, I have to select a test entity (which is in the target entity dimension). However, these entities are not among my source entities (since they are mapped).
So the lookup gives a value of 0 while a value should be there (since I've selected all dimensions).
Any suggestions? I remember that in the past I just selected a random test entity and it did work.
    We're using version 11.1.1.3.
    Thanks!

    Hi Tony,
What do you mean by target information?
I only retrieve source information, but I have to select an entity in the test window (where only the target entities are shown).
So all fields in the import screen (except those in the POV) are defined and match the import screen members exactly.
Furthermore, when I click browse for the years I get an error as well.

  • ODI: Can I create a join link between the source files?

I have a few flat files that have foreign key relationships to each other. I set them up as source files and try to import data into Essbase from them, but I get the error message: 'The source data server has no join capabilities.' If I just put all of the information inside one flat file, the transfer is successful. It seems I cannot put joins on the source flat files. Please advise, thanks!

    Hi,
    You can join tables in your source area, even if they are flat files.
The joins will be done in a staging area; depending on the size of the files and the location of the agent you are running, you can decide where you want the staging area to be.
If the files are small and the joins are not complex then it can be done using the memory engine; otherwise, in most circumstances I would use the power of a relational engine such as Oracle/SQL Server as the staging area.
Make sure you use a staging area for the interface or you will get those error messages. Also make sure the joins are done in the staging area; there is an option for this when you highlight the join.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How do I clear out the measure data in a 10g cube?

    Does anyone know of an OLAP feature that I could call to zero out the measures stored in the cube before I refresh from the source data?
    I'm new to Oracle OLAP, but I have some experience with Essbase and I have quite a bit of experience with Oracle PL/SQL. The database is 10g.
Here's my problem. My dimensions and cube are mapped to Oracle tables. I've built an Oracle procedure, using DBMS_LOB / xml_clob functionality, to refresh my cube. The procedure worked successfully to initially load the data. It works successfully when changing the dollar amounts in the source data. But it's possible that a record in my source data is removed entirely. When this happens, the dollars from the removed record still show up in the cube total after the procedure executes. I had assumed the CleanMeasures="true" parameter would take care of this, but it does not.
    thanks,
    Nancy
Here are the parameters I'm passing to BuildDatabase in my Oracle procedure:
    ' <BuildDatabase Id="Action2" AWName="BUDGET.BUDGETS" BuildType="EXECUTE"
    RunSolve="true" CleanMeasures="true" CleanAttrs="true" CleanDim="true"
    TrackStatus="false" MaxJobQueues="0">';

If you are doing a full reload every time then you can issue the following commands to clear the data from the cube:
    lmt name to all
    allstat
    clear all from <cubename>prttopvar
You can wrap the above commands in a PL/SQL procedure using the DBMS_AW.EXECUTE package and execute it before the cube load starts. Instead of clearing the whole cube you can also clear only one partition. Just take a look at the CLEAR command in the OLAP DML 10.2 reference.
    Thanks,
    Brijesh
    Edited by: Brijesh Gaur on Aug 10, 2010 6:47 AM
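    A hedged sketch of wrapping those OLAP DML commands in PL/SQL with DBMS_AW.EXECUTE, as suggested above; the AW name comes from the BuildDatabase snippet earlier in this thread, and <cubename>prttopvar stays a placeholder for the real partitioned measure variable:
    BEGIN
      -- assumes the analytic workspace is not yet attached in this session
      dbms_aw.execute('aw attach budget.budgets rw');
      dbms_aw.execute('lmt name to all');
      dbms_aw.execute('allstat');
      dbms_aw.execute('clear all from <cubename>prttopvar');
      -- write and commit the change to the AW
      dbms_aw.execute('update');
      dbms_aw.execute('commit');
    END;
    /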
