Data Services job fails while inserting data into SQL Server from Linux

The SAP Data Services (Data Quality) server runs on both a Linux server and a Windows server. A Data Services job that uses the ODBC driver to connect to SQL Server fails after selecting a few thousand records, with the reason below per the Data Services log on the Linux server. We can run the same job from the Windows server; the only difference there is that it uses the SQL Server drivers provided by Microsoft. Of the possible causes listed below, #1 and #4 are unlikely to be the reason for the failure. The DBA checked the other causes and confirmed that the transaction log size is unlimited and the system has free space.
Why does the same job run from the Windows server but fail from Linux? Is it because the Windows and Linux ODBC drivers work differently, or is there a conflict between the Data Services job and the ODBC driver?
===== Error Log ===================
6902 3954215840 RUN-051005 8/25/2009 11:51:51 AM Execution of <Regular Load Operations> for target <DQ_PARSE_INFO> failed. Possible causes: (1) Error in the SQL syntax; (2) Database connection is broken; (3) Database related errors such as transaction log is full, etc.; (4) The user defined in the datastore has insufficient privileges to execute the SQL. If the error is for preload or postload operation, or if it is for
===== Error Log ===================
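Since the job succeeds on Windows with Microsoft's native SQL Server driver, the first thing to compare is the Linux ODBC layer itself (driver build, DSN settings, timeouts). As a hedged sketch only (the driver path, host, and DSN name below are placeholders, not taken from this system), a SQL Server DSN on Linux typically lives in the odbc.ini file referenced by $ODBCINI and has this shape:
[sqlserver_ds]
Driver=/opt/odbc/lib/sqlserver_driver.so
Description=SQL Server DSN used by the Data Services datastore
HostName=mssql-host.example.com
PortNumber=1433
Database=DQ_TARGET_DB
A mismatch here between the two platforms (different driver builds, packet sizes, or connection timeout settings) can surface exactly as cause (2), "database connection is broken", part-way through a load.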

Similar Messages

  • Error with date field when inserting records into SQL Server from Web Dynpro

    Dear SDNs,
    I am trying to insert records into SQL Server through my Web Dynpro program.
    I have created a date field in a dictionary with the data type date.
    In my Web Dynpro program I build the date to insert in the format below.
    String dateString = "2006/12/10";
          java.util.Date d = new java.util.Date(dateString);
          java.sql.Date date = new java.sql.Date(d.getTime());
    int i = stmt.executeUpdate("INSERT INTO TRAVEL_HEADER(TRQID,PROJECTID,REQDT,ADVCE,ETADV,PURTR) values(21, '555-1212', '" + date + "', 5000, '20060501','hi')");
    When I try to execute it, I get the following error:
    com.sap.sql.log.OpenSQLException: The SQL statement "INSERT INTO "TRAVEL_HEADER" ("TRQID","PROJECTID","REQDT","ADVCE","ETADV","PURTR") VALUES (21,'555-1212','2006-12-10',5000,'20060501','hi')" contains the semantics error[s]: - type check error: new value (element number 3 (CHAR)) is not assignable to column >>REQDT<< (DATE)
    Please correct me.
    Your help will be appreciated.
    Regards,
    Sireesha.B

    Hi,
    int i = stmt.executeUpdate("INSERT INTO TRAVEL_HEADER(TRQID,PROJECTID,REQDT,ADVCE,ETADV,PURTR) values(21, '555-1212', 'date', 5000, '20060501','hi')");
    Try it like this.
    I think in SQL the general format for a date input is like this:
    INSERT INTO X VALUES ('10/30/56')
    Thanks,
    Lohi.
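    A more robust pattern, whatever the literal format, is to bind the date with a PreparedStatement so the driver sends a true DATE rather than a CHAR, which is exactly what the type check error complains about. A minimal sketch, assuming an open java.sql.Connection named conn and the TRAVEL_HEADER table from the question:
    // Minimal sketch: bind the date instead of concatenating it into the SQL string.
    // Assumes an open java.sql.Connection named conn; table and column names are
    // taken from the original post.
    java.sql.Date reqDate = java.sql.Date.valueOf("2006-12-10");
    java.sql.PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO TRAVEL_HEADER (TRQID, PROJECTID, REQDT, ADVCE, ETADV, PURTR) "
        + "VALUES (?, ?, ?, ?, ?, ?)");
    ps.setInt(1, 21);
    ps.setString(2, "555-1212");
    ps.setDate(3, reqDate);   // sent as a DATE, so no CHAR-to-DATE type check error
    ps.setInt(4, 5000);
    ps.setString(5, "20060501");
    ps.setString(6, "hi");
    int rows = ps.executeUpdate();
    ps.close();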

  • How to insert data into SQL Server from an MS Access table?

    I need to insert data into SQL Server from an MS Access table.

    This is another method:
    http://www.mssqltips.com/sqlservertip/2484/import-data-from-microsoft-access-to-sql-server/
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Data Services job rolling back Inserts but not Deletes or Updates

    I have a fairly simple CDC job that I'm trying to put together. My source table has a record type code of "I" for Inserts, "D" for deletes, "UB" for Update Before and "UP" for Update After. I use a Map_CDC_Operation transform to update the destination table based on those codes.
    I am not using the Transaction Control feature (because it just throws an error when I use it)
    My issue is as follows.
    Let's say I have a set of 10,000 Insert records in my source table. Record number 4000 happens to be a duplicate of record number 1. The job will process the records in order starting with record 1 and begin happily inserting records into the destination table. Once it gets to record 4000 however it runs into a duplicate key issue and then my try/catch block catches the error and the dataflow will exit. All records that were inserted prior to the error will be rolled back in the destination.
    But the same is not true for updates or deletes. If I have 10000 deletes and 1 insert in the middle that happens to be an insert of a duplicate key, any deletes processed before the insert will not be rolled back. This is also the case for updates.
    And again, I am not using Transaction Control, so I'm not sure why the Inserts are being rolled back and, more curiously, why Updates and Deletes are not. I'm not sure why there isn't a consistent result regardless of the type of operation. Does anyone know what's going on here, or what I'm doing wrong / what my misconception may be?
    Environment information: both source and destination are SQL Server 2008 databases and the Data Services version we use is 14.1.1.460.
    If you require more information, please let me know.

    Hi Michael,
    Thanks for your reply. Here are all the options on my source table:
    My Rows per commit on the table is 10,000.
    Delete data table before loading is not checked.
    Column comparison - Compare by name
    Number of loaders - 1
    Use overflow file - No
    Use input keys - Yes
    Update key columns - No
    Auto correct load - No
    Include in transaction - No
    The rest were set to Not Applicable.
    How can I see the size of the commits for each opcode? If they are in fact different from my Rows per commit (10,000), that may explain my issue.
    I'm new to Data Services, so I'm not sure how I would implement my own transaction control logic using a control column and script. Is there a guide somewhere I can follow?
    I can also try using the Auto correct load feature. I'm guessing "upsert" was a typo for insert? Where is that option?
    Thank you very much!
    Riley

  • Importing data into SQL Server 2012 from Excel data

    Hi,
    I got errors like this when importing data into SQL Server from Excel data. Can you please help?
    - Executing (Error)
    Messages
    Error 0xc020901c: Data Flow Task 1: There was an error with Source - demotable$.Outputs[Excel Source Output].Columns[Comment] on Source - demotable$.Outputs[Excel Source Output]. The column status returned was: "Text was truncated or one
    or more characters had no match in the target code page.".
     (SQL Server Import and Export Wizard)
    Error 0xc020902a: Data Flow Task 1: The "Source - demotable$.Outputs[Excel Source Output].Columns[Comment]" failed because truncation occurred, and the truncation row disposition on "Source - demotable$.Outputs[Excel Source Output].Columns[Comment]"
    specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
     (SQL Server Import and Export Wizard)
    Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on Source - demotable$ returned error code 0xC020902A.  The component returned a failure code when the pipeline engine called PrimeOutput().
    The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error messages posted before this with more information about the failure.
     (SQL Server Import and Export Wizard)

    Are you attempting to import into a newly made table or into an existing table? It looks like it's trying to insert data where it cannot be inserted (an invalid column, or insufficient size in your column).
    Try the following:
    1) In your Excel sheet, highlight the whole sheet, make sure the cells are in 'text' form, and try re-importing.
    2) Save the document as MS-DOS text and import it as a text document.
    3) Double-check that your columns are correct for the data; for example, a string of 100 characters going into an 'NvarChar(90)' column might cause the error. Or just correct the data type in your column.
    4) If that doesn't work and you're inserting into a new table, try importing everything as string first and writing a query to insert the columns that should be float/integer or whatever. You may want to convert float text to a 'bigint' via float rather than straight from string, as that can cause problems if I remember correctly.
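    As a hedged illustration of step 4 (the values are made up): in T-SQL, SELECT CAST('123.0' AS bigint) fails outright with a conversion error, while SELECT CAST(CAST('123.0' AS float) AS bigint) returns 123, which is why staging the values as text and converting via float is the safer route.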

  • Error while inserting data into flow table

    Hi All,
    I am very new to ODI and am facing a lot of problems in my first interface, so I have many questions here; please forgive me if this irritates you.
    ========================
    I am developing a simple project to load data from an input source (csv) file into a staging table.
    My plan is to achieve this in 3 interfaces:
    1. Interface-1 : Load the data from an input source (csv) file into a staging table (say Stg_1)
    2. Interface-2 : Read the data from the staging table (stg_1) apply the business rules to it and copy the processed records into another staging table (say stg_2)
    3. Interface-3 : Copy the data from staging table (stg_2) into the target table (say Target) in the target database.
    Question-1 : Is this approach correct?
    ========================
    I don't have any key columns in the staging table (stg_1). When I tried to execute the Flow Control step I got an error:
    Flow Control not possible if no Key is declared in your Target Datastore
    Based on one of the responses in this forum (the response was "FLOW control requires a KEY in the target table"), I introduced a column called "Record_ID", made it a primary key column in my staging table (stg_1), and my problem was resolved.
    Question-2 : Is a key column compulsory in the target table? In BO Data Integrator there is no such compulsion ... I am a little confused.
    ========================
    Next, I defined one project-level sequence and mapped the newly introduced key column Record_Id (primary key) to it. Now I got another error: "CKM not selected".
    For this, I have inserted "Insert Check (CKM)" knowledge module in my Project. With this the above problem of "CKM not selected" has been resolved.
    Question-3 : When is this CKM knowledge module required?
    ========================
    After this, the flow/interface fails while loading data into the intermediate ODI-created flow table (I$):
    1 - Loading - SS_0 - Drop work table
    2 - Loading - SS_0 - Create work table
    3 - Loading - SS_0 - Load data
    5 - Integration - FTE Actual data to Staging table - Drop flow table
    6 - Integration - FTE Actual data to Staging table - Create flow table I$
    7 - Integration - FTE Actual data to Staging table - Delete target table
    8 - Integration - FTE Actual data to Staging table - Insert flow into I$ table
    The error is at Step-8 above. When I opened the "Execution" tab for this step I found the message "Missing parameter Project_1.FTE_Actual_Data_seq_NEXTVAL RECORD_ID".
    Question-4 : What/why is this error? Did I make a mistake while creating the sequence?

    Everyone is new and starts somewhere. And the community is there to help you.
    1.) What is the idea of moving data to stg_1 and then to stg_2? Do you really need them for any purpose other than moving data from the source file to the target DB?
    Otherwise, it's simpler to move data directly: source file -> target table.
    2.) Does your target table have a key?
    3.) A CKM (Check KM) is required when you want to do constraint validation (checking) on your data. You can define constraints (business rules) on the target table, and Flow Control will check the data flowing from the source file to the target table using the CKM. All records that do not satisfy the constraints are added to E$ (the error table) and are not added to the target table.
    4.) Try to avoid ODI sequences. They are slow and aren't scalable. Use a database sequence wherever possible, and reference the DB sequence in the target mapping as
    <%=odiRef.getObjectName( "L" , "MY_DB_Sequence_Row" , "D" )%>.nextval
    where MY_DB_Sequence_Row is the Oracle sequence in the target schema.
    HTH
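    If I read the substitution API correctly, the odiRef call above resolves at execution time to the sequence's qualified name on the target data server (something like TARGET_SCHEMA.MY_DB_Sequence_Row, with the schema taken from the topology), so the generated INSERT simply selects MY_DB_Sequence_Row.nextval. Treat that resolution as an assumption to verify in the generated code of your session.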

  • Issue while using views in Data Services jobs

    Hi,
    In a Data Services job, I am trying to pull data from a view into a table. The view points to a table in another database.
    The problem is that when I import the source view into Data Services and view the data, I find one row with wrong data. The values in that row are wrong/corrupted, while the same row in the source table has correct values.
    I queried the view from TOAD for that record. The values are valid.
    The data comes out wrong only in Data Services. Any row in that table can get corrupted; there is no specific row.
    Hence I am getting errors while running the jobs.
    Any idea what could be the reason for corrupted data in the view in Data Services, while the same view queried from TOAD gives correct values?

    Hi,
    There is a possibility of a data type unsupported by Data Services. Please share the Data Services version, database type, and the data type of the column which got corrupted.
    Regards,
    M Ramesh

  • Error while inserting data into a table.

    Hi All,
    I created a table. While inserting data into the table I am getting an error saying "Create data Processing Function Module". Can anyone help me with this?
    Thanks in advance
    anirudh

    Hi Anirudh,
    It seems there is already an entry in the table with the same primary key.
    An INSERT statement will give a short dump if you try to insert data with the same key.
    Why don't you use a MODIFY statement to achieve the same?
    Reward points if this Helps.
    Manish

  • Input to a Data Services job

    Hi Experts,
    Is it possible to provide input to a Data Services job from BW at runtime?

    Hi,
    you can follow the steps below to execute a BODS job from BW.
    Go to BW, create an InfoPackage for the respective DataSource, and fill in the "3rd party selections" details as below:
            Repository    : BODS repository name
            JobServer     : BODS running job server name
            JobName     : BODS job name
    Save and execute the InfoPackage. It will trigger the BODS job, which will load the data into the BW DataSource.
    Add this InfoPackage to a process chain and schedule it.
    Now, for passing values into the job, you can try the following:
    You need to add Global Variables to your job.
    Then, if you refresh the 3rd party selections, you'll see your variables after Advanced_Parameters.
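    For illustration only (these names are placeholders, not from the thread): Repository: DS_REPO, JobServer: JS_LINUX_01, JobName: JOB_LOAD_SALES; a global variable such as $G_LOAD_DATE defined in the job would then appear after Advanced_Parameters in the 3rd party selections for you to set per run.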

  • Error while inserting data using EXECUTE IMMEDIATE into a dynamic table in Oracle

    Error while inserting data using EXECUTE IMMEDIATE into a dynamic table created in Oracle 11g.
    First, the dynamic nested table (op_sample) was created using EXECUTE IMMEDIATE.
    The object type is:
    CREATE OR REPLACE TYPE ASI.sub_mark AS OBJECT (
    mark1 number,
    mark2 number
    );
    t_sub_mark is a table type of sub_mark:
    CREATE OR REPLACE TYPE ASI.t_sub_mark is table of sub_mark;
    create table sam1(id number,name varchar2(30));
    The nested table is created below:
    begin
    EXECUTE IMMEDIATE ' create table '||op_sample||'
    (id number,name varchar2(30),subject_obj t_sub_mark) nested table subject_obj store as nest_tab return as value';
    end;
    Now data from the sam1 table and the object (subject_obj) are inserted into the dynamic table:
    declare
    subject_obj t_sub_mark;
    begin
    subject_obj:= t_sub_mark();
    EXECUTE IMMEDIATE 'insert into op_sample (select id,name,subject_obj from sam1) ';
    end;
    and got the error below:
    ORA-00904: "SUBJECT_OBJ": invalid identifier
    ORA-06512: at line 7
    Then, when we tried to insert the data into the dynam_table with the subject_marks object as null, we received the following error:
    execute immediate 'insert into '||dynam_table ||'
    (SELECT

    887684 wrote:
    ORA-00904: "SUBJECT_OBJ": invalid identifier
    ORA-06512: at line 7
    The problem is that your variable subject_obj is not in scope inside the dynamic SQL you are building. The SQL engine does not know your PL/SQL variable, so it tries to find a column named SUBJECT_OBJ in your SAM1 table.
    If you need to use dynamic SQL for this, then you must bind the variable. Something like this:
    EXECUTE IMMEDIATE 'insert into op_sample (select id, name, :bind_subject_obj from sam1)' USING subject_obj;
    Alternatively, you might figure out how to use static SQL rather than dynamic SQL (if possible for your project). In static SQL the PL/SQL engine binds the variables for you automatically.

  • Access issues while inserting data into a table in the same schema

    Hi All.
    I have a script that first creates and then populates a table. My script used to run fine in the production environment until a few hours ago, but all of a sudden it is throwing an error while inserting data into the table.
    Error message - "Insufficient Previlages".
    Please suggest what may be the reasons for this kind of error.
    Thanks in advance

    Sonika wrote:
    Hi All.
    I have a script that first creates and then populates a table. My script used to run fine in the production environment until a few hours ago, but all of a sudden it is throwing an error while inserting data into the table.
    Error message - "Insufficient Previlages".
    Please suggest what may be the reasons for this kind of error.
    1) something changed
    2) you are hitting a bug

  • Data Services job server crashed and won't start back up

    Hello,
    I was running some jobs on Data Services 4.2 SP3 on Windows Server 2012 R2 when they all failed and the job server went down. None of the failed jobs has a trace file or error log in the Management Console. Now I am unable to open Data Services Designer or Data Services Server Manager; when I try to open them, nothing happens. Also, the SAP Data Services job service cannot be started. The job server had been running fine for a few weeks before this. This has happened twice already today; the first time, the only way I was able to fix it was to run the repair on the Data Services install. Can someone please help me understand what is causing this and how it can be fixed?

    Hi Tyler,
    It is a Windows-specific issue; please refer to the link and KBAs below.
    How To Fix Windows Service Error 1053
    http://windows-exe-errors.com/how-to-fix-windows-service-error-1053/
    1986247 - Error "Windows could not start the BusinessObjects Data Services service on local computer" occurs in Data Services 4.1
    https://service.sap.com/sap/support/notes/1986247
    1992260 - Error: Windows could not start the SAP Data Services service on local computer, after upgrading SAP data services and deleting job servers SAP Data Services 4.2
    https://service.sap.com/sap/support/notes/1992260
    Hope this will help!!!!
    Thanks,
    Daya

  • Data load failed while loading data from one DSO to another DSO

    Hi,
    On SID generation, the data load failed while loading data from the source DSO to the target DSO.
    The following errors occur:
    Value "External Ref # 2421-0625511EXP  " (HEX 450078007400650072006E0061006C0020005200650066
    Error when assigning SID: Action VAL_SID_CONVERT, InfoObject 0BBP
    So I don't get why it succeeded in one DSO (the source) but failed in the other (the target).
    While analyzing, I checked that "SID Generation upon Activation" is checked in the source DSO but not in the target DSO. Is that the reason it failed?
    Please explain..
    Thanks,
    Sneha

    Hi,
    I assume your data flow is designed so that the first DSO acts as a staging layer, all transformation rules and routines are maintained between the first and second DSO, and SID generation upon activation is maintained on the second DSO. That way the data in the first DSO is the same as your source system data, since no transformation rules or routines run there, which helps avoid data load failures.
    Please analyze the following:
    Have you loaded master data before transaction data? If not, please do that first.
    Go to the properties of the first DSO and check whether "SID generation upon activation" is maintained (I guess it may not be).
    Go to the properties of the second DSO and check whether "SID generation upon activation" is maintained (I expect it is).
    This may be the reason.
    Also check whether any special characters are involved in your transaction data (even lowercase letters).
    Regards
    BVR
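    One hedged pointer beyond the reply above: the failing value "External Ref # 2421-0625511EXP  " contains both a '#' and trailing spaces, and characters outside the permitted set maintained in transaction RSKC commonly fail SID generation with exactly this VAL_SID_CONVERT error, so that permitted-character list is worth checking too.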

  • ORA-00600: internal error code while inserting data into a table

    Hi gems,
    I am getting the error below while inserting data into a table:
    *ORA-00600: internal error code, arguments: [kqd-objerror$ ] , , [0], [98], [BIN$sm1O+fYhF1jgRAAhKNYyZA==$0], [], [], [], [], [], [], []*
    I can select from the table fine but cannot insert data (this is the schema owner, so inserts should work).
    I have checked the alert.log; the last few entries look like this:
    <msg time='2011-11-25T03:08:55.763+05:30' org_id='oracle' comp_id='clients'
    type='UNKNOWN' level='16' host_id='ICS167DOR'
    host_addr='10.184.134.139'>
    <txt>Directory does not exist for read/write [oracle/ora11g/app/ora11g/product/11.2.0/dbhome_1/log] [oracle/ora11g/app/ora11g/product/11.2.0/dbhome_1/log/diag/clients]
    </txt>
    </msg>
    please help...thanks in advance
    Edited by: user12780416 on Nov 25, 2011 3:29 AM

    Hi,
    Finally I got the solution. I know this problem may occur for other reasons for different users, but the cause of the error the developers were facing in this case is below:
    They hit the error while importing dumps into the server. At the same time, the application developers reported that they could select from the tables but could not insert any data.
    After hearing this, I assumed it might be a space problem with the SYSTEM tablespace, as it is responsible for storing the data dictionary.
    I asked for the free space in the SYSTEM tablespace and found the reason: it had only 0.2% left.
    I told them to issue the resize command for the system01.dbf datafile (allocating 2 GB more) and the problem was resolved.
    Hope this helps. Thanks.
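    For reference (a hedged illustration; the path and size are placeholders, not from the thread), the resize is a one-liner run as a DBA: ALTER DATABASE DATAFILE '/u01/oradata/ORCL/system01.dbf' RESIZE 4096M;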

  • How to select data from the 3rd row of Excel to insert into a SQL Server table using SSIS

    Hi,
    I have Excel files with headers in the first two rows. I want to skip those two rows and select data from the 3rd row onward to insert into a SQL Server table using SSIS; the 3rd row has the column names.

                                                         CUSTOMER DETAILS
                         REGION
    COL1    COL2    COL3    COL4    COL5    COL6    COL7    COL8    COL9    COL10    COL11
    1       XXX     yyyy    zzzz
    2       XXX     yyyy    zzzzz
    3       XXX     yyyy    zzzzz
    4       XXX     yyyy    zzzzz
    The first two rows have merged cells with headings in Excel; I want to skip those two rows, select the data from the 3rd row onward, and insert it into SQL Server using SSIS.
    Set the range within the Excel command as per the article below:
    http://www.joellipman.com/articles/microsoft/sql-server/ssis/646-ssis-skip-rows-in-excel-source-file.html
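    As a hedged illustration (the sheet name and range are assumptions based on the sample above): in the Excel Source's Advanced Editor, setting the OpenRowset property to demotable$A3:K65536 makes the source start reading at row 3, and with "First row has column names" enabled in the Excel connection manager that row supplies COL1..COL11.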
    Please Mark This As Answer if it solved your issue
    Please Mark This As Helpful if it helps to solve your issue
    Visakh
    My MSDN Page
    My Personal Blog
    My Facebook Page
