Data Integrator - Bulk Load Append ERROR

Hi,
I'm working with a Sybase database and I have the following problem:
My source table has, for example, 1000 rows, and I set my target table to use the bulk load append option with a commit every 100 rows. But every time the process ends, my target table has only 900 rows; I think Data Integrator does not commit the last batch of 100 rows.
It is a "clean" pass of the data: no WHERE clause, no joins, just a simple copy from one table to another.
Can anyone give me a solution or suggest something to solve this problem?
Thanks a lot...

Sybase ASE or IQ, and which version?
What Data Integrator version are you using?
Is the job failing with errors? Maybe the last batch is not committed because of constraints on the target table. Did you try inserting the same set of rows without bulk load? Try inserting the rows without bulk load and see whether it gives any errors.
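A quick way to see whether rows are being silently dropped is to compare source and target row counts after the job finishes. A minimal sketch, assuming placeholder table names (source_table / target_table):

SELECT (SELECT COUNT(*) FROM source_table) AS source_rows,
       (SELECT COUNT(*) FROM target_table) AS target_rows

If target_rows is short by exactly one commit batch, also check the Job Server log for a rejected final batch.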

Similar Messages

  • Loading Data Using Outline Load Utility - Error

    Trying to load some data using the Outline Load Utility. I followed the Oracle documentation on this, and under the Administration -> Manage Data Load area I defined Accounts as the data load dimension and Period as the driver dimension. I then added Jan - Feb as driver members. I wanted to load data for Jan. My data header looks like this:
    Accounts,Jan,Point-of-View,Data Load Cube Name
    acct1, 768, "2010, Ver1, Actuals, entity1, etc" Plan1
    The command I typed is this:
    C:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1>outlineload /A:Plan1 /U:Admin /M /I:C:\MockData2.txt /D:Accounts /L:C:\dataload.log /X:c:\dataload.exc
    I get the following errors:
    [Mon Nov 22 11:03:02 CST 2010] Unrecognized column header value "Jan".
    [Mon Nov 22 11:03:02 CST 2010] Unrecognized column header value "Point-of-View".
    [Mon Nov 22 11:03:02 CST 2010] Unrecognized column header value "Data Load Cube Name".
    [Mon Nov 22 11:03:02 CST 2010]Unable to obtain dimension information and/or perform a data load: Unrecognized column header value(s), refer to previous messages. (Note: column header values are case sensitive.)
    [Mon Nov 22 11:03:02 CST 2010]Planning Outline data store load process finished with exceptions: not all input records were read due to errors (or an empty input file). 0 data records were read, 0 data records were processed, 0 were successfully loaded, 0 were rejected.
    This is version 11.1.2. What am I doing wrong here? I also find it interesting that the command for a data load and a metadata load is the same, as per the Oracle docs. I guess Planning knows whether we're trying to load data or metadata based on the CSV file header?

    I don't usually bother with loading data using the outline load utility but as a test on 11.1.2 using the planning sample application I gave it a quick go.
    In Planning, I went to Administration > Data Load Settings, picked Account as the data load dimension, Period as the driver dimension, and selected Jan as a member.
    I created a file :-
    Account,Jan,Point-of-View,Data Load Cube Name
    TestMember,100,"Local,E05,Actual,NoSegment,Working,FY11",Consol
    And used the following command line from the directory with the planning utilities
    OutlineLoad /A:plansamp /U:admin /M /N /I:F:/temp/dload.csv /D:Account /L:F:/temp/outlineLoad.log /X:F:/temp/outlineLoad.exc
    The log produced :-
    [Tue Nov 23 10:02:01 GMT 2010]Successfully located and opened input file "F:\temp\dload.csv".
    [Tue Nov 23 10:02:01 GMT 2010]Header record fields: Account, Jan, Point-of-View, Data Load Cube Name
    [Tue Nov 23 10:02:01 GMT 2010]Located and using "Account" dimension for loading data in "plansamp" application.
    [Tue Nov 23 10:02:01 GMT 2010]Load dimension "Account" has been unlocked successfully.
    [Tue Nov 23 10:02:01 GMT 2010]A cube refresh operation will not be performed.
    [Tue Nov 23 10:02:01 GMT 2010]Create security filters operation will not be performed.
    [Tue Nov 23 10:02:01 GMT 2010]Examine the Essbase log files for status if Essbase data was loaded.
    [Tue Nov 23 10:02:01 GMT 2010]Planning Outline load process finished (with no data load as specified (/N)). 1 data record was read, 1 data record was processed, 1 was accepted, 0 were rejected.
    There you go, no problems.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Data Integrator 11.7.3: Error 70602

    Hi,
    I have scheduled my entire production environment in one job: many levels, many workflows, dataflows, scripts, variables and so on. After an upgrade from 11.7.2 to 11.7.3 the job stops every night when initiating one particular workflow and dataflow. It's a Sybase repo.
    Here is the part of the log where the dataflow initiates and fails. Has anybody had the same experience, or does anyone know what this Sybase error 70602 often relates to in BODI?
    23687     1     WORKFLOW     04.12.2008 06:29:17     Work flow <EDW_FAKTA_OPPDRAG_FEILMELDING_WF> is started.
    4139     1     DATAFLOW     04.12.2008 06:29:23     Process to execute data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> is started.
    4139     1     DATAFLOW     04.12.2008 06:29:29     Data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> is started.
    4139     1     DATAFLOW     04.12.2008 06:29:29     Cache statistics for data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> are not available to be used for optimization and need
    4139     1     DATAFLOW     04.12.2008 06:29:29     to be collected before they can be used.
    4139     1     DATAFLOW     04.12.2008 06:29:29     Data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> using PAGEABLE Cache with <3443 MB> buffer pool.
    4139     1     DATAFLOW     04.12.2008 06:31:47     Data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> is terminated due to error <70602>.
    4139     1     DATAFLOW     04.12.2008 06:31:47     Process to execute data flow <EDW_FAKT_OPPDRAG_FEILMELDING_AKTIV_DF> is completed.
    23687     1     WORKFLOW     04.12.2008 06:31:48     Work flow <EDW_FAKTA_OPPDRAG_FEILMELDING_WF> is terminated due to an error <70602>.
    23687     1     WORKFLOW     04.12.2008 06:31:48     Work flow <FAKTA_2_KJORING> is terminated due to an error <70602>.
    23687     1     WORKFLOW     04.12.2008 06:31:48     Work flow <EDW_WF> is terminated due to an error <70602>.
    23687     1     JOB     04.12.2008 06:31:48     Job <DW_Job> is terminated due to error <70602>.
    regards,
    -jem

    Hi Manoj,
    It takes some time to test this out because running dataflows separately does not give the same results or an authentic setting, and the test environment is poor. We are therefore adjusting and watching the prod job, which runs every night on the Sybase repo. What we did:
    - The first night we disabled both quality stats and sample data for all validation objects in the job, and the job completed successfully.
    - The second night we activated the quality stats for all validation objects in the job, and the job completed successfully.
    - Then we took out a dataflow into a new and separate job for a stress test against the repo, running 50 days of data, with only the quality stats still checked. It failed with this error:
    13328 1 RUN-050406 13.12.2008 12:42:45 |Session New_Job2|Workflow EDW_FAKTA_ORDRE_KONTERING_WF|Dataflow EDW_FAKT_ORDRE_KONTERING_DF
    13328 1 RUN-050406 13.12.2008 12:42:45 Data flow <EDW_FAKT_ORDRE_KONTERING_DF> received a bad system message. Message text from the child process is <blank message,
    13328 1 RUN-050406 13.12.2008 12:42:45 possibly due to insufficient memory>. The process executing data flow <EDW_FAKT_ORDRE_KONTERING_DF> has died abnormally. For
    13328 1 RUN-050406 13.12.2008 12:42:45 NT, please check errorlog.txt. For HPUX, please check stack_trace.txt. Please notify Customer Support.
    13328 1 RUN-050409 13.12.2008 12:42:46 |Session New_Job2|Workflow EDW_FAKTA_ORDRE_KONTERING_WF
    13328 1 RUN-050409 13.12.2008 12:42:46 The job process could not communicate with the data flow <EDW_FAKT_ORDRE_KONTERING_DF> process. For details, see previously
    13328 1 RUN-050409 13.12.2008 12:42:46 logged error <50406>.
    - We then unchecked the quality stats and the job completed successfully.
    - Finally we held the prod job for 4 days, to put more data/stress on the repo (still with only the quality stats checked), and the job completed successfully.
    Tonight we will try to turn on sample data for the stage workflow.
    But when the job did fail in the beginning, it failed on dataflows in the edw workflow. That means it ran through the stage workflow with both quality stats and sample data. (As mentioned in an earlier post, stage and edw are different schemas in the data warehouse database, Oracle 10g, and the stage jobs read from post and the edw jobs from stage.)
    One difference I can see is that the validations in stage are pretty simple and have "clean" ports, read: nice formats, and we are just checking incoming values from source-post for missing values (NULL). In the edw dataflows the validations are more complex and we often test on lookup returns (if-exists variants), and the format of the return values is not always nice; for instance, Number values in Oracle (large integer contents) are converted to decimal (28,7). We have not changed that. But again, if we run these dataflows manually - not scheduled - they run through.
    Forgot to mention that the job has 135 dataflows, roughly 50 of them with validations.
    I just wonder how robust this feature is, and how it really works. For example, how do the good rows impact the traffic against the repo? Not at all, or may a huge amount of good data slow down the stats counts/updates? What about "bad formats"? Is converting port formats an issue, against the repo tables I mean?
    Any parameters to adjust in general?
    I guess Sybase is not the most robust platform either. What about DS 3.0? Is it more robust?
    I also adjusted the length and type of characters in the validation names, so they are pretty simple now. It's not easy to get any detailed traces in prod with a huge amount of data, but I will try.
    Regards,
    - jem

  • Data Integrator import XSD / DTD error

    Hi experts,
    Really need your help here!  I am trying to import an XSD or DTD schema into BODI.  I have both the XSD and DTD, which I generated from the XML source file using Altova XMLSpy.  BODI allows me to select the root element, but when I run the import, the memory usage on my PC for the BODI service climbs to 1.5 GB, after which the following error is returned:
    Unknown error when importing metadata. (BODI-1112416)
    When I clear this error and try any other operation in the DI Designer I get out-of-memory errors and have to close and restart the application.  I am running a 4 GB RAM, Intel Core Duo, Windows 7 laptop.  I suspect the XSD / DTD structure is simply too large for BODI to parse.  The XSD and DTD are about 500 KB in size and contain over 16,000 lines when viewed in a text editor!
    Any suggestions on how to process this XML in BODI would be greatly appreciated.
    Regards,
    Johann

    Hi,
    replace choice with sequence:
    <xsd:complexType>
         <xsd:choice>
              <xsd:element name="A" type="xsd:string" />
              <xsd:element name="B" type="xsd:string" maxOccurs="unbounded" />
         </xsd:choice>
    </xsd:complexType>
    is handled identically to
    <xsd:complexType>
         <xsd:sequence>
              <xsd:element name="A" type="xsd:string" minOccurs="0" />
              <xsd:element name="B" type="xsd:string" minOccurs="0" maxOccurs="unbounded" />
         </xsd:sequence>
    </xsd:complexType>
    Regards
    Tarun Telang

  • Finding Error while creating Data Integrator Repository

    Hi,
    I am working with SAP BO Data Integrator. I have created the databases and their logins, and installed SQL Server. While installing Data Integrator I got the following error when creating a new repository:
    Cannot open connection to the repository.  The error message from the underlying DBMS is <ODBC call <SQLDriverConnect> for data source <MSSERVER\SQLEXPRESS> failed: <[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'AIO_REPO_IDOC'. The user is not associated with a trusted SQL Server connection.>. Notify Customer Support.>. (BODI-20006)
    Can anyone resolve this problem?
    Edited by: sap_beginnner on Aug 9, 2010 4:10 PM

    Hi,
    I used SQL Server Authentication to log on to the databases, and for the DI version I am using SAP BusinessObjects XI 3.2.
    I tried again by deleting all the databases and then creating the same databases, but creating the repository for AIO_REPO_IDOC in the Repository Manager gives the following error:
    Cannot open connection to the repository.  The error message from the underlying DBMS is <ODBC call <SQLDriverConnect> for data source <IDHASOFT238933\SQLEXPRESS> failed: <[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'AIO_REPO_IDOC'. The user is not associated with a trusted SQL Server connection.>. Notify Customer Support.>. (BODI-20006) 
    An error occurred during creation of the local repository. (BODI-300054)
    I also tried accessing the databases through another tool, and that works successfully.
    Edited by: sap_beginnner on Aug 10, 2010 8:58 AM
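    For what it's worth, "The user is not associated with a trusted SQL Server connection" usually means the instance accepts Windows Authentication only. A minimal sketch of switching it to mixed mode, assuming that is the cause here (the same setting is under Server Properties > Security in SSMS, and the instance must be restarted afterwards):
    USE master;
    EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
         N'Software\Microsoft\MSSQLServer\MSSQLServer',
         N'LoginMode', REG_DWORD, 2;  -- 2 = SQL Server and Windows Authentication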

  • Monitoring and Alerting tools for Business Object Data Integrator

    Hi,
    We have Business Objects DI installed in an AIX 5.3 environment. Please advise on any monitoring and alerting tools for DI:
    tools to monitor the Job Servers (that the job service and web services are running properly) and also to monitor long-running extraction jobs and alert on them.
    Thanks.
    Raj.

    Hi Thomas,
    Though both tools aim at data integration, their purposes are different.
    SAP BODS / BOBJ Data Integrator is an ETL tool: it combines data residing in different sources and provides users with a unified view of the data.
    SAP PI is a middleware/EAI tool: it provides a mechanism/technology that connects different parts of an application or a series of applications.
    SAP BOBJ Data Integrator:
    Bulk data integration
    Extraction, transformation and loading
    Deals with huge volumes of data
    SAP PI:
    Process-based integration of application data
    Middleware which integrates various project applications into a common package that can be operated/accessed as a single application
    SAP NetWeaver PI is SAP's implementation of Service-Oriented Architecture (SOA) middleware and facilitates the integration of business processes that span different departments, organizations, or companies.
    SAP PI uses SAP NetWeaver Application Server and message servers, where in failover cases the messages can be resent.
    The differences above should give you more clarity.
    The verdict: both tools can be used for data transfer/migration; choosing between them depends on the business requirement.
    Please reply if you need more information.
    Regards
    Murthy

  • Using the cache in Data Integrator

    Hi,
    I'm trying to understand whether it is possible to restrict Data Integrator's use of cache when loading the initial data sources, to limit the use of server resources.
    I understand from the manual that I can set the dataflow cache type option to Pageable or In-Memory. That option does not solve my problem: I would like to avoid loading all of the cached data sources.
    Is it possible to query objects directly, without first loading all the data sources into tables?

    ...base / Open SQL Statement etc.) The first time I defined the system everything was fine, but when I changed the database (using MS Access) the "Open SQL Statement" would show the tables but not their columns. I'm using Win98 Second Edition / TestStand 1.0.1.

    Hello Kitty -
    Certainly it is unusual that you can still see the tables available in your MS Access database but cannot see the columns. I am assuming you are configuring an Open Statement step and are trying to use the ring control to select columns from your table?
    Can you tell me more about the changes you made to your file when you 'changed' it with MS Access? What version of Access are you using? What happens if you try to manually type an SQL string into the Open Statement dialog, such as...
    "SELECT UUT_RESULT.TEST_SOCKET_INDEX, UUT_RESULT.UUT_STATUS, UUT_RESULT.START_DATE_TIME FROM UUT_RESULT"
    Is it able to find the columns even if it can't display them? I am worried that maybe you are using a version of MS Access that is too new for the version of TestStand you are running. Has anything else changed aside from the file you are editing?
    Regards,
    -Elaine R.
    National Instruments
    http://www.ni.com/ask

  • Data Integrator & SAP BW

    Hi Experts,
    What's the advantage of using Data Integrator to load R/3 data into SAP BW instead of using the BW ETL process?
    Kind regards,
    Andrew

    Hi,
    I don't know the actual version; back then the name was ActaWorks.
    Advantages? I don't know of any.
    You don't have:
    - nice monitoring
    - integration with process chains / your data flows (eventing)
    And:
    - you must install and control a new server/system
    - you must learn a new development environment
    - the generated ABAP is very bad
    - you quickly reach the development limits of the tool
    Sven

  • Error when Bulk load hierarchy data

    Hi,
    While loading the P6 Reporting databases, the following error message appears at the step in charge of bulk loading hierarchy data into the ODS.
    <04.29.2011 14:03:59> load [INFO] (Message) - === Bulk load hierarchy data into ODS (ETL_LOADWBSHierarchy.ldr)
    <04.29.2011 14:04:26> load [INFO] (Message) - Load completed - logical record count 384102.
    <04.29.2011 14:04:26> load [ERROR] (Message) - SqlLoaderSQL LOADER ACTION FAILED. [control=D:\oracle\app\product\11.1.0\db_1\p6rdb\scripts\DATA_WBSHierarchy.csv.ldr] [file=D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv]
    <04.29.2011 14:04:26> load [INFO] (Progress) - Step 3/9 Part 5/6 - FAILED (-1) (0 hours, 0 minutes, 28 seconds, 16 milliseconds)
    Checking the corresponding error log file (see below), I see that some records are indeed rejected. The question is: how can I identify the source of the problem and fix it?
    SQL*Loader: Release 11.1.0.6.0 - Production on Mon May 2 09:03:22 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Control File:   DATA_WBSHierarchy.csv.ldr
    Character Set UTF16 specified for all input.
    Using character length semantics.
    Byteorder little endian specified.
    Data File:      D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv
    Bad File:     DATA_WBSHierarchy.bad
    Discard File:  none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table WBSHIERARCHY, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name                  Position   Len  Term Encl Datatype
    PARENTOBJECTID                      FIRST     *  WHT      CHARACTER
    PARENTPROJECTID                      NEXT     *  WHT      CHARACTER
    PARENTSEQUENCENUMBER                 NEXT     *  WHT      CHARACTER
    PARENTNAME                           NEXT     *  WHT      CHARACTER
    PARENTID                             NEXT     *  WHT      CHARACTER
    CHILDOBJECTID                        NEXT     *  WHT      CHARACTER
    CHILDPROJECTID                       NEXT     *  WHT      CHARACTER
    CHILDSEQUENCENUMBER                  NEXT     *  WHT      CHARACTER
    CHILDNAME                            NEXT     *  WHT      CHARACTER
    CHILDID                              NEXT     *  WHT      CHARACTER
    PARENTLEVELSBELOWROOT                NEXT     *  WHT      CHARACTER
    CHILDLEVELSBELOWROOT                 NEXT     *  WHT      CHARACTER
    LEVELSBETWEEN                        NEXT     *  WHT      CHARACTER
    CHILDHASCHILDREN                     NEXT     *  WHT      CHARACTER
    FULLPATHNAME                         NEXT  8000  WHT      CHARACTER
    SKEY                                                      SEQUENCE (MAX, 1)
    value used for ROWS parameter changed from 64 to 21
    Record 14359: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 14360: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 14361: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 27457: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 27458: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 27459: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 38775: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 38776: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 38777: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 52411: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 52412: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 52413: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 114619: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 114620: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 127921: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 127922: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 164588: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 164589: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 171322: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 171323: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 186779: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 186780: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 208687: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 208688: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 221167: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 221168: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Record 246951: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
    ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
    Record 246952: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
    ORA-01722: invalid number
    Table WBSHIERARCHY:
    384074 Rows successfully loaded.
    28 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array:                 244377 bytes(21 rows)
    Read   buffer bytes: 1048576
    Total logical records skipped:          0
    Total logical records read:        384102
    Total logical records rejected:        28
    Total logical records discarded:        0
    Run began on Mon May 02 09:03:22 2011
    Run ended on Mon May 02 09:04:07 2011
    Elapsed time was:     00:00:44.99
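    The rejections point at two data problems in the extract: NULLs in PARENTLEVELSBELOWROOT (ORA-01400) and non-numeric values in PARENTOBJECTID (ORA-01722). A minimal sketch for locating the offending rows, assuming the extract is also available in a queryable staging table (stg_wbshierarchy is a placeholder name):
    -- Rows that the loader would reject
    SELECT parentobjectid, parentlevelsbelowroot
      FROM stg_wbshierarchy
     WHERE parentlevelsbelowroot IS NULL                         -- ORA-01400 candidates
        OR NOT REGEXP_LIKE(TRIM(parentobjectid), '^-?[0-9]+$');  -- ORA-01722 candidates
    The rejected records are also written verbatim to the bad file named in the log (DATA_WBSHierarchy.bad), which is usually the quickest place to inspect them.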

    Hi Mandeep,
    Thanks for the information.
    But it still does not seem to work.
    Actually, I have Group ID and Group Name as display fields in the hierarchy table.
    Group ID I have mapped directly to Group ID.
    I have created a split hierarchy of Group Name and mapped it.
    I have also made all the option configurations as per your suggestions, but it still does not work.
    Can you please help?
    Thanks,
    Priya.

  • Error while running bulk load utility for account data with CSV file

    Hi All,
    I'm trying to run the bulk load utility for account data using a CSV file, but I'm getting the following error:
    ERROR ==> The number of CSV files provided as input does not match with the number of account tables.
    Thanks in advance.

    Please check your child table.
    http://docs.oracle.com/cd/E28389_01/doc.1111/e14309/bulkload.htm#CHDCGGDA
    -kuldeep

  • Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).

    Hi,
    I have a file where fields are wrapped with ".
    =========== file sample
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    ==========
    I have a .NET method that removes the wrap characters and writes out a file without them.
    ======================
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    ======================
    The .NET code is here:
    ========================================
    using System;
    using System.IO;

    public static string RemoveCharacter(string sFileName, char cRemoveChar)
    {
        object objLock = new object();
        // Build the output path once, so the file written is the same file returned
        string sOutFile = Path.Combine(Path.GetDirectoryName(sFileName), Guid.NewGuid().ToString());
        lock (objLock)
        {
            FileStream objInputFile = null, objOutFile = null;
            try
            {
                objInputFile = new FileStream(sFileName, FileMode.Open);
                objOutFile = new FileStream(sOutFile, FileMode.Create);
                // Copy byte by byte, skipping every occurrence of cRemoveChar
                int nByteRead;
                while ((nByteRead = objInputFile.ReadByte()) != -1)
                    if (nByteRead != (int)cRemoveChar)
                        objOutFile.WriteByte((byte)nByteRead);
            }
            finally
            {
                if (objInputFile != null) objInputFile.Close();
                if (objOutFile != null) objOutFile.Close();
            }
        }
        return sOutFile;
    }
    ==================================
    However, when I run the bulk load utility I get this error:
    =======================================
    Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).
    ==========================================
    The BULK INSERT statement is as follows:
    =========================================
     BULK INSERT Temp
     FROM '<file name>'
     WITH (FIELDTERMINATOR = ','
         , KEEPNULLS)
    ==========================================
    Does anybody know what is happening and what needs to be done?
    PLEASE HELP
    Thanks in advance 
    Vikram

    To load that file with BULK INSERT, use this format file:
    9.0
    4
    1 SQLCHAR 0 0 "\""      0 ""    ""
    2 SQLCHAR 0 0 "\",\""   1 col1  Latin1_General_CI_AS
    3 SQLCHAR 0 0 "\",\""   2 col2  Latin1_General_CI_AS
    4 SQLCHAR 0 0 "\"\r\n"  3 col3  Latin1_General_CI_AS
    Note that the format file defines four fields while the file only seems to have three. The format file defines an empty field before the first quote.
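    For reference, a minimal sketch of how the format file would be wired into the statement (the paths here are placeholders):
    BULK INSERT Temp
    FROM 'C:\data\input.csv'
    WITH (FORMATFILE = 'C:\data\quoted.fmt', KEEPNULLS);
    This way the quotes are consumed as field terminators and never reach the table, so no .NET pre-processing pass is needed.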
    Or, since you already have a .NET program, use a stored procedure with table-valued parameter instead. I have an example of how to do this here:
    http://www.sommarskog.se/arrays-in-sql-2008.html
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SSRS 2005 report: Cannot bulk load Operating system error code 5(Access is denied.)

    I built an SSRS 2005 report, which calls a stored proc on SQL Server 2005. The proc contains the following code:
    CREATE TABLE #promo (promo VARCHAR(1000))
    BULK INSERT #promo
    FROM '\\aseposretail\c$\nz\promo_names.txt'
    WITH
    (
    --FIELDTERMINATOR = '',
    ROWTERMINATOR = '\n'
    )
    SELECT * FROM #promo
    It's OK when I manually execute the proc in SSMS.
    When I try to run the report from BIDS I get the following error:
    Cannot bulk load because the file "\\aseposretail\c$\nz\promo_names.txt" could not be opened. Operating system error code 5 (Access is denied.).
    Note: I have googled a bit and seen many questions on this, but they are not relevant because I CAN run the code with no problem in SSMS. It's SSRS that has the issue. I know little about SSRS security.

    I'm having the same type of issue.  I can bulk load the same file into the same table on the same server using the same login on one workstation, but not on another.  I get this error:
    Msg 4861, Level 16, State 1, Line 1
    Cannot bulk load because the file "\\xxx\abc.txt" could not be opened. Operating system error code 5(Access is denied.).
    I've checked the SQL client versions and they are the same, and I've also set the client connection to TCP/IP only in the SQL Server Configuration Manager. Still, this one workstation gets the error. Since the same login is being used on both workstations and it works on one but not the other, the issue is not a permissions issue. I can also have another user log in on the bad workstation and have the bulk load fail, but when they log into their regular workstation it works fine. Any ideas on what the client configuration issue is? These are the version numbers for Management Studio:
    Microsoft SQL Server Management Studio 9.00.3042.00
    Microsoft Analysis Services Client Tools 2005.090.3042.00
    Microsoft Data Access Components (MDAC) 2000.085.1132.00 (xpsp.080413-0852)
    Microsoft MSXML 2.6 3.0 5.0 6.0
    Microsoft Internet Explorer 6.0.2900.5512
    Microsoft .NET Framework 2.0.50727.1433
    Operating System 5.1.2600
    Thanks,
    MWise
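    A pattern like this (same login, works from one workstation but not another) is often the NTLM double hop: with Windows authentication over NTLM, SQL Server cannot forward the caller's credentials to the UNC share, so the file open fails with error 5, while a Kerberos-authenticated or SQL-authenticated session succeeds. A minimal check to run from each workstation, showing how the current session actually authenticated:
    -- Returns SQL, NTLM, or KERBEROS for the current connection
    SELECT auth_scheme
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;
    If the failing workstation reports NTLM and the working one KERBEROS, the difference in client configuration (for example SPN resolution) is the place to look.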

  • Error in loading data using SQL loader

    I am getting an error like 'SQL*Loader-350: syntax error; illegal combination of non-alphanumeric characters' while loading a file using SQL*Loader on RHEL. The command used to run SQL*Loader is:
    sqlldr userid=<username>/<password> control=data.ctl
    The control file, data.ctl, is:
    LOAD data
    infile '/home/oraprod/data.txt'
    append  into table test
    {empid terminated by ',',
    fname terminated by ',',
    lname terminated by ',',
    salary terminated by whitespace}
    The data.txt file is:
    1,Kaushal,halani,5000
    2,Chetan,halani,1000
    I hope my question is clear.
    Please reply to my query.
    Regards

    Replace "{" with "(" (and "}" with ")") in your control file:
    LOAD data
    infile 'c:\data.txt'
    append  into table emp_t
    (empid terminated by ',',
    fname terminated by ',',
    lname terminated by ',',
    salary terminated by whitespace)
    C:\>sqlldr user/pwd@database control=c.ctl
    SQL*Loader: Release 10.2.0.3.0 - Production on Wed Nov 13 10:10:24 2013
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 1
    Commit point reached - logical record count 2
    SQL> select * from emp_t;
         EMPID FNAME                LNAME                    SALARY
             1 Kaushal              halani                     5000
             2 Chetan               halani                     1000
    Best regards
    Mohamed Houri

  • Bulk load option Data Services Oracle

    Hello,
    I'm trying to use the "Bulk load" option, but it doesn't work.
    Error message: <DBS-070301: Oracle <DB_INFO> error message for operation <OCIDirPathPrepare>: <ORA-00942: table or view does not exist>>
    Is it necessary to grant DBA rights to the Oracle user declared in the datastore?
    The version is : Data Services XI 3.1
    The database is : Oracle9i Enterprise Edition Release 9.2.0.7.0
    Thanks for your answer.

    This also happens when there is a mismatch between the Oracle client and server versions, for example an Oracle 9i client against a 10g server.
    What is your Oracle server version?
    You can also check the similar post in the DI BOB forum:
    http://www.forumtopics.com/busobj/viewtopic.php?t=122242
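    It may also be worth confirming which table the direct-path loader can actually see: ORA-00942 during <OCIDirPathPrepare> can simply mean a schema/owner mismatch for the datastore user. A minimal sketch, run as the datastore user (the table name is a placeholder):
    -- Which owners expose a table with this name to the datastore user?
    SELECT owner, table_name
    FROM all_tables
    WHERE table_name = 'YOUR_TARGET_TABLE';
    If the target appears only under another owner, qualify it with that owner in the datastore, or grant the needed access.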

  • Bulk loading of Customer data into Application

    Hi Guys,
    I am going on with the development of the Teleservice module on a new instance.
    Now I need to migrate the data from the old instance to the new instance.
    Please let me know whether I have to use only APIs to create the customers in Apps, or whether I can bulk load into the seeded tables directly.
    This has to include Service Requests data as well.
    Please let me know if there is any integration violation if we go with bulk loading the data directly.

    You do not need to develop code for loading customer data anymore. Oracle has provided the Bulk Import functionality in 11.5.8 for importing customer information (using the Oracle Customers Online / Oracle Data Librarian modules). If you would like to create accounts in addition to customer parties, you will have to use the TCA V2 APIs or the customer interface program. For migrating the service requests, I guess the only option is to use APIs. HTH, Venit
