Batch Processing Insertion (Avoiding Integrity Constraint Rejection)

Hi!
To all of you: I'm new to this community. I have a question and hope you can resolve my problem.
How do I insert data from one table into another? Suppose I have two tables, A and B, and I want to insert the data from A into B. Table B has a constraint, so when I insert rows into B and any constraint violation occurs, no row is inserted into the destination table and the insertion process aborts. But I want the insertion process to continue, so that all records are inserted into table B except those that cause a constraint violation. At the same time I need information about those rejected records. How can I do it? Waiting for your reply.
Satyaki.

Hi!
Thanks for your reply. But I think BULK is useful for loading mass data into a table or a PL/SQL table. My issue is different: it is constraint violations during insertion. I want to continue my insertion process even if a constraint violation occurs. Like SQL*Loader, which loads the successful records and puts the rejected records in a bad file, I want this behavior - but implemented in SQL or in PL/SQL.
Satyaki.
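
What is described above is exactly what Oracle's DML error logging (10g and later) does in plain SQL: good rows are inserted, violating rows land in an error table along with the Oracle error, much like SQL*Loader's bad file. A minimal sketch, assuming tables A and B have compatible column lists; ERR$_B is the default error-table name generated by DBMS_ERRLOG:

    -- one-time setup: create an error table for B (named ERR$_B by default)
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'B');
    END;
    /
    -- load everything that passes the constraints; violating rows go to ERR$_B
    INSERT INTO b
    SELECT * FROM a
    LOG ERRORS INTO err$_b ('run_1') REJECT LIMIT UNLIMITED;
    -- inspect the rejected records together with the error for each row
    SELECT ora_err_number$, ora_err_mesg$
    FROM   err$_b
    WHERE  ora_err_tag$ = 'run_1';

The BULK route dismissed above can also do it: FORALL ... SAVE EXCEPTIONS keeps going past failing rows and records them in SQL%BULK_EXCEPTIONS. A sketch under the same assumptions:

    DECLARE
      TYPE t_rows IS TABLE OF a%ROWTYPE;
      l_rows      t_rows;
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: array DML errors
    BEGIN
      SELECT * BULK COLLECT INTO l_rows FROM a;
      BEGIN
        FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
          INSERT INTO b VALUES l_rows(i);
      EXCEPTION
        WHEN bulk_errors THEN
          -- report each rejected row instead of aborting the whole load
          FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE('Rejected row #'
              || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ': '
              || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
          END LOOP;
      END;
    END;
    /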

Similar Messages

  • Batch Processing when Integration script is used

    Can we set up batch processing when we are using integration scripts to connect to source tables? If yes, what settings are required?
    Thanks

    You need to create an empty flat file in the Openbatch folder (assuming you are using the single period load). The format for the file name is in the admin guide. I generally add this to the batch script before the batch process section.

  • Digital Signature automatically inserted for batch processing

    hey guys... hope you can shed some light...
    Is it possible to generate a non-interactive form with a digital signature already in-place?
    We're generating invoices in batch processing and need to have a digital signature in place when we email the PDF to our vendor. The digital signature will always be the same.
    I did see this in the LCD help:
    "You can use Adobe document services to sign a document, validate a
    signature, select the collection of objects to lock, or certify a PDF
    document programmatically. This workflow is ideal when a large number
    of documents require a signature because they can be signed in a batch
    process.
    To sign PDF documents in Adobe document services, configure the web
    service for secure access using the secure sockets layer (SSL). For
    details about this configuration, see the Adobe document services
    Configuration documentation."
    I have a link to the older ADS config document [-Here-|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/30a9630b-4f89-2a10-6fab-e311b3ffd9a6] and am in the process of looking through it.
    However, has anyone done something like this already?

    Otto,
    thanks!
    I hadn't checked out the demo - I didn't even know there was a demo program provided. BTW, it's FP_TEST_07.
    I do believe you're correct that it needs to be an interactive form - that won't be an issue for us.
    There is some documentation floating around that we're trying to piece together - we have our Basis team configuring the ADS for SSL connections.
    From what I've seen, I think we have to store our signature in the ADS and attach it at run time.
    Too many moving parts - I don't like it when I have to get more than one group involved to complete a development item.
    I'll keep this thread updated as we progress.

  • Best practices for batch processing without SSIS

    Hi,
    The gist of this question is, in general, how should a C# client insert/update batches of records using stored procedures? The ideas I can think of are:
    1) create 1 SP with a parameter of type XML, and pass say 100 records at a time, on 1 thread. The SP reads the XML as a table and does a single INSERT.
    2) create 1 SP with many parameters, that inserts 1 record. I can either build a big block of EXEC statements for say 100 records at a time, or call the SP one at a time, on 1 thread. Obviously this seems the slowest.
    3) Parallel processing version of either of the above: Pass 100 records at a time via XML parameter, big block of EXEC statements, or 1 at a time, and use PLINQ to make multiple connections to the database.
    The records will be fairly wide, substantial records.
    Which scenario is likely to be fastest and avoid lock contention?
    (We are doing batch processing and there is not a current SSIS infrastructure, so it's manual: fetch data, call web services, update batches.  I need a batch strategy that doesn't involve SSIS - yet).
    Thanks.

    The "streaming" option you mention in your linked thread sounds interesting, is that a way to input millions of rows at once?  Are they not staged into the tempdb?
    The entire TVP is stored in tempdb before the query/proc is executed.  The advantage of the streaming method is that it eliminates the need to load the entire TVP into memory on either the client or server.  The rowset is streamed to the server
    and SQL Server uses the insert bulk method is to store it in tempdb.  Below is an example C# console app that streams 10M rows as a TVP.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Collections;
    using System.Collections.Generic;
    using Microsoft.SqlServer.Server;

    namespace ConsoleApplication1
    {
        class Program
        {
            static string connectionString = @"Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI;";

            static void Main(string[] args)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.usp_tvp_test", connection))
                {
                    // the structured parameter streams rows from the enumerator below
                    command.Parameters.Add("@tvp", SqlDbType.Structured).Value = new Class1();
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();
                    command.ExecuteNonQuery();
                    connection.Close();
                }
            }
        }

        class Class1 : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData[] metaData = new SqlMetaData[1] { new SqlMetaData("col1", System.Data.SqlDbType.Int) };

            // yield 10M rows one at a time so the full set is never held in client memory
            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                for (int i = 0; i < 10000000; ++i)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, i);
                    yield return record;
                }
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                throw new NotImplementedException();
            }
        }
    }
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
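
    For reference, a minimal sketch of the server-side objects the console app assumes; apart from dbo.usp_tvp_test, the parameter @tvp, and column col1, the names here are hypothetical:

        -- table type matching the single int column streamed by the client
        CREATE TYPE dbo.tvp_test_type AS TABLE (col1 int);
        GO
        CREATE TABLE dbo.tvp_test_target (col1 int);
        GO
        CREATE PROCEDURE dbo.usp_tvp_test
            @tvp dbo.tvp_test_type READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- one set-based insert from the streamed rows
            INSERT INTO dbo.tvp_test_target (col1)
            SELECT col1 FROM @tvp;
        END
        GO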

  • Data import as a batch process - Issue on successive runs

    Hi,
    We have a requirement to create a batch process that reads data from REMOTE_TABLE on a remote database system and inserts all the rows into LOCAL_TABLE on our local database.
    Scenario:
    When the batch runs for the first time, since LOCAL_TABLE is empty, all the data is read from REMOTE_TABLE and inserted into LOCAL_TABLE.
    But on successive runs, the same set of data is being duplicated in LOCAL_TABLE, which I want to avoid.
    I want to update only those records in LOCAL_TABLE that were updated in REMOTE_TABLE, and I want to insert only those records that were newly created in REMOTE_TABLE.
    How can I push in some business logic?
    Thanks
    Rk.

    Did you think of Materialized Views?
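
    If a materialized view doesn't fit, an incremental MERGE over the database link is the usual alternative. A minimal sketch, assuming a hypothetical key column ID, one payload column COL1, and a DB link named REMOTE_DB:

        MERGE INTO local_table l
        USING (SELECT id, col1 FROM remote_table@remote_db) r
        ON (l.id = r.id)
        WHEN MATCHED THEN
          UPDATE SET l.col1 = r.col1                 -- refresh rows changed remotely
        WHEN NOT MATCHED THEN
          INSERT (id, col1) VALUES (r.id, r.col1);   -- add rows created remotely

    If REMOTE_TABLE carries a last-modified timestamp, adding a clause such as WHERE l.updated_on < r.updated_on to the UPDATE branch avoids rewriting unchanged rows.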

  • Batch processing using JPD suitability

    Hello,
    In our company, we are using Weblogic Integration 8.1.6 for batch DB processing mixed with some business logic. We have a few applications built around this: an event generator is fired and a JPD (with DB controls and other controls calling EJBs and WSs) runs to process large CSV files (8MB) into the DB, with some other queries for business data in between, on a daily basis.
    However, we have been facing (and still are facing) many problems with those JPDs. It seems to me that they are not such a good choice for long-running batches. There is always something: memory is too low, memory is too high (yes! there was a time when we had to lower the instance memory from 1.5GB to 1GB, to prevent it from crashing, after endless trial and error), the transaction timeout is too short (in the descriptors or domain-wide), processes cannot be monitored in the WLI console for unknown reasons, transaction handling is not clear (you cannot avoid transactions inside the JPD; it is forced by default - you can only disable it through the datasource), and more problems that happened throughout the past few years, which required rather tedious trial-and-error solving.
    My question is: are JPDs really suitable/intended for long batch processing? The event generators certainly invite you to use them for that purpose. Is anybody else using them for that? Or is everyone else happy with batch EXEs?

    Please repost your question in the SOA Suite forum:
    SOA Suite
    This forum is for WebLogic Server and readers may not be familiar with WLI.

  • Batch Processing of records using JDBC adapter

    Dear all,
    We are working on JDBC-JDBC scenario and the flow is as below. Source is ORACLE and Target is SQL SERVER.
    We need to pick up 2000 records from the source. PI should split them into 4 batches of 500 each and insert them into the SQL DB. While inserting the records, if there are any errors, those records should be sent back to the source. It is a synchronous scenario.
    1) But as far as I know, the INSERT option cannot be used in Batch Processing mode (from SAP help).
    2) Will the connection break if we are sending the error records back to the sender?
    Any thoughts would be helpful.
    regards,
    babu

    Hi,
    The links below may help you.
    JDBC ADAPTER
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    /people/saravanakumar.kuppusamy2/blog/2005/01/19/rdbms-system-integration-using-xi-30-jdbc-senderreceiver-adapter
    /people/sap.user72/blog/2005/06/01/file-to-jdbc-adapter-using-sap-xi-30 - File to JDBC
    JDBC - http://help.sap.com/saphelp_nw2004s/helpdata/en/64/ce4e886334ec4ea7c2712e11cc567c/content.htm
    multiple tables with JDBC - http://help.sap.com/saphelp_nw04/helpdata/en/2e/96fd3f2d14e869e10000000a155106/content.htm
    JDBC Response /people/swaroopa.vishwanath/blog/2006/12/28/send-rfc-to-sap-xi-150-asynchronous
    Regards
    Pullarao

  • Batch processing Failure

    hey,
    please help me regarding the batch processing failure issue.
    The file is placed in the appropriate folder, but I am getting an error when I execute it.
    The error is:
    ORA-02291: integrity constraint
    Please help me out.

    What is in the user error log? (Tools > View Error Log). This is going to be a bit difficult to troubleshoot over the discussion forum; you may want to open a Service Request to allow an FDM analyst to view the files, user error logs, etc.
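
    If the ORA-02291 text in the log includes the constraint name, a quick data-dictionary lookup shows which parent table it points at. A sketch, with the constraint name from the error message substituted for the hypothetical placeholder:

        SELECT c.constraint_name, c.table_name,
               p.table_name AS parent_table
        FROM   all_constraints c
        JOIN   all_constraints p
               ON  p.owner = c.r_owner
               AND p.constraint_name = c.r_constraint_name
        WHERE  c.constraint_name = 'FK_NAME_FROM_ERROR';  -- hypothetical name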

  • Batch Processing in ODI

    Hi Experts,
    I have a query about batch processing in ODI.
    We want to transform a bulk of data; right now I am using the Merge IKM for integration.
    When the merge step integrates the data into the target table, it does it in one go.
    Suppose during integration there is some problem, like the tablespace being full. At that point the whole merge will roll back.
    Can we commit the data in chunks, or do we need to modify the KMs to introduce this facility?
    I just checked the Array Fetch Size and Batch Update properties of the Data Server, but these do not control how the transactions are committed.
    Thanks in advance.
    Regards,
    Sandeep

    Yes, you are right; I think that option will work when the insert/update statement executes > 1000 times.
    But in our scenario a single insert statement transforms millions of rows.
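
    If modifying the KM is acceptable, the usual pattern is an incremental-commit load in PL/SQL. A minimal sketch, assuming hypothetical STAGING_TABLE (already transformed) and TARGET_TABLE with matching structures; note that committing per chunk trades the all-or-nothing rollback for restartability, so the job must be safely re-runnable:

        DECLARE
          CURSOR c IS SELECT * FROM staging_table;
          TYPE t_rows IS TABLE OF staging_table%ROWTYPE;
          l_rows t_rows;
        BEGIN
          OPEN c;
          LOOP
            FETCH c BULK COLLECT INTO l_rows LIMIT 10000;  -- chunk size
            EXIT WHEN l_rows.COUNT = 0;
            FORALL i IN 1 .. l_rows.COUNT
              INSERT INTO target_table VALUES l_rows(i);
            COMMIT;  -- a tablespace-full error now loses only the current chunk
          END LOOP;
          CLOSE c;
        END;
        /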

  • How to find the table name on which integrity constraint not found

    Hi All
    How to achieve this:
    I have a lot of tables with lots of primary key - foreign key relationships.
    In PL/SQL, when an insert happens in the child table and the corresponding row is not present in the parent table, we get an exception:
    ORA-02291: integrity constraint (user1.ppk) violated - parent key not found
    On this exception, in the exception block, I want to trap the name of the parent table on which the primary key for the particular child table was not found.
    Is it possible to retrieve the parent table in this way? I am looking for a generic PL/SQL code block that can help achieve this.
    Regards

    scott@ORA92> SET SERVEROUTPUT ON
    scott@ORA92> DECLARE
      2    e_no_parent_key EXCEPTION;
      3    PRAGMA            EXCEPTION_INIT (e_no_parent_key, -2291);
      4    v_fk_cons       VARCHAR2 (61);
      5    v_owner            VARCHAR2 (30);
      6    v_parent_table  VARCHAR2 (61);
      7    v_pk_cons       VARCHAR2 (30);
      8    v_parent_column VARCHAR2 (30);
      9  BEGIN
    10    INSERT INTO emp (empno, deptno) VALUES (99, 60);
    11  EXCEPTION
    12    WHEN e_no_parent_key THEN
    13        -- extract schema.constraint_name from sqlerrm:
    14        v_fk_cons:= SUBSTR (SQLERRM,
    15                      INSTR (SQLERRM, '(') + 1,
    16                      INSTR (SQLERRM, ')') - (INSTR (SQLERRM, '(') + 1));
    17        DBMS_OUTPUT.PUT_LINE ('Foreign key constraint violated: ' || v_fk_cons);
    18        -- extract parent schema.table and parent key:
    19        SELECT owner, table_name, constraint_name
    20        INTO     v_owner, v_parent_table, v_pk_cons
    21        FROM     user_constraints
    22        WHERE     (owner, constraint_name) =
    23            (SELECT r_owner, r_constraint_name
    24             FROM     user_constraints
    25             WHERE     owner || '.' || constraint_name = v_fk_cons);
    26        DBMS_OUTPUT.PUT_LINE ('Parent table: ' || v_owner || '.' || v_parent_table);
    27        DBMS_OUTPUT.PUT_LINE ('Parent key: ' || v_owner || '.' || v_pk_cons);
    28        -- extract parent table columns:
    29        FOR rec IN
    30          (SELECT column_name
    31           FROM   user_cons_columns
    32           WHERE  owner = v_owner
    33           AND    table_name = v_parent_table
    34           AND    constraint_name = v_pk_cons)
    35        LOOP
    36          DBMS_OUTPUT.PUT_LINE
    37            ('Parent table column: ' || rec.column_name);
    38        END LOOP;
    39  END;
    40  /
    Foreign key constraint violated: SCOTT.FK_DEPTNO
    Parent table: SCOTT.DEPT
    Parent key: SCOTT.PK_DEPT
    Parent table column: DEPTNO
    PL/SQL procedure successfully completed.

  • Batch process to add Javascript to PDF files

    Hi All,
    I have written a small piece of Javascript for my PDF files. The idea is to add a date stamp to each page of the document before printing. To do this, I have added the following code to the "Document Will Print" action:
    for (var pageNumber = 0; pageNumber < this.numPages; pageNumber++) {
        var dateStamp = this.addField("Date", "text", pageNumber, [700, 10, 500, 40]);
        dateStamp.textSize = 8;
        dateStamp.value = "Date Printed: " + util.printd("dd/mmm/yyyy", new Date());
    }
    My question is this: Does anyone know of a way to batch process a whole directory (of around 600 PDF's) to insert my Javascript into the "Document Will Print" action of each file?
    Many thanks for any information you may have.
    Kind regards,
    Aaron

    > Can I just confirm a few things please? Firstly, should I be going into "Batch Sequences" -> "New Sequence" and selecting "Execute JavaScript" as my sequence type?
    Yes, you are creating a new batch sequence that will use JavaScript.
    > My second question is, how can I insert my body of script into the variable "cScript"? I have quotation marks and other symbols that I imagine I will have to escape if I wish to do this?
    You can either use different quotation marks or use the JavaScript escape character '\' to insert quotation marks.
    Your WillPrint code will only work in the full version of Acrobat and not Reader, because Reader will not allow the addition of fields. Also, each time you print you will be creating duplicate copies of the field. So it might be better to add the form field only in the batch process and then just add the script to populate the date field in the WillPrint action.
    // add the form field to each page of the PDF
    for (var pageNumber = 0; pageNumber < this.numPages; pageNumber++) {
        var dateStamp = this.addField("Date", "text", pageNumber, [700, 10, 500, 40]);
        dateStamp.textSize = 8;
    }
    // populate the field at print time; look the field up by name, since the
    // batch-time variable dateStamp no longer exists when WillPrint runs
    this.setAction("WillPrint", "this.getField(\"Date\").value = \"Date Printed: \" + util.printd(\"dd/mmm/yyyy\", new Date());");

  • How to use the same (selection) paths in multiple images for batch processing?

    I have a whole bunch of photos I need to batch process, but when creating an action for this, I stumbled upon the problem that the paths I selected with the pen tool and saved under the "Paths" tab need to be in every image to be used in the action. I tried to solve it for some time but couldn't figure it out. Any ideas?
    Many Thanks
    Carlos

    Batch processing usually involves using actions you have recorded. In an action you can insert a path that can be used while processing documents. Paths have a fixed size, so you may want to only process documents that have the same size. Look in the Actions palette fly-out menu for Insert Path. It inserts/records the current document's work path into the action being worked on, and when the action is played it inserts the path into the document as the current work path.

  • How to : only one user in a branch can do the batch process at a time ?

    dear gurus,
    In an ADF web application accessed by multiple branches (20), there is a batch process to create invoices from orders. There are 3-5 users in a branch; one user can process about 30-50 orders to become invoices.
    The business rule is: in one batch process, each user must produce invoices with gapless invoice numbers, e.g. user_1 creates 30 invoices no. 001-030, user_2 creates 40 invoices no. 031-070, and so on.
    So we want to ensure that, at any time, only one user in a branch can run the batch process. User_2 must wait until user_1 is done to start the process.
    We use the table below to maintain the latest invoice number created:
    CREATE TABLE doc_number
    ( DOC_TYPE     VARCHAR2(6 BYTE) NOT NULL,
      BRANCH_CODE  VARCHAR2(6 BYTE) NOT NULL,
      DOC_NBR      NUMBER(12) DEFAULT 0,
      CONSTRAINT "DOC_NUMBER_PK" PRIMARY KEY (DOC_TYPE, BRANCH_CODE)
    )
    The question is:
    I am thinking of locking a specific row of the doc_number table (SELECT ... FOR UPDATE WAIT 10) at the beginning of the batch process stored procedure.
    But what if the session/connection is left orphaned and the row is locked forever before the DBA kills it?
    So is there a better approach to limit the users' execution of the process?
    Thank you very much for your help,
    Krist

    Forget it. See http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4343369880986.
    However, if you are dead set on doing this old-fashioned thing of trying to emulate paper invoices from a numbered tablet, you could always just let everyone generate all of their invoices with no invoice number whenever they want, then at some time (say 21:00 every day), take a lock on the table and apply invoice numbers to all of the records with a null invoice number -- which will take only a few centiseconds at most. And let the users/system print only invoices that have numbers assigned. This way, you pretty much eliminate the concurrency issue of various branches generating invoices only in their time window -- yuk, what an awful way to have to do business.
    And completely forget about users or branches having their own range of numbers. That is completely nuts. Your database already knows which branch owns which invoice, right??
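
    For completeness, a minimal sketch of that end-of-day numbering pass, assuming a hypothetical INVOICES table with a nullable INVOICE_NBR column and a CREATED_ON timestamp. The exclusive lock serializes the pass, and the numbers stay gapless because they are assigned in a single transaction:

        DECLARE
          l_next invoices.invoice_nbr%TYPE;
        BEGIN
          LOCK TABLE invoices IN EXCLUSIVE MODE;
          SELECT NVL(MAX(invoice_nbr), 0) INTO l_next FROM invoices;
          FOR rec IN (SELECT rowid AS rid
                      FROM   invoices
                      WHERE  invoice_nbr IS NULL
                      ORDER  BY created_on) LOOP
            l_next := l_next + 1;
            UPDATE invoices SET invoice_nbr = l_next WHERE rowid = rec.rid;
          END LOOP;
          COMMIT;  -- releases the lock; all new numbers become visible at once
        END;
        /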

  • SAP R/3 vs XI Batch Processing

    We are planning to move some of our existing interfaces between legacy systems and R/3 to be processed through XI. We would like to do this in such a manner that the business doesn't get impacted much, while also keeping the project costs to a minimum.
    Current interfaces use batch processing in R/3, with thousands of records sitting in a file ftp'd from the legacy system; a pre-edit ABAP program validates the data and posts transactions in R/3 using BDC sessions. Errors are reported through a custom application built in ABAP. Users process the BDC session and correct the errors to post the documents. In some cases we use BAPIs, but when the BAPI errors out, a BDC session is created for processing the errors.
    We are on XI 3.0 SP12 and R/3 4.6B.
    Can XI do all the stuff that R/3 does today in terms of batch processing, error correction by business users, etc.? What are the limitations, and are there any workarounds to overcome them?

    XI is not meant for that and cannot do that. XI is an integration engine and does not replace R/3 work.
    What it means: XI can receive data and modify the data in mapping and deliver it to another system and invoke a process in case of R/3.
    XI has four adapters for other SAP systems.
    1. IDoc
    2. RFC
    3. XI Proxy
    4. File Adapter (I don't think you want to create a file again, but one of my previous clients used this option)
    I guess you know about the first two.
    The third one is, when you receive the data from source system, modify it according to the requirements and pass it to an ABAP Class where you can write your own code.
    Let me simplify this for your process.
    When a file is received, XI can pick up the file and send it to R/3 System using XI proxy (in internal tables or structures) which will be executing a class. Within that class you can code to pass that data to your ABAP program with SUBMIT PROGRAM. This is just an example.
    This cannot replace your BDC process, but it can be made to do the same.
    regards
    Shravan

  • Batch process crop Pdfs in background

    Hello
    Have read various posts on here, and from what I read I'm not sure it's possible... but thought I'd ask!
    I am using Acrobat XI Pro, and after each print production cycle I have a huge batch of PDFs with printer marks/bleeds etc. which I want to crop by batch process down to A4. In our previous version of Acrobat we could configure the action, point it to a source folder, and run the action, saving the cropped files into a separate processed folder. Easy.
    Using Action Wizard, I haven't been able to find a way to specify a destination folder - running the Wizard either prompts us to overwrite the source file, or I can add a Save action and rename the file. This isn't ideal, as we want to retain the same naming convention.
    I guess I could rename the files, then use Bridge to strip the added prefix out - but I want to avoid over-complicating what used to be a one-click process :-(
    Can anyone let me know if they have found a way around this.
    Much appreciated
    Sharon

    Change the Save command to "Save to local folder" (by clicking the arrow icon next to it), and then select where you want to save the edited files.
