Multiple package calls on afterCommit causing duplicate records

I have a JSP page inserting into table A. The application module has an afterCommit method that calls a package using createCallableStatement. This package call inserts into table B.
We found duplicate records being created in table B, with the same create time on both duplicates. To try to prevent the duplicates, I put a check in the package before the insert: if the record already exists, don't insert. This did not stop the duplicates, which implies that there are two instances of the package executing in parallel in different sessions. They can't see each other's uncommitted inserts and so both go ahead and insert.
Is this possible? Could it be creating multiple sessions?

Hi,
to work with multiple sessions, there have to be multiple database sessions. While e.g. AM pooling could have additional connections to the database open, I don't see how a commit in one session would invoke the same in another.
Frank

Similar Messages

  • Causing duplicate records

    Hi,
generally we use the following API to insert records, but it is causing duplication of records. Please give me some guidelines.
    /* Formatted on 2006/04/21 15:31 (Formatter Plus v4.5.2) */
    CREATE OR REPLACE PACKAGE BODY cec_ipc_transactions_api
    AS
    PROCEDURE LOG (
    p_transaction_type VARCHAR2,
    p_order_number NUMBER,
    p_header_id NUMBER,
    p_order_id VARCHAR2,
    p_original_order_id VARCHAR2,
    p_version NUMBER,
    p_cancel_flag VARCHAR2,
    p_success OUT BOOLEAN,
    p_error_message OUT VARCHAR2,
    p_order_source IN VARCHAR2
    DEFAULT 'MarketPlace Router Store',
    p_business_entity IN VARCHAR2 DEFAULT 'UNK'
    )
    IS
    x_header_id NUMBER;
    x_order_number NUMBER;
    x_statement_id NUMBER;
    x_task_id NUMBER;
    x_org_id NUMBER;
    x_transaction_type VARCHAR2 (60);
    x_transaction_type_old VARCHAR2 (60) := NULL;
    x_error_message VARCHAR2 (2000);
    x_coe_flag VARCHAR (1) := NULL;
    -- Added on 10 FEB 2005
    x_errcode VARCHAR2 (240);
    x_errmsg VARCHAR2 (240);
    v_procedure_name VARCHAR2 (80) := '<11i Procedure Name>';
    v_tran_control_rec xxcca_iei_ne_util.cca_iei_tran_control_rec;
    -- 5/16/99 -- add open_flag = 'Y' to prevent DTS errors when cancelling orders
    -- Everest Migration 10.7 to 11i ---Change Date '14 JUL 2004'
    -- Get header id of respective order for update with nowait option from oe_order_headers_all
    CURSOR lock_header (p_header_id NUMBER)
    IS
    SELECT ooha.header_id header_id
    FROM oe_order_headers_all ooha,
    xxcca_oe_order_headers_ext xoohe
    WHERE ooha.header_id = p_header_id
    AND ooha.header_id = xoohe.header_id
    AND NVL (xoohe.cms_customer_version, -999) < p_version
    AND ooha.open_flag
    || '' = 'Y'
    FOR UPDATE OF xoohe.cms_customer_version NOWAIT;
    BEGIN
    cec_debug_pk.DEBUG ('Begin CEC_IPC_TRANSACTIONS_API.log');
    CEC_DEBUG_PK.debug('EAH ord: ' || p_order_number
    || ' hdrid: ' || p_header_id
    || ' ordid: ' || p_order_id
    || ' origid: ' || p_original_order_id
    || ' vers: ' || p_version
    || ' src: ' || p_order_source
    || ' ent: ' || p_business_entity);
    IF p_header_id IS NULL
    THEN
    BEGIN
    x_statement_id := 10;
    -- Everest Migration 10.7 to 11i ---Change Date '14 JUL 2004'
    -- Get the header id, org id, order number of the respective order from oe_order_headers_all
    SELECT header_id, org_id, order_number
    INTO x_header_id, x_org_id, x_order_number
    FROM oe_order_headers_all
    WHERE order_number = p_order_number;
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    NULL;
    END;
    ELSE
    -- Everest Migration 10.7 to 11i ---Change Date '14 JUL 2004'
    -- Get the org id, order number of the respective order from oe_order_headers_all
    BEGIN
    SELECT org_id, order_number
    INTO x_org_id, x_order_number
    FROM oe_order_headers_all
    WHERE header_id = p_header_id;
    EXCEPTION
    --TD #38404 - Added exception handler
    WHEN NO_DATA_FOUND
    THEN
    NULL;
    END;
    x_header_id := p_header_id;
    END IF;
    IF x_org_id IS NULL AND NVL (p_version, 1) = 1
    THEN
    BEGIN
    SELECT org_id
    INTO x_org_id
    FROM cec_ipc_transactions
    WHERE original_order_id = p_original_order_id
    AND NVL (version, 1) = p_version
    AND org_id > -1000
    AND ROWNUM = 1;
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    NULL;
    END;
    END IF;
    IF x_org_id IS NULL AND NVL (p_version, 1) = 1
    THEN
    BEGIN
    SELECT TO_NUMBER (meaning)
    INTO x_org_id
    FROM cec_lookups
    WHERE lookup_type = 'CEC_ORG_ID'
    AND lookup_code = p_business_entity
    AND ROWNUM = 1;
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    x_org_id := -1000;
    END;
    END IF;
    x_transaction_type := p_transaction_type;
    /* If the transaction type is a terminal transaction type,
    then we try to update the version in xxcca_oe_order_headers_ext.
    If the record is locked, set the transaction type to
    COE_LOCKED. */
    IF x_transaction_type IN
    (cec_globals_pk.coe_manual_rlse, cec_globals_pk.coe_ec_applied)
    THEN
    BEGIN
    FOR c IN lock_header (x_header_id)
    LOOP
    -- Everest Migration 10.7 to 11i ---Change Date '14 JUL 2004'
    -- Update the cms customer version of the respective order in xxcca_oe_order_headers_ext table
    UPDATE xxcca_oe_order_headers_ext
    SET cms_customer_version = p_version
    WHERE header_id = c.header_id;
    END LOOP;
    EXCEPTION
    WHEN OTHERS
    THEN
    -- Handle locked record
    IF SQLCODE = -54
    THEN
    x_transaction_type := cec_globals_pk.coe_locked;
    -- if the record is locked, we want to save the
    -- old trans type so we can later restore it
    x_transaction_type_old := p_transaction_type;
    END IF;
    END;
    END IF;
    -- If the change is "done" either by automatic application or by
    -- release of all IC change holds, we want to send out the
    -- order acknowledgments
    -- This is done by inserting into CCA_NOTIFY_CHANGES table
    IF x_transaction_type IN (cec_globals_pk.coe_manual_rlse,
    cec_globals_pk.coe_ec_applied)
    AND NVL (p_version, 1) > 1
    THEN
    -- Commented the insert and changed as follows on 10 FEB 2005
    /*INSERT INTO cca_iei_tran_control
    (source_system,
    source_id,
    control_id,
    request_start_time,
    tran_type,
    tran_status,
    tran_ack_q,
    tran_code)
    VALUES ('IC_CHANGE',
    cca_iei_tran_control_s1.NEXTVAL,
    x_order_number,
    SYSDATE,
    'ORDER',
    'NEW',
    999,
    'IC_CHANGE'); */
    -- Added on 10 FEB 2005
    SELECT cca_iei_tran_control_s1.NEXTVAL
    INTO v_tran_control_rec.DATA.source_id
    FROM DUAL;
    v_tran_control_rec.DATA.control_id := x_order_number;
    v_tran_control_rec.DATA.header_id := x_header_id;
    v_tran_control_rec.DATA.tran_ack_q := 999;
    v_tran_control_rec.DATA.tran_code := 'IC_CHANGE';
    v_tran_control_rec.DATA.tran_status := 'NEW';
    v_tran_control_rec.DATA.tran_type := 'ORDER';
    v_tran_control_rec.DATA.request_start_time := SYSDATE;
    v_tran_control_rec.DATA.source_system := 'IC_CHANGE';
    v_tran_control_rec.DATA.attribute5 := v_procedure_name;
    xxcca_iei_ne_util.insert_cca_iei_tran_control (
    v_tran_control_rec,
    x_errcode,
    x_errmsg);
    END IF;
    IF x_transaction_type = cec_globals_pk.coe_print_and_tear
    AND (NVL (p_version, 1) > 1 AND x_header_id IS NULL)
    THEN
    x_transaction_type := cec_globals_pk.coe_ec_applied;
    END IF;
    /* Try to update existing record in CEC_IPC_TRANSACTIONS for this
    header_id / version combination.
    If none is found, then create a new one.
    Usually, the Query will create the record and others will update
    It is possible that another transaction type will result in the
    creation of the CEC_IPC_TRANSACTIONS record if the order was not
    imported yet or if ERP was down at the time of Query. */
    x_statement_id := 20;
    x_coe_flag := NULL;
    IF x_transaction_type = cec_globals_pk.coe_submitted
    THEN
    BEGIN
    SELECT 'X'
    INTO x_coe_flag
    FROM cec_ipc_transactions
    WHERE status_code
    || '' NOT IN (cec_globals_pk.coe_aborted,
    cec_globals_pk.coe_queried,
    cec_globals_pk.coe_expired,
    cec_globals_pk.coe_locked,
    cec_globals_pk.coe_wait,
    cec_globals_pk.coe_wait_for_version,
    cec_globals_pk.coe_wait_for_cust_data,
    cec_globals_pk.coe_importing_cust_data)
    AND ( erp_header_id = x_header_id
    OR ( original_order_id = p_original_order_id
    AND order_source <> 'ALL'))
    AND NVL (version, 1) = NVL (p_version, 1);
    EXCEPTION
    WHEN OTHERS
    THEN
    x_coe_flag := NULL;
    END;
    END IF;
    IF NVL (x_coe_flag, 'YY') != 'X'
    THEN
    UPDATE cec_ipc_transactions
    SET status_code = x_transaction_type,
    status_code_old = x_transaction_type_old,
    operation_code = DECODE (
    operation_code,
    'CANCEL', operation_code,
    DECODE (
    p_version,
    NULL, 'INSERT',
    1, 'INSERT',
    DECODE (
    p_cancel_flag,
    'Y', 'CANCEL',
    'UPDATE'))),
    change_query_date = DECODE (
    x_transaction_type,
    cec_globals_pk.coe_queried, SYSDATE,
    change_query_date),
    change_submit_date =
    NVL (
    change_submit_date,
    DECODE (
    x_transaction_type,
    cec_globals_pk.coe_submitted, SYSDATE,
    cec_globals_pk.coe_print_and_tear, SYSDATE,
    change_submit_date)),
    change_complete_date =
    DECODE (
    x_transaction_type,
    cec_globals_pk.coe_manual_rlse, SYSDATE,
    cec_globals_pk.coe_aborted, SYSDATE,
    cec_globals_pk.coe_expired, SYSDATE,
    cec_globals_pk.coe_ec_applied, SYSDATE,
    cec_globals_pk.coe_print_and_tear, NULL,
    cec_globals_pk.coe_submitted, NULL,
    change_complete_date)
    WHERE ( ( original_order_id = p_original_order_id
    AND order_source <> 'ALL')
    OR erp_header_id = x_header_id)
    AND NVL (version, 1) = NVL (p_version, 1);
    -- If update failed, then insert
    IF SQL%ROWCOUNT = 0
    THEN
    x_statement_id := 30;
    INSERT INTO cec_ipc_transactions
    (ec_header_id, version, status_code, operation_code,
    load_code, change_query_date, change_submit_date,
    change_complete_date, erp_header_id, error_message,
    order_id, original_order_id, org_id, order_source,
    creation_date)
    SELECT cec_headers_s.NEXTVAL, p_version, x_transaction_type,
    DECODE (
    p_version,
    NULL, 'INSERT',
    1, 'INSERT',
    DECODE (p_cancel_flag, 'Y', 'CANCEL', 'UPDATE')),
    'ELECTRONIC_COMMERCE',
    DECODE (
    x_transaction_type,
    cec_globals_pk.coe_queried, SYSDATE,
    NULL),
    DECODE (
    x_transaction_type,
    cec_globals_pk.coe_submitted, SYSDATE,
    cec_globals_pk.coe_print_and_tear, SYSDATE,
    NULL),
    DECODE (
    x_transaction_type,
    cec_globals_pk.coe_manual_rlse, SYSDATE,
    cec_globals_pk.coe_aborted, SYSDATE,
    cec_globals_pk.coe_expired, SYSDATE,
    cec_globals_pk.coe_ec_applied, SYSDATE,
    NULL),
    x_header_id, x_error_message, p_order_id,
    p_original_order_id, x_org_id, p_order_source, SYSDATE
    FROM DUAL;
    -- Since we bump the version of the change hold, we want to send
    -- all previous print and tear versions to manually released as well
    ELSIF x_transaction_type = cec_globals_pk.coe_manual_rlse
    THEN
    UPDATE cec_ipc_transactions
    SET status_code = x_transaction_type,
    change_complete_date = SYSDATE
    WHERE status_code
    || '' IN (cec_globals_pk.coe_print_and_tear,
    cec_globals_pk.coe_queried)
    AND ( erp_header_id = x_header_id
    OR original_order_id = p_original_order_id)
    AND NVL (version, 1) < NVL (p_version, 1);
    -- Fix incident 3241
    ELSIF x_transaction_type = cec_globals_pk.coe_aborted
    THEN
    UPDATE cec_ipc_transactions
    SET status_code = x_transaction_type,
    change_complete_date = SYSDATE
    WHERE status_code
    || '' IN (cec_globals_pk.coe_queried)
    AND ( erp_header_id = x_header_id
    OR ( original_order_id = p_original_order_id
    AND order_source <> 'ALL'))
    AND NVL (version, 1) < NVL (p_version, 1);
    -- Fixed incident 2771
    ELSIF x_transaction_type = cec_globals_pk.coe_submitted
    THEN
    UPDATE cec_ipc_transactions
    SET status_code = cec_globals_pk.coe_expired,
    change_complete_date = SYSDATE
    WHERE status_code
    || '' IN (cec_globals_pk.coe_queried)
    AND ( erp_header_id = x_header_id
    OR ( original_order_id = p_original_order_id
    AND order_source <> 'ALL'))
    AND NVL (version, 1) < NVL (p_version, 1);
    END IF;
    END IF;
    p_success := TRUE;
    p_error_message := x_error_message;
    cec_debug_pk.DEBUG ('End CEC_IPC_TRANSACTIONS_API.log');
    EXCEPTION
    WHEN OTHERS
    THEN
    p_success := FALSE;
    cec_debug_pk.get_error (
    'GENERIC_SQL_ERROR',
    x_error_message,
    x_task_id);
    cec_debug_pk.set_token (
    x_error_message,
    'PROCEDURE_NAME',
    'CEC_IPC_TRANSACTIONS_API.log');
    cec_debug_pk.set_token (
    x_error_message,
    'STATEMENT_ID',
    TO_CHAR (x_statement_id));
    cec_debug_pk.set_token (x_error_message, 'SQLCODE', SQLCODE);
    cec_debug_pk.set_token (x_error_message, 'SQLERRM', SQLERRM);
    p_error_message := x_error_message;
    END LOG;
    END cec_ipc_transactions_api;
    /

      CURSOR lock_header (p_header_id NUMBER) IS
        SELECT ooha.header_id header_id
          FROM oe_order_headers_all ooha, xxcca_oe_order_headers_ext xoohe
         WHERE ooha.header_id = p_header_id
           AND ooha.header_id = xoohe.header_id
           AND NVL (xoohe.cms_customer_version, -999) < p_version
           AND ooha.open_flag|| '' = 'Y'
        FOR UPDATE OF xoohe.cms_customer_version NOWAIT;

    I'm not sure why you had this predicate in the WHERE clause of your cursor:

        AND ooha.open_flag || '' = 'Y'

    when you can simply use

        AND ooha.open_flag = 'Y'

    or

        AND NVL (ooha.open_flag, 'X') = 'Y'

    Also, you only had one loop structure, which updates and does not insert. Could it be that the procedure LOG is being called from other PL/SQL within its loop structure, causing the duplicate inserts?

  • JCA DB Adapter process the same file twice in a clustered envrionment causing duplicate records

    We have 2 Nodes in a OSB Cluster and for one of the webservices we use DB Adapter to process the records and send it to external system.
    Whenever we hit the exception below, the same records are reprocessed/retried and the message is processed/sent by both nodes, which causes duplicate records.
    'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <1e3df38069b38bbe:3fdbec90:14c7cca0f5f:-8000-0000000000000006> <1428935826370> <BEA-000000> <<oracle.tip.adapter.db.InboundWork handleException> BINDING.JCA-11624
    DBActivationSpec Polling Exception.
    Query name: [XYZDBAdapterSelect], Descriptor name: [XYZDBAdapter.ABCEnrollment]. Polling the database for events failed on this iteration.
    Caused by JTS transaction was marked rollback only.
    Please advise!
    Thanks!

    You have to use the MOVE operation: if a remote exception occurs, move the file to some other folder and then move it back to the folder where file pickup starts.
    That way the same file will be picked up the next time polling happens.
    It is considered good etiquette to reward answerers with points (as "helpful" - 5 pts - or "correct" - 10pts).
    Thanks,
    Vijay
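    Independent of the file-move workaround: after a JTA rollback the adapter redelivers, so the endpoint effectively sees at-least-once delivery and must deduplicate on a business key. A minimal in-memory sketch of that guard (not taken from the thread — a real consumer would persist the ledger in the same transaction as the send, and `enrollment_id` is a made-up key):

```python
# Hypothetical processed-key ledger: under at-least-once delivery
# (cluster failover, JTA rollback + retry), each polled record may
# arrive more than once, so the consumer deduplicates on a business key.
processed = set()

def handle(message):
    """Process a polled record; skip keys we have already handled."""
    key = message["enrollment_id"]          # hypothetical natural key
    if key in processed:
        return "skipped-duplicate"
    processed.add(key)
    # ... send to the external system here ...
    return "processed"

print(handle({"enrollment_id": 101}))       # first delivery
print(handle({"enrollment_id": 101}))       # redelivery after rollback
```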

  • HTTP Protocol caused Duplicate Records

    Hello,
    Our company has just developed a J2ME program for sales personnel to place orders from a mobile phone. The program is developed using NetBeans 6. The connection to the server uses the HTTP protocol via GPRS or 3G.
    There is a problem we face: sometimes there are duplicate entries from one mobile request. For example, a salesperson places an order and presses the send button. The server incoming log sometimes shows multiple requests from this single request.
    public void send() throws IOException {
        //mainS = null;
        Days = "";
        if (mainS == null) {
            getAlert1().setString("Invalid Input lala");
            throw new IOException("Invalid Input lala");
        }
        String onum = getonum();
        StringBuffer b = new StringBuffer();
        String url = server + "?act=i&unm=" + uname + "&upw=" + upw + "&icode=" + Icode + "&onum=" + onum + "&dd=" + getDates() + "&num=" + mainS;
        System.out.println(url);
        try {
            // Connect to the server
            HttpConnection hc = (HttpConnection) Connector.open(url);
            // Authentication
            System.out.println(hc.getResponseCode());
            if (hc.getResponseCode() == HttpConnection.HTTP_OK) {
                InputStream is = hc.openDataInputStream();
                int ch;
                while ((ch = is.read()) != -1) {
                    b.append((char) ch);
                }
                System.out.println(b.toString().trim());
                returnS = b.toString().trim();
                is.close();
            } else {
                getAlert1().setString("No response from server.");
                hc.close();
                throw new IOException("No response from server.");
            }
            hc.close();
        } catch (Exception ex) {
            getAlert1().setString(ex.getMessage());
            throw new IOException(ex.getMessage());
        }
    }
    Please help.

    yhkok wrote:
    Hi.
    Thank you for your reply. I think the problem of the user sending multiple times should not happen, because when a user presses "SEND" to submit the request, they can't press anything while waiting for the confirmation from the server. Perhaps the second issue causes the problem. Can you tell me in more detail how I can solve it, or how I can check whether the program sends multiple times? This happens especially during heavy transaction periods, or when a lot of users send together in a short period. Could it be that while waiting for the server to reply, the user end times out and sends the form again? We are using Nokia 2630 mobile phones loaded with S40 5th edition.
    Regards,
    Kok

    I carefully said the user OR the user browser (whatever that may be) is re-submitting the form.
    Who cares why? It doesn't matter why. It (or someone) just is.
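    One standard guard for retried form posts — sketched here as a general pattern, not something from the thread — is an idempotency token: the client generates one token per logical order, reuses it on every retry, and the server records the token with the result, replaying the stored result for duplicates. A toy sketch of the server side (names are made up):

```python
import uuid

# Server-side sketch: the client attaches one token per logical order
# (generated once, reused on every timeout/retry); the server inserts
# the order only the first time it sees that token.
seen_tokens = {}

def submit_order(token, order):
    if token in seen_tokens:
        return seen_tokens[token]              # replay the original answer
    result = f"order-{len(seen_tokens) + 1}"   # pretend DB insert
    seen_tokens[token] = result
    return result

tok = str(uuid.uuid4())                  # client generates once per order
first = submit_order(tok, {"num": "5"})
retry = submit_order(tok, {"num": "5"})  # GPRS timeout -> phone resends
print(first == retry)                    # same order, no duplicate record
```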

  • Remove duplicate records in Live Office, caused by CR Groups

    hello all
    I have a CR with groups. All works well until I use the report in Live Office, where it duplicates the group data for each of the detail records.
    I have removed the details from the CR report, leaving only the group data, but it still happens.
    Does anyone have a workaround?
    thanks
    g

    Hi,
    First, select the report name from the left panel and check whether the option appears or not,
    or try right-clicking on any report cell, then go to Live Office and Object Properties.
    Second, are you getting duplicate records in this particular report or in all reports? And how many highlight experts are you using in this report?
    Thanks,
    Amit

  • Duplicate records in Fact Tables

    Hi,
    We are using BPC 7.0 MS SP7. BPC created duplicate records in the WB and Fac2 tables. We faced a similar issue before, and the solution was to reboot the server and clean up the additional data created. I don't think it is an issue with the script logic files we have. We had the issue across all applications. Data is fine now after the server reboot and running the same logic files. I want to know if anyone has faced this issue and if there is any solution other than a reboot. I appreciate your help.
    Thanks
    Raj

    Hi Sorin,
    I know this thread is rather old, but I have a problem which is closely related to it and would appreciate your assistance. I have a client running on 7.0 MS who has been using it for the past 3 years.
    It is a heavily customized system with many batch files running daily to update dimensions, copy data and so on. And yes, we do use custom packages that incorporate stored procedures.
    Recently, with no change in the environment, our FACTWB ballooned out of nowhere. The fact table only contains less than 1 GB of data, but FACTWB has 200 GB, and it practically paralyzed the system. There is also an equivalent 300 GB increase in the log files.
    We are not able to find out what caused this, or whether the 200 GB of records in WB are even valid records that are duplicated. Is there a way to troubleshoot this?

  • Oracle 10 -   Avoiding Duplicate Records During Import Process

    I have two databases on different servers (DB1 and DB2) and a dblink connecting the two. In DB2, I have 100 million records in table DB2Target.
    I tried to load 100 million more records from table DB1Source into DB2Target on top of the existing 400 million records, but after an hour I got a network error from DB2.
    The load failed after inserting 70% of the records. Now I have three tasks. First, I have to find the duplicate records between DB1 and DB2. Second, I have to find the remaining 30% of records missing from DB2Target.
    Third, I have to re-load those remaining 30%. What is the best solution?
    SELECT COUNT(*), A, B FROM DB2TARGET
    GROUP BY A, B
    HAVING COUNT(*) > 1
    re-loading:
    MERGE INTO DB2TARGET tgt
    USING DB1SOURCE src
    ON (tgt.A = src.A)
    WHEN NOT MATCHED THEN
    INSERT (tgt.A, tgt.B)
    VALUES (src.A, src.B)

    Thanks for any guidance.

    when I execute this I get the following error message:
    SQL Error: ORA-02064: distributed operation not supported
    02064. 00000 - "distributed operation not supported"
    *Cause:    One of the following unsupported operations was attempted
    1. array execute of a remote update with a subquery that references
    a dblink, or
    2. an update of a long column with bind variable and an update of
    a second column with a subquery that both references a dblink
    and a bind variable, or
    3. a commit is issued in a coordinated session from an RPC procedure
    call with OUT parameters or function call.
    *Action:   simplify remote update statement
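    The reconciliation itself is just set arithmetic on the primary keys, sketched below with toy key sets standing in for the DB1Source / DB2Target keys (this is an illustration, not the poster's SQL — in practice you would compare keys in batches, e.g. via a MINUS query over the dblink, rather than hold 100 million keys in memory):

```python
# Toy key sets standing in for the primary keys of each table.
source_keys = {1, 2, 3, 4, 5}        # DB1SOURCE
target_keys = {1, 2, 4}              # DB2TARGET after the failed load

missing = source_keys - target_keys  # rows still to load (the "30%")
present = source_keys & target_keys  # rows already loaded -> skip these
print(sorted(missing))               # only these need re-loading
print(sorted(present))
```

    Re-loading only `missing` is exactly what a corrected MERGE with `WHEN NOT MATCHED` does in one statement.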

  • Duplicate records returned by db adapter

    Hi,
    We have a DB Adapter design as follows.
    From a J2EE application, an invoking procedure at the adapter is triggered. This invokes the DB adapter, which in turn invokes the implemented procedure, which
    returns a VARRAY.
    The J2EE application is deployed on the AS 10g Portal side.
    The implemented procedure further invokes a legacy procedure which returns a ref cursor. Inside the implemented procedure, the cursor is fetched into a
    record type, and each element of the record type is added to an object (of type MPLUS_MPLUS_BankGurDet_OAI_V1). Each of these objects is added to a VARRAY, and this VARRAY is returned to the invoking procedure.
    The problem we face is that in the returned VARRAY there is always a duplicate of the last record. That is, if we have, say, 5 records, then when we manipulate the VARRAY there will be 6 records, and the sixth record will be a duplicate of the 5th one.
    Why is this happening? I have attached the code I used and the log file. This happens for all cases where a VARRAY is returned; I have attached only one set
    here. The attached log file clearly shows the duplicate records.
    -- All these is defined at the Legacy system
    -- Following is a object definition used by the implemented procedure to return a collection of objects.
    TYPE MPLUS_MPLUS_BankGurDet_OAI_V1 IS OBJECT (
    AG_CODE VARCHAR2(1000),
    AMT_NO NUMBER,
    BANK VARCHAR2(1000),
    VAL_FROM DATE,
    VAL_TO DATE,
    GUAR_NO VARCHAR2(1000)
    );
    -- Following is the definition of the Varray returned by the implemented procedure.
    TYPE MPLUS_MPLUS_BankGurDet_OAI_Arr IS VARRAY(1000) OF MPLUS_MPLUS_BankGurDet_OAI_V1;
    -- This is the implemented procedure called by adapter. This returns a Varray of the above object.(MPLUS_MPLUS_BankGurDet_OAI_V1).
    PACKAGE Mplus AS
    TYPE MY_TYPE IS REF CURSOR;
    PROCEDURE imp_bankGurDet_OAI_V1(
    i_CUSTOMER_CODE IN LONG,
    i_CURRENT_DATE IN DATE,
    o_RESULT OUT MPLUS_MPLUS_BankGurDet_OAI_Arr
    );
    END Mplus ;
    PACKAGE BODY Mplus AS
    PROCEDURE imp_bankGurDet_OAI_V1(
    i_CUSTOMER_CODE IN LONG,
    i_CURRENT_DATE IN DATE,
    o_RESULT OUT MPLUS_MPLUS_BankGurDet_OAI_Arr
    )
    AS
    Type BANK IS RECORD(agencyname VARCHAR2(1000),Amout NUMBER,BankName VARCHAR2(1000),validfrom date , validto date,guarenteeno varchar2(1000));
    RECS MY_TYPE; -- Ref cursor
    BANKDETAILS_REC BANK ;
    BANKDETAILS_OBJ MPLUS_MPLUS_BANKGURDET_OAI_V1;
    i Number;
    dummy number;
    BEGIN
    WF_MP_ADVT.SP_ADV_AGE_BANK_GRTY(i_CUSTOMER_CODE,i_CURRENT_DATE,RECS);
    dummy:= 0;
    BANKDETAILS_OBJ:=new MPLUS_MPLUS_BankGurDet_OAI_V1('',0,'',sysdate,sysdate,'');
    o_RESULT:=MPLUS_MPLUS_BankGurDet_OAI_Arr();
    i:=1;
    LOOP
    fetch RECS INTO BANKDETAILS_REC;
    BANKDETAILS_OBJ.AG_CODE:=BANKDETAILS_REC.agencyname;
    BANKDETAILS_OBJ.AMT_NO:=BANKDETAILS_REC.Amout;
    BANKDETAILS_OBJ.BANK:=BANKDETAILS_REC.BankName;
    BANKDETAILS_OBJ.VAL_FROM:=to_date(BANKDETAILS_REC.validfrom,'dd-mon-yyyy');
    BANKDETAILS_OBJ.VAL_TO:=to_date(BANKDETAILS_REC.validto,'dd-mon-yyyy');
    BANKDETAILS_OBJ.GUAR_NO:=BANKDETAILS_REC.guarenteeno;
    o_RESULT.EXTEND;
    o_RESULT(i):=BANKDETAILS_OBJ;
    i:=i+1;
    EXIT WHEN RECS%NOTFOUND;
    END LOOP;
    END imp_bankGurDet_OAI_V1;
    END Mplus ;
    -- The following is a legacy procedure which return a refcursor which is iterated by the implemented procedure to create object and then this object is added to VARRAY in the.
    PACKAGE WF_MP_ADVT AS
    PROCEDURE SP_ADV_AGE_BANK_GRTY(
              Customer_Code      IN     CHAR,
    curdate in date,
              RESULT          OUT     Record_Type);
    END;
    PACKAGE BODY WF_MP_ADVT AS
    PROCEDURE SP_ADV_AGE_BANK_GRTY(
              Customer_Code      IN     CHAR,
    curdate in date,
              RESULT          OUT     Record_Type) IS
    BEGIN
    -- OPEN RESULT FOR SELECT AG_CODE,AMT_NO,BANK,to_char(VAL_FROM,'dd-mm-yyyy'),to_char(VAL_TO,'dd-mm-yyyy'),GUAR_NO FROM WF_MP_ADVT_BANKGAR WHERE AG_CODE=CUSTOMER_CODE;
    --OPEN RESULT FOR SELECT * FROM WF_MP_ADVT_BANKGAR WHERE AG_CODE=CUSTOMER_CODE;
    OPEN RESULT FOR SELECT AG_CODE,AMT_NO,BANK,to_char(VAL_FROM,'dd-mon-yyyy'),to_char(VAL_TO,'dd-mon-yyyy'),GUAR_NO FROM WF_MP_ADVT_BANKGAR WHERE AG_CODE=CUSTOMER_CODE;
    -- null;
    END SP_ADV_AGE_BANK_GRTY;
    END;
    The log file is as following
    Mplus.bankGurDet:OAI/V1,OAI/V1,false,1
    CUSTOMER_CODE: 1
    CURRENT_DATE: Fri Oct 29 00:00:00 GMT+05:30 2004
    Tue Nov 02 14:55:10 GMT+05:30 2004: InMessageTransformer: got a message for processing.
    Mplus.bankGurDet:OAI/V1,OAI/V1,false,1
    CUSTOMER_CODE: 1
    CURRENT_DATE: Fri Oct 29 00:00:00 GMT+05:30 2004
    Tue Nov 02 14:55:11 GMT+05:30 2004: Inbound Transform Engine: beginning to transform message.
    Tue Nov 02 14:55:11 GMT+05:30 2004: MessageTransformer: Successfully created destination message object with necessary header information but no attribute objects yet.
    Tue Nov 02 14:55:11 GMT+05:30 2004: InMessageTransformer: got a message for processing.
    Mplus.bankGurDet:OAI/V1,OAI/V1,false,1
    CUSTOMER_CODE: 1
    CURRENT_DATE: Fri Oct 29 00:00:00 GMT+05:30 2004
    Tue Nov 02 14:56:14 GMT+05:30 2004: db_bridge_writer_1 wrote the message to the database successfully.
    Tue Nov 02 14:56:14 GMT+05:30 2004: Agent: sending reply message.
    Mplus.bankGurDet:OAI/V1,OAI/V1,true,2
    RESULT[0]
    AG_CODE: 1
    AMT_NO: 200000.0
    BANK: STATE BANK OF INDIA
    VAL_FROM: 0004-04-01
    VAL_TO: 0005-03-31
    GUAR_NO: SBI01
    RESULT[1]
    AG_CODE: 1
    AMT_NO: 200000.0
    BANK: HDFC BANK
    VAL_FROM: 0004-04-01
    VAL_TO: 0005-03-31
    GUAR_NO: SBI01
    RESULT[2]
    AG_CODE: 1
    AMT_NO: 200000.0
    BANK: HDFC BANK
    VAL_FROM: 0004-04-01
    VAL_TO: 0005-03-31
    GUAR_NO: SBI01

    Vinod,
    As far as I can see the problem with duplicating the last record is a result of the EXIT condition in the loop in procedure 'imp_bankGurDet_OAI_V1'. Your loop looks as follows:
    LOOP
    FETCH recs INTO bankdetails_rec;
    bankdetails_obj.ag_code := bankdetails_rec.agencyname;
    bankdetails_obj.amt_no := bankdetails_rec.amout;
    bankdetails_obj.bank := bankdetails_rec.bankname;
    bankdetails_obj.val_from := TO_DATE(bankdetails_rec.validfrom,'DD-MON-YYYY');
    bankdetails_obj.val_to := TO_DATE(bankdetails_rec.validto,'DD-MON-YYYY');
    bankdetails_obj.guar_no := bankdetails_rec.guarenteeno;
    o_result.EXTEND;
    o_result(i):= bankdetails_obj;
    i:=i+1;
    EXIT WHEN recs%NOTFOUND;
    END LOOP;
    The problem is that checking for recs%NOTFOUND at the end of the loop results in going through the loop one more time even though you can't fetch another row from the recs cursor. The solution is to put the EXIT condition right after the FETCH statement. You now exit the loop if you can't fetch another row without assigning the last fetched record to bankdetails_obj again:
    LOOP
    FETCH recs INTO bankdetails_rec;
    EXIT WHEN recs%NOTFOUND;
    bankdetails_obj.ag_code := bankdetails_rec.agencyname;
    bankdetails_obj.amt_no := bankdetails_rec.amout;
    bankdetails_obj.bank := bankdetails_rec.bankname;
    bankdetails_obj.val_from := TO_DATE(bankdetails_rec.validfrom,'DD-MON-YYYY');
    bankdetails_obj.val_to := TO_DATE(bankdetails_rec.validto,'DD-MON-YYYY');
    bankdetails_obj.guar_no := bankdetails_rec.guarenteeno;
    o_result.EXTEND;
    o_result(i):= bankdetails_obj;
    i:=i+1;
    END LOOP;
    CLOSE recs;
    You also might want to consider closing the ref cursor after exiting the loop... 'too many open cursors' is not a good exception to get. ;-)
    Hope this is helpful.
    Thanks,
    Markus
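    The same off-by-one is easy to reproduce outside PL/SQL. A Python sketch of both loop shapes, with `next(it, None)` standing in for FETCH and the `None` check standing in for %NOTFOUND:

```python
# Python analogue of the %NOTFOUND bug: testing "exhausted" AFTER using
# the fetched value processes the final row twice, because a failed
# fetch leaves the previous value in the record variable.
def drain_buggy(rows):
    it = iter(rows)
    out, cur, done = [], None, False
    while True:
        nxt = next(it, None)
        if nxt is not None:
            cur = nxt            # like FETCH: on exhaustion, cur keeps its old value
        else:
            done = True
        out.append(cur)          # the work happens before the exit test
        if done:                 # EXIT WHEN ... at the bottom of the loop
            break
    return out

def drain_fixed(rows):
    it = iter(rows)
    out = []
    while True:
        nxt = next(it, None)
        if nxt is None:          # EXIT right after the FETCH
            break
        out.append(nxt)
    return out

print(drain_buggy(["A", "B", "C"]))   # ['A', 'B', 'C', 'C'] -- last row doubled
print(drain_fixed(["A", "B", "C"]))   # ['A', 'B', 'C']
```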

  • How to suppress duplicate records in rtf templates

    Hi All,
    I am facing an issue with payment reason comments in a check template.
    We are displaying payment reason comments. The issue is that while making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail number text field in the template.
    Could you provide any XML syntax to suppress duplicate records so that only distinct payment reason comments are shown?
    Attached screen shot, template and xml file for your reference.
    Thanks,
    Sagar.

    I have CRXI, so the instructions are for this release
    you can create a formula, I called it cust_Matches
    if = previous () then 'true' else 'false'
    In your GH2 section, right-click the field, select Format Field, and select the Common tab (far left at the top).
    Select the formula button (x-2) to the right of Suppress and in the formula field type in
    {@Cust_Matches} = 'true'
    Now every time {@Cust_Matches} is true, the CustID will be suppressed;
    do the same with the other fields you wish to hide, i.e. Address, City, etc.
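
    The previous()-comparison idea is easy to prototype outside Crystal. A rough Python equivalent of the suppression formula (field names are invented):

    ```python
    def suppress_repeats(values):
        """Blank out a value when it equals the previous row's value,
        mimicking Crystal's previous()-based suppression formula."""
        out = []
        prev = object()  # sentinel that never equals a real value
        for v in values:
            out.append("" if v == prev else v)
            prev = v
        return out

    print(suppress_repeats(["ACME", "ACME", "Globex"]))
    ```

    Note this only hides consecutive repeats, exactly like previous() in Crystal; it assumes the data is sorted or grouped so duplicates are adjacent.
    
    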

  • How to create duplicate records in end routines

    Hi
    Key fields in DSO are:
    Plant
    Storage Location
    MRP Area
    Material
    Changed Date
    Data Fields:
    Safety Stocky
    Service Level
    MRP Type
    Counter_1 (In flow Key figure)
    Counter_2 (Out flow Key Figure)
    n_ctr  (Non Cumulative Key Figure)
    For every record that comes in, we need to create a duplicate record. For the original record, we set Counter_1 to 1 and Counter_2 to 0. For the duplicate record, we update Changed_Date to today's date, keep the rest of the values as-is, and set Counter_1 to 0 and Counter_2 to -1. Where is the best place to write this code for the DSO? Is it the End
    routine?
    Please let me know some basic idea of the code.

    Hi Uday,
    I have same situation like Suneel and have written your logic in End routine DSO as follows:
    DATA: l_t_duplicate_records TYPE TABLE OF TYS_TG_1,
          l_w_duplicate_record TYPE TYS_TG_1.
    LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
        MOVE-CORRESPONDING <result_fields> TO l_w_duplicate_record.
        <result_fields>-/BIC/ZPP_ICNT = 1.
        <result_fields>-/BIC/ZPP_OCNT = 0.
        l_w_duplicate_record-CH_ON = sy-datum.
        l_w_duplicate_record-/BIC/ZPP_ICNT = 0.
        l_w_duplicate_record-/BIC/ZPP_OCNT = -1.
        APPEND l_w_duplicate_record TO  l_t_duplicate_records.
    ENDLOOP.
    APPEND LINES OF l_t_duplicate_records TO RESULT_PACKAGE.
    I am getting below error:
    Duplicate data record detected (DS ZPP_O01 , data package: 000001 , data record: 4 )     RSODSO_UPDATE     19     
    i have different requirement for date. Actually my requirement is to populate the CH_ON date as mentioned below:
    sort the records by key, take the latest CH_ON value for each unique Plant / Storage Location / Material combination, and populate
    that CH_ON value in the duplicate record.
    Please help me to resolve this issue.
    Thanks,
    Ganga
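
    The in-flow/out-flow duplication logic from the thread can be sketched outside ABAP; here is a rough Python version, with records as dicts and field names (counter_1, counter_2, changed_date) invented to stand in for ZPP_ICNT, ZPP_OCNT and CH_ON:

    ```python
    from datetime import date

    def duplicate_with_counters(records, today=None):
        """For each incoming record, emit the original (in-flow 1, out-flow 0)
        plus a duplicate dated today (in-flow 0, out-flow -1), as described
        in the thread."""
        today = today or date.today().isoformat()
        out = []
        for rec in records:
            orig = dict(rec, counter_1=1, counter_2=0)
            dup = dict(rec, changed_date=today, counter_1=0, counter_2=-1)
            out.append(orig)
            out.append(dup)
        return out

    rows = duplicate_with_counters(
        [{"plant": "P1", "material": "M1", "changed_date": "2015-01-01"}],
        today="2015-06-30",
    )
    ```

    One thing worth noting: Changed Date is part of the DSO key listed above, so when the duplicate's new date collides with an existing record's key, activation reports "Duplicate data record detected" -- which may be what Ganga is hitting.
    
    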

  • Duplicate records problem

    Hi everyone,
    I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here can suggest something to help!
    My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record will appear like this:
    LOAN RECORD NO.     LANGUAGE CODE
      123456                             ENG
      123456                             FRE
    So, although the loan only occurred once I have two instances of it in my report.
    I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
    ENG     1
    FRE      1
    A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
    I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
    Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
    LOAN RECORD     LANGUAGE CODE
      123456                      ENG, FRE
    Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
    Thanks!

    if you create a group by loan
    then create a group by language
    place the values in the group(loan id in the loan header)
    you should only see the loan id 1x.
    place the language in the language group you should only see that one time
    a group header returns the 1st value of a unique id....
    then in order to calculate avoiding the duplicates
    use manual running totals
    create a set for each summary you want- make sure each set has a different variable name
    MANUAL RUNNING TOTALS
    RESET
    The reset formula is placed in a group header or report header to reset the summary to zero for each unique record it groups by.
    whileprintingrecords;
    Numbervar  X := 0;
    CALCULATION
    The calculation is placed adjacent to the field or formula that is being calculated.
    (if there are duplicate values; create a group on the field that is being calculated on. If there are not duplicate records, the detail section is used.
    whileprintingrecords;
    Numbervar X := X + 1; // or X := X + {field} to sum a field or formula
    DISPLAY
    The display is the sum of what is being calculated. This is placed in a group, page or report footer. (generally placed in the group footer of the group header where the reset is placed.)
    whileprintingrecords;
    Numbervar  X;
    X
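
    The manual-running-total approach boils down to counting each loan only once, under its first listed language. A Python sketch of that logic (loan IDs and languages are illustrative):

    ```python
    def loans_per_language(rows):
        """rows: (loan_id, language) pairs, one row per language of a loan.
        Count each loan only under its first listed language, so the
        repeated language rows never inflate the group totals."""
        seen = set()
        totals = {}
        for loan_id, lang in rows:
            if loan_id in seen:   # duplicate row for an already-counted loan
                continue
            seen.add(loan_id)
            totals[lang] = totals.get(lang, 0) + 1
        return totals

    print(loans_per_language([(123456, "ENG"), (123456, "FRE")]))
    ```

    This gives ENG a count of 1 and FRE nothing for loan 123456, so the per-language figures sum to the true distinct loan count -- the property the cumulative group totals in the report were missing.
    
    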

  • How to find out duplicate record contained in a flat file

    Hi Experts,
    For my project I have written a program for flat file upload.
    Requirement 1
    In the flat file there may be some duplicate record like:
    Field1   Field2
    11        test1
    11        test2
    12        test3
    13        test4
    Field1 is primary key.
    Can you please let me know how I can find out the duplicate record.
    Requirement 2
    The flat file contains the header row as shown above
    Field1   Field2
    How can our program skip this record and start reading / inserting records from row no. 2, i.e.
    11        test1
    onwards.
    Thanks
    S
    FORM upload1.
    DATA : wf_title TYPE string,
    lt_filetab TYPE filetable,
    l_separator TYPE char01,
    l_action TYPE i,
    l_count TYPE i,
    ls_filetab TYPE file_table,
    wf_delemt TYPE rollname,
    wa_fieldcat TYPE lvc_s_fcat,
    tb_fieldcat TYPE lvc_t_fcat,
    rows_read TYPE i,
    p_error TYPE char01,
    l_file TYPE string.
    DATA: wf_object(30) TYPE c,
    wf_tablnm TYPE rsdchkview.
    wf_object = 'myprogram'.
    DATA i TYPE i.
    DATA:
    lr_mdmt TYPE REF TO cl_rsdmd_mdmt,
    lr_mdmtr TYPE REF TO cl_rsdmd_mdmtr,
    lt_idocstate TYPE rsarr_t_idocstate,
    lv_subrc TYPE sysubrc.
    TYPES : BEGIN OF test_struc,
    /bic/myprogram TYPE /bic/oimyprogram,
    txtmd TYPE rstxtmd,
    END OF test_struc.
    DATA : tb_assum TYPE TABLE OF /bic/pmyprogram.
    DATA: wa_ztext TYPE /bic/tmyprogram,
    myprogram_temp TYPE ziott_assum,
    wa_myprogram TYPE /bic/pmyprogram.
    DATA : test_upload TYPE STANDARD TABLE OF test_struc,
    wa2 TYPE test_struc.
    DATA : wa_test_upload TYPE test_struc,
    ztable_data TYPE TABLE OF /bic/pmyprogram,
    ztable_text TYPE TABLE OF /bic/tmyprogram,
    wa_upld_text TYPE /bic/tmyprogram,
    wa_upld_data TYPE /bic/pmyprogram,
    t_assum TYPE ziott_assum.
    DATA : wa1 LIKE test_upload.
    wf_title = text-026.
    CALL METHOD cl_gui_frontend_services=>file_open_dialog
    EXPORTING
    window_title = wf_title
    default_extension = 'txt'
    file_filter = 'Tab delimited Text Files (*.txt)'
    CHANGING
    file_table = lt_filetab
    rc = l_count
    user_action = l_action
    EXCEPTIONS
    file_open_dialog_failed = 1
    cntl_error = 2
    OTHERS = 3. "#EC NOTEXT
    IF sy-subrc <> 0.
    EXIT.
    ENDIF.
    LOOP AT lt_filetab INTO ls_filetab.
    l_file = ls_filetab.
    ENDLOOP.
    CHECK l_action = 0.
    IF l_file IS INITIAL.
    EXIT.
    ENDIF.
    l_separator = 'X'.
    wa_fieldcat-fieldname = 'test'.
    wa_fieldcat-dd_roll = wf_delemt.
    APPEND wa_fieldcat TO tb_fieldcat.
    CALL FUNCTION 'MESSAGES_INITIALIZE'.
    CLEAR wa_test_upload.
    * Upload file from front-end (PC)
    * File format is tab-delimited ASCII
    CALL FUNCTION 'GUI_UPLOAD'
    EXPORTING
    filename = l_file
    has_field_separator = l_separator
    TABLES
    * data_tab = i_mara
    data_tab = test_upload
    EXCEPTIONS
    file_open_error = 1
    file_read_error = 2
    no_batch = 3
    gui_refuse_filetransfer = 4
    invalid_type = 5
    no_authority = 6
    unknown_error = 7
    bad_data_format = 8
    header_not_allowed = 9
    separator_not_allowed = 10
    header_too_long = 11
    unknown_dp_error = 12
    access_denied = 13
    dp_out_of_memory = 14
    disk_full = 15
    dp_timeout = 16
    OTHERS = 17.
    IF sy-subrc <> 0.
    EXIT.
    ELSE.
    CALL FUNCTION 'MESSAGES_INITIALIZE'.
    IF test_upload IS NOT INITIAL.
    DESCRIBE TABLE test_upload LINES rows_read.
    CLEAR : wa_test_upload,wa_upld_data.
    LOOP AT test_upload INTO wa_test_upload.
    CLEAR : p_error.
    rows_read = sy-tabix.
    IF wa_test_upload-/bic/myprogram IS INITIAL.
    p_error = 'X'.
    MESSAGE s153 WITH wa_test_upload-/bic/myprogram sy-tabix.
    CONTINUE.
    ELSE.
    TRANSLATE wa_test_upload-/bic/myprogram TO UPPER CASE.
    wa_upld_text-txtmd = wa_test_upload-txtmd.
    wa_upld_text-txtsh = wa_test_upload-txtmd.
    wa_upld_text-langu = sy-langu.
    wa_upld_data-chrt_accts = 'xyz1'.
    wa_upld_data-co_area = '12'.
    wa_upld_data-/bic/zxyzbcsg = 'Iy'.
    wa_upld_data-objvers = 'A'.
    wa_upld_data-changed = 'I'.
    wa_upld_data-/bic/zass_mdl = 'rrr'.
    wa_upld_data-/bic/zass_typ = 'I'.
    wa_upld_data-/bic/zdriver = 'yyy'.
    wa_upld_text-langu = sy-langu.
    MOVE-CORRESPONDING wa_test_upload TO wa_upld_data.
    MOVE-CORRESPONDING wa_test_upload TO wa_upld_text.
    APPEND wa_upld_data TO ztable_data.
    APPEND wa_upld_text TO ztable_text.
    ENDIF.
    ENDLOOP.
    DELETE ADJACENT DUPLICATES FROM ztable_data.
    DELETE ADJACENT DUPLICATES FROM ztable_text.
    IF ztable_data IS NOT INITIAL.
    CALL METHOD cl_rsdmd_mdmt=>factory
    EXPORTING
    i_chabasnm = 'myprogram'
    IMPORTING
    e_r_mdmt = lr_mdmt
    EXCEPTIONS
    invalid_iobjnm = 1
    OTHERS = 2.
    CALL FUNCTION 'MESSAGES_INITIALIZE'.
    **Lock the Infoobject to update
    CALL FUNCTION 'RSDG_IOBJ_ENQUEUE'
    EXPORTING
    i_objnm = wf_object
    i_scope = '1'
    i_msgty = rs_c_error
    EXCEPTIONS
    foreign_lock = 1
    sys_failure = 2.
    IF sy-subrc = 1.
    MESSAGE i107(zddd_rr) WITH wf_object sy-msgv2.
    EXIT.
    ELSEIF sy-subrc = 2.
    MESSAGE i108(zddd_rr) WITH wf_object.
    EXIT.
    ENDIF.
    *****Update Master Table
    IF ztable_data IS NOT INITIAL.
    CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
    EXPORTING
    i_iobjnm = 'myprogram'
    i_tabclass = 'M'
    I_T_ATTR = lt_attr
    TABLES
    i_t_table = ztable_data
    EXCEPTIONS
    attribute_name_error = 1
    iobj_not_found = 2
    generate_program_error = 3
    OTHERS = 4.
    IF sy-subrc <> 0.
    CALL FUNCTION 'MESSAGE_STORE'
    EXPORTING
    arbgb = 'zddd_rr'
    msgty = 'E'
    txtnr = '054'
    msgv1 = text-033
    EXCEPTIONS
    OTHERS = 3.
    MESSAGE e054(zddd_rr) WITH 'myprogram'.
    ELSE.
    CALL FUNCTION 'MESSAGE_STORE'
    EXPORTING
    arbgb = 'zddd_rr'
    msgty = 'S'
    txtnr = '053'
    msgv1 = text-033
    EXCEPTIONS
    OTHERS = 3.
    ENDIF.
    *endif.
    *****update Text Table
    IF ztable_text IS NOT INITIAL.
    CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
    EXPORTING
    i_iobjnm = 'myprogram'
    i_tabclass = 'T'
    TABLES
    i_t_table = ztable_text
    EXCEPTIONS
    attribute_name_error = 1
    iobj_not_found = 2
    generate_program_error = 3
    OTHERS = 4.
    IF sy-subrc <> 0.
    CALL FUNCTION 'MESSAGE_STORE'
    EXPORTING
    arbgb = 'zddd_rr'
    msgty = 'E'
    txtnr = '055'
    msgv1 = text-033
    EXCEPTIONS
    OTHERS = 3.
    ENDIF.
    ENDIF.
    ELSE.
    MESSAGE s178(zddd_rr).
    ENDIF.
    ENDIF.
    COMMIT WORK.
    CALL FUNCTION 'RSD_CHKTAB_GET_FOR_CHA_BAS'
    EXPORTING
    i_chabasnm = 'myprogram'
    IMPORTING
    e_chktab = wf_tablnm
    EXCEPTIONS
    name_error = 1.
    IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    ****Release locks on Infoobject
    CALL FUNCTION 'RSDG_IOBJ_DEQUEUE'
    EXPORTING
    i_objnm = 'myprogram'
    i_scope = '1'.
    ENDIF.
    ENDIF.
    PERFORM data_selection .
    PERFORM update_alv_grid_display.
    CALL FUNCTION 'MESSAGES_SHOW'.
    ENDFORM.

    Can you please let me know how I can find out the duplicate record.
    you need to split the records from flat file structure into your internal table ans use a delete ADJACENT duplicates comparing fields
    split flat_str into wa_f1 wa_f2 wa_f2 at tab_space.
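
    Both requirements (spotting duplicate Field1 keys and skipping the header row) are easy to prototype outside ABAP; a Python sketch, assuming tab-delimited lines like the sample data in the question:

    ```python
    def load_flat_file(lines):
        """Skip the header row, split tab-delimited records, and report
        any Field1 (primary key) values that occur more than once."""
        records, duplicates, seen = [], [], set()
        for line in lines[1:]:        # requirement 2: start at row 2
            field1, field2 = line.split("\t")
            if field1 in seen:        # requirement 1: duplicate primary key
                duplicates.append(field1)
            else:
                seen.add(field1)
                records.append((field1, field2))
        return records, duplicates

    recs, dups = load_flat_file(
        ["Field1\tField2", "11\ttest1", "11\ttest2", "12\ttest3"]
    )
    ```

    Unlike DELETE ADJACENT DUPLICATES, this does not require the table to be sorted first, because the seen-set checks every key loaded so far.
    
    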

  • Check for duplicate record in SQL database before doing INSERT

    Hey guys,
           This is part of a PowerShell app doing a SQL insert, but my question really relates to the SQL side. I need to check the database PRIOR to doing the insert for duplicate records, and if the record exists it needs
    to be overwritten. I'm not sure how to accomplish this task. My back end is a SQL Server 2000. I'm piping the data into my insert statement from a PowerShell FileSystemWatcher app. In my scenario, if the file dumped into a directory starts with I it gets
    written to a SQL database; otherwise it gets written to an Access table. I know, silly, but that's the environment I'm in. haha.
    Any help is appreciated.
    Thanks in Advance
    Rich T.
    #### DEFINE WATCH FOLDERS AND DEFAULT FILE EXTENSION TO WATCH FOR ####
                $cofa_folder = '\\cpsfs001\Data_pvs\TestCofA'
                $bulk_folder = '\\cpsfs001\PVS\Subsidiary\Nolwood\McWood\POD'
                $filter = '*.tif'
                $cofa = New-Object IO.FileSystemWatcher $cofa_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
                $bulk = New-Object IO.FileSystemWatcher $bulk_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
    #### CERTIFICATE OF ANALYSIS AND PACKAGE SHIPPER PROCESSING ####
                Register-ObjectEvent $cofa Created -SourceIdentifier COFA/PACKAGE -Action {
           $name = $Event.SourceEventArgs.Name
           $changeType = $Event.SourceEventArgs.ChangeType
           $timeStamp = $Event.TimeGenerated
    #### CERTIFICATE OF ANALYSIS PROCESS BEGINS ####
                $test=$name.StartsWith("I")
         if ($test -eq $true) {
                $pos = $name.IndexOf(".")
           $left=$name.substring(0,$pos)
           $pos = $left.IndexOf("L")
           $tempItem=$left.substring(0,$pos)
           $lot = $left.Substring($pos + 1)
           $item=$tempItem.Substring(1)
                Write-Host "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"  -fore green
                Out-File -FilePath c:\OutputLogs\CofA.csv -Append -InputObject "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"
                start-sleep -s 5
                $conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=PVSNTDB33; Initial Catalog=adagecopy_daily; Integrated Security=TRUE")
                $conn.Open()
                $insert_stmt = "INSERT INTO in_cofa_pvs (in_item_key, in_lot_key, imgfileName, in_cofa_crtdt) VALUES ('$item','$lot','$name','$timestamp')"
                $cmd = $conn.CreateCommand()
                $cmd.CommandText = $insert_stmt
                $cmd.ExecuteNonQuery()
                $conn.Close()
    #### PACKAGE SHIPPER PROCESS BEGINS ####
              } elseif ($test -eq $false) {
                $pos = $name.IndexOf(".")
           $left=$name.substring(0,$pos)
           $pos = $left.IndexOf("O")
           $tempItem=$left.substring(0,$pos)
           $order = $left.Substring($pos + 1)
           $shipid=$tempItem.Substring(1)
                Write-Host "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"  -fore green
                Out-File -FilePath c:\OutputLogs\PackageShipper.csv -Append -InputObject "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"
    Rich Thompson

    Hi
    Since SQL Server 2000 has been out of support, I recommend you to upgrade the SQL Server 2000 to a higher version, such as SQL Server 2005 or SQL Server 2008.
    According to your description, you can try the following methods to check duplicate record in SQL Server.
    1. You can use
    RAISERROR to check for the duplicate record: if it exists then RAISERROR, otherwise insert accordingly. A code block is given below:
    IF EXISTS (SELECT 1 FROM TableName AS t
               WHERE t.Column1 = @Column1
                 AND t.Column2 = @Column2)
    BEGIN
    RAISERROR('Duplicate records', 18, 1)
    END
    ELSE
    BEGIN
    INSERT INTO TableName (Column1, Column2, Column3)
    SELECT @Column1, @Column2, @Column3
    END
    2. Also you can create UNIQUE INDEX or UNIQUE CONSTRAINT on the column of a table, when you try to INSERT a value that conflicts with the INDEX/CONSTRAINT, an exception will be thrown. 
    Add the unique index:
    CREATE UNIQUE INDEX Unique_Index_name ON TableName(ColumnName)
    Add the unique constraint:
    ALTER TABLE TableName
    ADD CONSTRAINT Unique_Contraint_Name
    UNIQUE (ColumnName)
    Thanks
    Lydia Zhang
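
    Lydia's second suggestion (let a UNIQUE constraint reject the duplicate) is usually the race-free option, and catching the violation gives you the "overwrite instead" behaviour Rich asked for. A sketch using Python's sqlite3 as a stand-in for the SQL Server table (table and column names taken from the insert statement above; the overwrite logic is an assumption):

    ```python
    import sqlite3

    # In-memory stand-in for the SQL Server table; the UNIQUE constraint
    # makes the database itself reject duplicate keys, even if two
    # watcher events fire at once.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE in_cofa_pvs (in_item_key TEXT, in_lot_key TEXT, "
        "imgfilename TEXT, UNIQUE (in_item_key, in_lot_key))"
    )

    def insert_cofa(item, lot, name):
        """Insert a record; on a duplicate key, overwrite the filename."""
        try:
            conn.execute(
                "INSERT INTO in_cofa_pvs VALUES (?, ?, ?)", (item, lot, name)
            )
            return True                   # inserted as a new record
        except sqlite3.IntegrityError:    # duplicate: overwrite instead
            conn.execute(
                "UPDATE in_cofa_pvs SET imgfilename = ? "
                "WHERE in_item_key = ? AND in_lot_key = ?", (name, item, lot)
            )
            return False                  # existing record overwritten

    insert_cofa("I100", "L1", "a.tif")
    insert_cofa("I100", "L1", "b.tif")   # second call updates, not duplicates
    ```

    The parameterized queries also avoid the injection risk of building the INSERT string from file names, which the original PowerShell snippet does.
    
    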

  • Write Optimized DSO Duplicate records

    Hi,
    We are facing problem while doing delta to an write optimized data store object.
    It gives error "Duplicate data record detected (DS <ODS name> , data package: 000001 , data record: 294 )
    But it cannot have a duplicate record, since the data comes from a DSO, and
    we have also checked the particular record in the PSA and couldn't find a duplicate there.
    There is no much complex routine also.....
    Have any one ever faced this issue and got the solution? Please let me know if yes.
    Thanks
    VJ

    Ravi,
    We have checked that there is no duplicate records in PSA.
    Also the source ODS has two keys and target ODS is having three Keys.
    Also the records that it has mentioned are having Record mode "N" New.
    Seems to be an issue with the write-optimized DSO.
    Regards
    VJ

  • Duplicate Records & CFTRANSACTION

    Maybe I'm missing the obvious on this, as I've never had a
    problem with this before, but recently I developed a custom tag
    that logs and blocks the IP addresses of all these recent DECLARE
    SQL injection attempts.
    The issue is however that my blocked IP address seems to be
    getting duplicates here and there. My "datetime" field in the
    database show the duplicates are all added to the database the
    exact same second. What gives?
    Shouldn't CFTRANSACTION be preventing such a thing, even if
    multiple injection attempts come at the same time?

    I've always coded my applications where my primary key is my
    database's autonumber field, and instead insure my coding is solid
    enough to prevent duplicate records from appearing. Oddly enough
    it's worked flawlessly until now.
    Would not ColdFusion throw errors if I made the "ip_address"
    field my primary key, and my code allowed for a duplicate record to
    be entered? Am I interpreting the CFTRANSACTION code to do
    something it doesn't do?
    Also, the duplicates aren't causing problems, so a DISTINCT
    select isn't necessary. The IP address is blocked whether one
    record or fifty exist in my blocked_ip_addresses table. My goal is
    just not to waste database space.
    Any further help you can provide is MUCH appreciated!
    Thanks!
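
    A transaction only isolates its own statements: two requests arriving in the same second each run their own check-then-insert, and both checks pass before either insert commits, which is exactly how same-second duplicates appear. Making the IP column unique and ignoring conflicts closes that window; a sketch with sqlite3 (table and column names invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE blocked_ips (ip TEXT PRIMARY KEY, added TEXT)")

    def block_ip(ip, when):
        # INSERT OR IGNORE: the unique key absorbs concurrent duplicates,
        # so the check-then-insert race cannot create a second row.
        conn.execute(
            "INSERT OR IGNORE INTO blocked_ips VALUES (?, ?)", (ip, when)
        )

    block_ip("10.0.0.1", "2008-06-01 12:00:00")
    block_ip("10.0.0.1", "2008-06-01 12:00:00")  # same second, no duplicate
    count = conn.execute("SELECT COUNT(*) FROM blocked_ips").fetchone()[0]
    ```

    And yes: with a plain unique/primary key and no OR IGNORE clause, the database would throw an error on the second insert rather than silently allow it, which answers the primary-key question above.
    
    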
