Stop processing duplicate record values
Hi Experts,
I have a requirement that duplicate records from a file placed on the FTP server should not be processed.
The scenario is File -> PI -> RFC.
For ex:
Source Data:
Name,Emp_id,DOB,Designation,Location,Joining_Date,Time_Stamp
Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:3.38pm
Shiva,654612,21-02-1982,Developer,TN,15-08-2009,24-03-2014:3.38pm
Venkat,654655,19-01-1983,Developer,TN,28-10-2010,24-03-2014:3.38pm
Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm
If the same record comes again next time, like Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm, there is no need to process it.
How can I stop processing the duplicate records? Kindly share some ideas to achieve this requirement using PI 7.1.
Best Regards,
Monikandan.
Hi,
Here is one clean solution:
1) Using FCC, read the file record by record by setting the end separator to newline in the FCC parameters.
Reading a delimiter-separated file whose columns may jumble
2) Use two mappings in one operation mapping.
1st Mapping:
Source field --> sort (ascending) --> splitByValue (on value change) --> collapseContexts --> UDF -->
splitByValue (each value) --> map to all target fields
Keep the context of the source field at the top node so that all values arrive in a single array.
for (int i = 0; i < input.length; i++) {
    String temp[] = input[i].split(",");
    if (temp.length == 7) {
        Name.addValue(temp[0]);
        Emp_id.addValue(temp[1]);
        DOB.addValue(temp[2]);
        Designation.addValue(temp[3]);
        Location.addValue(temp[4]);
        Joining_Date.addValue(temp[5]);
        Time_Stamp.addValue(temp[6]);
    } else {
        throw new StreamTransformationException("field missing in record " + i); // up to you
    }
}
Mapping 2: map the actual file structure to the RFC.
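Outside of PI, the core of this design -- treat two records as duplicates when every field except Time_Stamp matches, and keep only the first occurrence -- can be sketched as follows (plain Python, field layout assumed from the sample file above):

```python
def dedupe(lines):
    """Keep the first occurrence of each record, comparing every
    column except the trailing Time_Stamp column."""
    seen = set()
    kept = []
    for line in lines:
        key = tuple(line.split(",")[:-1])  # drop Time_Stamp from the key
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept

records = [
    "Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:3.38pm",
    "Shiva,654612,21-02-1982,Developer,TN,15-08-2009,24-03-2014:3.38pm",
    "Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm",
]
print(len(dedupe(records)))  # 2 -- the second Moni row is dropped
```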
Regards
Venkat
Similar Messages
-
Inside a loop, how do we stop processing the current record but continue with the next one?
We want to know which statement/command can exit the current record's processing in a loop and start processing the next record.
We've tried EXIT, but it jumps out of the whole loop. We have tried RESUME, but this command doesn't exist in ABAP.
Any idea?
Thanks!
Hi,
Use CONTINUE.
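ABAP's CONTINUE behaves like the continue statement in most languages: it abandons the rest of the current loop pass and moves straight on to the next record. A Python sketch of the same control flow:

```python
processed = []
for record in ["ok-1", "skip-me", "ok-2"]:
    if record.startswith("skip"):
        continue  # give up on this record, jump to the next one
    processed.append(record)
print(processed)  # ['ok-1', 'ok-2']
```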
Thanks,
Sriram Ponna. -
Hi,
I am providing support to one of our clients, where we have jobs scheduled to load data from the tables in the source database to the destination database via SSIS packages. The first load is a full load, where we truncate all the tables in the destination
and load them from the source tables. From the next day onward, we perform an incremental load from source to destination, i.e., only modified records fetched using the change tracking concept are loaded to the destination. After the full load, when we run the incremental
load, the job fails with an error on one of the packages: "Violation of PRIMARY KEY constraint. Cannot insert duplicate key in object '<tablename>'. The duplicate key value is <1234>.", even though there are no duplicate records. When we
try debugging and running the failing package, it runs successfully. We are not able to figure out why the package fails, yet runs successfully the next day. Request you to help me in this regard.
Thank you,
Bala Murali Krishna Medipally.
Hi,
I suspect you are trying to insert modified records instead of updating. -
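In other words, an incremental load must update rows whose keys already exist rather than blindly inserting them. A minimal sketch of such an upsert, using Python's sqlite3 purely for illustration (table and column names are invented; on SQL Server you would typically use MERGE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dest (emp_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dest VALUES (1234, 'original')")

# The same key arrives again in the incremental load. A plain INSERT
# would fail with a duplicate-key error; the upsert updates instead.
conn.execute(
    "INSERT INTO dest (emp_id, name) VALUES (?, ?) "
    "ON CONFLICT(emp_id) DO UPDATE SET name = excluded.name",
    (1234, "modified"),
)
print(conn.execute("SELECT name FROM dest WHERE emp_id = 1234").fetchone())
```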
Hi team,
How do we resolve the below error?
Violation of PRIMARY KEY constraint 'PK_test'. Cannot insert duplicate key in object 'dbo.test'. The duplicate key value is (12610). (Source: MSSQLServer, Error number: 2627) ?
Thanks,
Ram
There can be two reasons:
1. The insert script is receiving multiple instances of records with key 12610 from the source query. If this is the issue, add logic to include only a unique set of key values by avoiding duplicates. There are several approaches
for this, like using ROW_NUMBER with PARTITION BY, or using a join with a derived table.
2. A record with key 12610 already exists in your destination table, and your script is trying to insert another instance of a record with the same key. This can be avoided by adding a NOT EXISTS condition with a subquery, which checks and inserts only
those records which don't already exist in the destination.
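Both fixes can be seen in one small script. This uses Python's sqlite3 for illustration (SQLite >= 3.25 for window functions); the same ROW_NUMBER()/NOT EXISTS pattern applies on SQL Server, with your own table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dest (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE staging (id INTEGER, name TEXT);
    INSERT INTO dest VALUES (12610, 'existing row');
    INSERT INTO staging VALUES (12610, 'incoming dup'), (12611, 'new row'),
                               (12611, 'new row again');
""")

# Reason 1 fix: ROW_NUMBER() keeps one row per id within the staging data.
# Reason 2 fix: NOT EXISTS skips ids already present in the destination.
conn.execute("""
    INSERT INTO dest (id, name)
    SELECT id, name FROM (
        SELECT id, name,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY name) AS rn
        FROM staging
    ) s
    WHERE s.rn = 1
      AND NOT EXISTS (SELECT 1 FROM dest d WHERE d.id = s.id)
""")
rows = conn.execute("SELECT id, name FROM dest ORDER BY id").fetchall()
print(rows)  # [(12610, 'existing row'), (12611, 'new row')]
```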
Visakh -
How to delete the duplicate records in a table without a primary key
I have a table that contains around 1 million records, and there is no primary key or autonumber column. I need to delete the duplicate records from this table. What is a simple, effective way to do this?
Please see this link:
Remove duplicate records ...
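If the table lives in SQLite, the hidden rowid pseudo-column can stand in for the missing key (SQL Server has no rowid; there, a ROW_NUMBER() CTE plays the same role). A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a TEXT, b TEXT);              -- no primary key
    INSERT INTO t VALUES ('x','1'), ('x','1'), ('y','2');
""")
# Every physical row still has a distinct rowid, so we can keep the
# lowest rowid of each duplicate group and delete the rest.
conn.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY a, b)
""")
print(conn.execute("SELECT * FROM t ORDER BY a").fetchall())
```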
sqldevelop.wordpress.com -
To find the duplicate record in internal table
Hi,
I have a requirement to find duplicate records based on 3 fields.
I am getting a flat file with 15 fields.
I need to check for duplicates on 3 of the fields. If a second record with the same 3 field values comes in, that record should go to another internal table.
for ex :
1. aaa bbb ccc ddd eee fff ggg hhh
2. aaa bbb ccf dde edd ffg ggh hhj
3. aaa bbb cce ddd ees ffh ggu hhk
here the 1st record and 3rd record are the same (aaa bbb ddd),
and I need to find the 3rd record.
please help me
regards,
srinivasu
Hi,
itab2[] = itab1[].
sort itab1 by f1 f2 f3.
sort itab2 by f1 f2 f3.
delete itab2 index 1. "to delete the first record in itab2.
loop at itab1 into ws_itab1.
loop at itab2 into ws_itab2.
if ws_itab1-f1 = ws_itab2-f1 and
ws_itab1-f2 = ws_itab2-f2 and
ws_itab1-f3 = ws_itab2-f3.
ws_itab3 = ws_itab2.
append ws_itab3 to itab3. "Third internal table.
endif.
endloop.
delete itab2 index 1.
endloop.
ITAB3 will have all the duplicate records.
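The nested-loop comparison above can also be expressed as a single pass with a lookup set. A Python sketch, assuming (as an illustration) that the three comparison fields are the first three columns:

```python
def find_duplicates(records, key=lambda r: (r[0], r[1], r[2])):
    """Return every record whose key fields already appeared earlier."""
    seen = set()
    dups = []
    for rec in records:
        k = key(rec)
        if k in seen:
            dups.append(rec)  # second (or later) occurrence of this key
        else:
            seen.add(k)
    return dups

rows = [
    ("aaa", "bbb", "ccc", "ddd"),
    ("aaa", "bbb", "ccf", "dde"),
    ("aaa", "bbb", "ccc", "ees"),
]
print(find_duplicates(rows))  # [('aaa', 'bbb', 'ccc', 'ees')]
```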
Regards,
Subramanian -
How to get the previous record value in the current record plz help me...
In my sql how to get the previous record value...
in table i m having the field called Date i want find the difference b/w 2nd record date value with first record date... plz any one help me to know this i m waiting for ur reply....
Thanx in Advance
with regards
kotresh
First of all, this is not the MySQL or database forum, so don't repeat the post here.
To get the difference between two dates in MySQL, use STR_TO_DATE() to convert them to dates if they are not of a date type,
then use DATEDIFF() (or plain subtraction) to get the difference. -
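Where window functions are available (MySQL 8+, or SQLite >= 3.25 as used below for illustration), LAG() makes "current row minus previous row" a one-query job. Table and column names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (id INTEGER PRIMARY KEY, visit_date TEXT);
    INSERT INTO visits VALUES (1, '2014-03-20'), (2, '2014-03-24');
""")
# LAG() pulls the previous row's value into the current row, so the
# difference between consecutive dates becomes a plain column expression.
rows = conn.execute("""
    SELECT id,
           julianday(visit_date)
             - julianday(LAG(visit_date) OVER (ORDER BY id)) AS diff_days
    FROM visits
""").fetchall()
print(rows)  # [(1, None), (2, 4.0)]
```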
I have migrated my SCCm 2007 environment to SCCM 2012 SP1 CU4.
I noticed in the System Status\Component Status\SMS_STATE_SYSTEM a lot of errors like the one below:
Microsoft SQL Server reported SQL message 2627, severity 14: [23000][2627][Microsoft][SQL Server Native Client 11.0][SQL Server]Violation of PRIMARY KEY constraint 'DeploymentSummary_PK'. Cannot insert duplicate key in object 'dbo.DeploymentSummary'. The duplicate key value is (1, 0, S0220438, 0). : spUpdateClassi
Please refer to your Configuration Manager documentation, SQL Server documentation, or the Microsoft Knowledge Base for further troubleshooting information.
When I look up the deployment ID and recreate the deployment, the problem is solved. But I have 700 packages and don't want to do this manually for all of them. I think it is related to the migration I did, and something went wrong there :-(
Besides it will retriggers the deployment to the clients which is also not preferred.
Is there another way to solve this, e.g. by doing something directly in the SQL database tables?
Hi,
Making changes directly in the SQL database is not supported by Microsoft.
If you want to do that, you could make a call to CSS.
Best Regards,
Joyce
-
Start routine to filter the duplicate records
Dear Experts
I have two questions regarding the start routine.
1) I have a characteristic InfoObject with a transactional InfoSource. The 'duplicate records' error often happens during data loading. I'm trying to put a start routine in the update rule to filter out the duplicate records.
After searching the SDN forum and SAPHelp, I use the code as:
DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE COMPARING KEY1 KEY2 KEY3.
In my case, the InfoObject has 3 keys: SOURSYSTEM, /BIC/InfoObjectname, OBJVERS. My code is:
DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE COMPARING SOURSYSTEM /BIC/InfoObjectname OBJVERS.
When checking the code I got the message 'E: No component exists with the name "OBJVERS".', so I included only the first 2 keys. But the routine does not work; the duplicate error still happens. What is missing in this start routine?
2) Generally, for a start routine, do I really need to include the data declarations, ITAB or WA, SELECT statement, etc.?
Do I have to use the statement below, or just a single line?
LOOP AT DATA_PACKAGE.
IF DATA_PACKAGE.....
ENDIF.
ENDLOOP.
Thanks for your help in advance, Jessica
Hello Jessica,
if it is not possible for you to get unique data from the very beginning, there is still another way to manage this problem in a start routine.
The SORT ... and DELETE ADJACENT ... must remain. In addition, build up an internal table of the type of DATA_PACKAGE, but defined with STATICS instead of DATA. This internal table stays alive across all data packages of one load. Fill it with the data of the transferred data packages, and delete from every new data package all records which are already in the STATICS internal table. Alternatively, you could do the same with a Z- (or Y-) database table instead of the STATICS internal table.
It will probably cost some performance, but better slow than wrong data.
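The STATICS idea -- state that survives from one data package to the next within a single load -- can be sketched in Python with a closure (a hypothetical stand-in for the start-routine mechanics, not actual BW code):

```python
def make_package_filter():
    seen = set()  # persists across calls, like a STATICS internal table

    def filter_package(package):
        """Drop records already delivered by an earlier data package."""
        fresh = []
        for rec in package:
            if rec not in seen:
                seen.add(rec)
                fresh.append(rec)
        return fresh

    return filter_package

f = make_package_filter()
print(f(["A", "B"]))  # ['A', 'B']
print(f(["B", "C"]))  # ['C'] -- 'B' already arrived in the first package
```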
Regards,
Ernst -
*** [23000][2627][Microsoft][SQL Server Native Client 11.0][SQL Server]Violation of UNIQUE KEY constraint 'ClientPushMachine_G_AK'. Cannot insert duplicate key in object 'dbo.ClientPushMachine_G'. The duplicate key value is (16777412). : sp_CP_CheckNewAssignedMachine
CCCRT::RunSQLStoredProc - Failed to execute SQL cmd exec [sp_CP_CheckNewAssignedMachine] N'xxx', 1
CCRQueueRequest::GetRequestFromQueue - Failed to execute SQL cmd sp_CP_CheckNewAssignedMachine
I get the above issue and the one below at a client site; the error started with the one below, then changed to the one reported above, and back again. Everything is working as it should, but the issues
started when one of the admins at the data centre incorrectly applied a GPO which affected a number of service accounts (SCCM's included) and they expired... hence reporting in SCCM broke, and this error appeared in the ccm.log file.
Remote client install still works, but I believe this error affects new clients discovered by SCCM; in other words, devices discovered by SCCM do not get the client installed automatically. If all access and permissions are in place, pushing out the
client to a newly discovered system works, it is just not done automatically, which kind of defeats one of the reasons for using SCCM.
I have searched the breadth of the internet and can find only two TechNet references to the same error. One says to edit the stored procedure on the SQL server, which I don't think should be done... like Jason said, and I concur, it's bad joo joos.
The second suggestion said to select all the options in the Client Push Installation properties; I have tried this, but it hasn't solved the problem.
I am planning to upgrade the site to R2 CU3 before the end of the year, but I would like to resolve this error before the upgrade.
The site is currently on SCCM 2012 SP1.
Any ideas? A resolution, s'il vous plaît!
Merci
Hi,
Please back up the database of the SCCM site. Then run the following query against the Site DB and see how it goes.
DELETE FROM System_SMS_Resident_ARR
WHERE ItemKey IN (
    SELECT ItemKey FROM vSystem_SMS_Resident_ARR
    GROUP BY ItemKey
    HAVING COUNT(ItemKey) > 1
)
-
Deleting the duplicate records
Hi all,
I have some duplicate records in the PSA, and I want to delete them before the data enters the data target. Can anybody tell me how to achieve this?
Actually, I need to achieve this using ABAP code.
Thanx & Regards,
RaviChandra
Edited by: Ravichandra.bi on Mar 5, 2012 3:37 PM
Hi Ravi,
If it's a full load to the PSA, and because of it you are getting duplicate records that you want to eliminate,
you can do what Anshul and Durgesh suggested.
Write the code in the start routine of the transformation to your target:
SORT SOURCE_PACKAGE BY KEY1 KEY2 KEY3. "(The key will be the key of your target)
DELETE ADJACENT DUPLICATES FROM SOURCE_PACKAGE
COMPARING KEY1 KEY2 KEY3.
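What SORT followed by DELETE ADJACENT DUPLICATES COMPARING key1 key2 does can be sketched in Python (the key fields below are placeholders for your target's key):

```python
def sort_and_dedupe(package, key):
    """Sort by key, then drop records whose key equals the previous
    record's key -- the effect of SORT + DELETE ADJACENT DUPLICATES."""
    kept = []
    for rec in sorted(package, key=key):
        if not kept or key(kept[-1]) != key(rec):
            kept.append(rec)
    return kept

pkg = [("K1", "A", 10), ("K2", "B", 20), ("K1", "A", 30)]
print(sort_and_dedupe(pkg, key=lambda r: (r[0], r[1])))
# [('K1', 'A', 10), ('K2', 'B', 20)]
```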
Hope this helps
Regards,
Joe -
I am using SQL Server 2008 R1 SP3, and when we run backup operations we face the below error:
Msg 2601, Level 14, State 1, Procedure sp_flush_commit_table, Line 15
Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'. The
duplicate key value is (2238926153).
The statement has been terminated.
Please assist me with your inputs.
Thanks,
Rakesh.
Hello,
Did you enable change tracking on the database? If so, please try disabling and re-enabling change tracking.
The following thread is about the similar issue, please refer to:
http://social.msdn.microsoft.com/forums/sqlserver/en-US/c2294c73-4fdf-46e9-be97-8fade702e331/backup-fails-after-installing-sql2012-sp1-cu1-build-3321
Regards,
Fanny Liu
TechNet Community Support -
How to remove the duplicate record in DART Extract
Hi Guys,
We are getting a duplicate record when we validate the DART extract file through data views for FI General Ledger Account Balances. If anyone has experience with this, please help us.
Following are the steps we done to Validate the DART EXTRACT File for FI General Ledger Account Balances.
1. We ran the DART extract program to extract the data from the tables to a directory file, period-wise, in transaction FTW1A.
2. When we validate the data from the DART extract file through a data view for FI General Ledger Account Balances in transaction FTWH, we get a duplicate record.
We are unable to find out where the duplicate records are coming from. It would be great if anyone could help us immediately.
Thanks & Regards,
Boobalan
If the dup records are actually in the DART view rather than the DART extract, you could try OSS Note 1139619, "DART: Eliminate duplicate records from DART view".
Additional Note - 1332571 FTWH/FTWY - Performance for "Eliminate duplicate records"
Colleen
Edited by: Colleen Geraghty on May 28, 2009 6:07 PM -
How to retrieve the duplicate records
Hi friends,
My next issue: how to retrieve duplicate records in Web Intelligence. I checked the "Retrieve duplicate rows" option in Web Intelligence as well, but it is not helping me.
Hope you guys are help to solve this issue.
Thanks lot,
Regards,
-B-
Hi Blaji,
I've tried this here and it worked perfectly for me.
Even so, you don't need to make QTY a dimension; you can leave it as a measure and it will still work for you.
Click the block itself on the WebI report
and find its properties; under "Display" you will find
"Avoid duplicate row aggregation".
In the query itself you should flag "Retrieve duplicate rows".
I think this will work fine for you.
good luck
Amr -
Help in eliminating the duplicate records
Hi all,
Can anyone help me with this? I am going to use this SQL in a FOR LOOP cursor.
This SQL is currently returning multiple records for each participant because of c.cmpnt_beg_dte:
there are multiple begin dates for each participant, but I want only a single record for each
participant, with the oldest cmpnt_beg_dte.
SELECT
distinct a.partc_ssn, a.partc_id, b.LOC_ID, e.loc_desc, c.CMPNT_BEG_DTE
FROM etr_rgt_partc a,
etr_fst_partc_rgst b,
etr_fst_partc_cmpnt c,
etr_smt_fst_cmpnt_mstr d,
etr_smt_agy_loc_mstr e
WHERE a.partc_id = b.partc_id
AND b.partc_id = c.partc_id
AND b.fs_rgst_id = c.fs_rgst_id
AND c.cmpnt_nbr = d.cmpnt_nbr
AND b.loc_id = e.loc_id
AND B.LOC_ID = 'ES00205'
AND a.partc_id = 2402514
AND (c.cmpnt_beg_dte BETWEEN TO_DATE ('10/01/2007', 'MM/DD/YYYY')
AND TO_DATE ('09/30/2008', 'MM/DD/YYYY'))
This query is giving me results like the below:
521140046,2402514,ES00205,205- TN Career Center - Savannah,11/8/2007
521140046,2402514,ES00205,205- TN Career Center - Savannah,2/1/2008
but I want only the 1st record. I executed this query manually by passing PARTC_ID, but
I will run it inside my procedure. If I limit it using ORDER BY and ROWNUM = 1,
it gives only 1 row for all participants.
Anyone, please help me with this issue.
In short: I want to eliminate the duplicates, but based on a single column's value.
Thanks in advance,
Data Boy
If SSN/partc_id and loc_id are always the same for each record (and it seems like they should be), then:
SELECT
a.partc_ssn, a.partc_id, b.LOC_ID, e.loc_desc, MIN(c.CMPNT_BEG_DTE)
FROM etr_rgt_partc a,
etr_fst_partc_rgst b,
etr_fst_partc_cmpnt c,
etr_smt_fst_cmpnt_mstr d,
etr_smt_agy_loc_mstr e
WHERE a.partc_id = b.partc_id
AND b.partc_id = c.partc_id
AND b.fs_rgst_id = c.fs_rgst_id
AND c.cmpnt_nbr = d.cmpnt_nbr
AND b.loc_id = e.loc_id
AND B.LOC_ID = 'ES00205'
AND a.partc_id = 2402514
AND (c.cmpnt_beg_dte BETWEEN TO_DATE ('10/01/2007', 'MM/DD/YYYY')
AND TO_DATE ('09/30/2008', 'MM/DD/YYYY'))
GROUP BY a.partc_ssn, a.partc_id, b.LOC_ID, e.loc_desc
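The aggregation trick above, shown on a toy table via Python's sqlite3 (illustrative names only; in the original Oracle query the GROUP BY must cover every non-aggregated column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE partc (ssn TEXT, loc_id TEXT, beg_dte TEXT);
    INSERT INTO partc VALUES
        ('521140046', 'ES00205', '2007-11-08'),
        ('521140046', 'ES00205', '2008-02-01');
""")
# MIN() + GROUP BY collapses the per-begin-date duplicates into the
# single oldest row per participant/location.
rows = conn.execute("""
    SELECT ssn, loc_id, MIN(beg_dte) FROM partc GROUP BY ssn, loc_id
""").fetchall()
print(rows)  # [('521140046', 'ES00205', '2007-11-08')]
```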