Duplicate record issue in IP file
Hello All,
I have a requirement to handle duplicate records in an IP flat file.
My interface is very simple: it just loads data using the IP interface. There is no IP input query, just a flat-file load.
My requirement is to apply a validation check: if the file has two similar records, the whole file should be rejected.
ex
Field 1 Field2 Amount
XXX ABC 100
XXX ABC 100
The file should be rejected. As per the standard functionality, the data is summed up in the cube.
Is there any way to handle that?
Thanks
Samit
I don't think you can do it. This is standard behavior. Maybe you can write your own class to check for duplicate records, use that class in a custom planning function, and throw an error message.
Best is to make sure the end users take responsibility for the data.
Arun
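Arun's idea of a custom duplicate check can be sketched roughly as follows. This is only an illustration: the table and field names (it_file_data, field1, field2, amount) and the message class are made up, not from the original post.

```
* Hypothetical validation: reject the whole upload if any two records
* share the same values (table/field names are placeholders).
DATA lt_check LIKE it_file_data.
lt_check[] = it_file_data[].
SORT lt_check BY field1 field2 amount.
DELETE ADJACENT DUPLICATES FROM lt_check
       COMPARING field1 field2 amount.
IF lines( lt_check ) <> lines( it_file_data ).
  " At least one duplicate row exists - reject the entire file
  MESSAGE e001(zpln) WITH 'Duplicate records found - file rejected'.
ENDIF.
```

If the counts differ after deduplication, at least one duplicate existed, and the function can abort before anything is posted to the cube.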
Similar Messages
-
Why do we have a duplicate record in the .wsr file of the mail.dat?
Any idea how we could have created a duplicate record in the .wsr (Walk Sequence) file? We have post-presort software that errors out because of the dupe. The Mail.dat specification indicates that there are five 'key' fields (their combination must be unique) for the Walk Sequence file:
Hi Michael
Can you please tell me which field is being duplicated? Also, please try to run the job again and wait a couple of seconds before importing it into your post-presort software.
Thanks
Anita. -
How can I duplicate records from a flat file to a cube
Hi all,
I have the following problem: I am loading a flat file, for example:
Material KF1 KF2
A 10 12
B 25 30
(blank) 01 02
I need to duplicate the records for materials A and B, but with the key-figure values of the "not assigned" (blank) material row. In other words, the result in the cube would look like this:
Material KF1 KF2
A 10 12
B 25 30
(blank) 01 02
A 01 02
B 01 02
How can I do this? I know this must be a routine, but where: start routine? end routine? expert routine? And what would the code look like? Help, guys, and thanks!
You can use an end routine to accomplish that.
Hope the code below helps:
DATA : IT_RESULT_PACKAGE LIKE RESULT_PACKAGE[].
DATA : WA_RESULT_PACKAGE LIKE LINE OF RESULT_PACKAGE.
FIELD-SYMBOLS : <RESULT_FIELDS> LIKE LINE OF RESULT_PACKAGE.
* Sort so the row with the initial (blank) material comes first
SORT RESULT_PACKAGE BY MATERIAL.
IT_RESULT_PACKAGE[] = RESULT_PACKAGE[].
LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>
     WHERE MATERIAL IS NOT INITIAL.
* Read the 'not assigned' row and copy it for this material
  READ TABLE RESULT_PACKAGE INTO WA_RESULT_PACKAGE
       WITH KEY MATERIAL = space BINARY SEARCH.
  IF SY-SUBRC = 0.
    WA_RESULT_PACKAGE-MATERIAL = <RESULT_FIELDS>-MATERIAL.
    APPEND WA_RESULT_PACKAGE TO IT_RESULT_PACKAGE.
    CLEAR WA_RESULT_PACKAGE.
  ENDIF.
ENDLOOP.
CLEAR RESULT_PACKAGE[].
RESULT_PACKAGE[] = IT_RESULT_PACKAGE[].
This code is suitable only if the material has a single entry with an initial value;
otherwise you would need a LOOP statement instead of the READ statement used here.
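The LOOP-based variant mentioned above could look roughly like this; it is an untested sketch reusing the same names as the posted end routine:

```
LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>
     WHERE MATERIAL IS NOT INITIAL.
  " Copy every 'not assigned' row once per real material
  LOOP AT RESULT_PACKAGE INTO WA_RESULT_PACKAGE
       WHERE MATERIAL IS INITIAL.
    WA_RESULT_PACKAGE-MATERIAL = <RESULT_FIELDS>-MATERIAL.
    APPEND WA_RESULT_PACKAGE TO IT_RESULT_PACKAGE.
  ENDLOOP.
ENDLOOP.
```

This handles multiple initial-material rows at the cost of a nested loop.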
rgds, Ghuru -
Delete duplicate records issue
Dear experts,
I did a development one month back for sending mail.
In it I fetch the email ID of F_cord from one table (zcoordinator) based on some condition. Now they want changes in that object.
Two new fields have been added to that table, ven_cord and HOD, both holding mail IDs.
As per their requirement, I need to fetch the ven_cord and HOD mail IDs from that same table for the previously fetched F_cord.
And I need to delete duplicate ven_cord entries from that table.
So should I create another structure for this, or is there any other way to meet the requirement?
Please suggest.
Kin Regards,
Ajit@sanjeev,
I don't have to create a separate structure to fetch these two fields?
As per you, I need to add these two new fields to the previous structure which I created for F_CORD, right?
And during the fetch statement I need to add these two new fields, right?
Then how do I delete the duplicate ven_cord entries for a particular F_CORD?
Please suggest.
I can't paste the code here.
Regards,
Ajit -
J1INQEFILE - efile generation - Exported file shows Duplicate records.
Dear Team,
When I execute J1INQEFILE, I am facing a problem with the e-file generation, i.e. the exported Excel file. When I execute the report and export the file to the desktop as Excel, I can see duplicate records.
For example, on execution of J1INQEFILE I can see 4 records on the SAP screen, whereas the exported file on the desktop shows me 2 more identical records, i.e. 6 records. As a result, in the SAP system I see a Base Amount of 40000, i.e. 10000 for each record; the Excel sheet, on the contrary, shows me 60000, i.e. 6 records of 10000 each (because of the 2 duplicated records), and the TDS amount is also wrong. How are the records getting duplicated? Is there any SAP note to fix this? We are debugging this but have no clue so far.
Please assist on this issue....
Thanks in advance!
Dear Sagar,
I am an ABAPer.
I came across the same kind of situation for one of our clients: when we execute J1INQEFILE and export the result to an Excel file, we used to get duplicate records.
I debugged the program and checked the point of e-file generation; duplicate records were getting appended to the internal table that is downloaded to Excel. So I pulled the document number into the internal table and used DELETE ADJACENT DUPLICATES comparing all fields, and was thereby able to resolve the issue.
Hope the same logic helps or guide you to proceed with the help of an abaper.
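Kalyan's fix can be sketched roughly as follows; the internal table name lt_efile is a placeholder for whatever table the report actually downloads (that has to be found in the debugger):

```
* lt_efile stands for the internal table J1INQEFILE downloads to Excel
SORT lt_efile.                          " make duplicate rows adjacent
DELETE ADJACENT DUPLICATES FROM lt_efile
       COMPARING ALL FIELDS.            " drop exact duplicates
```

The SORT matters: DELETE ADJACENT DUPLICATES only removes rows that sit next to each other.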
<<Text removed>>
Regards,
Kalyan
Edited by: Matt on Sep 8, 2011 9:14 PM -
Duplicate records in flat file extracted using openhub
Hi folks
I am extracting data from the cube via Open Hub into a flat file, and I see duplicate records in the file.
I am doing a full load to a flat file.
I cannot have a technical key because I am using a flat file.
Poonam
I am using aggregates (in the DTP there is an option to use aggregates), the aggregates are compressed, and I am still facing this issue.
Poonam -
How to find out duplicate record contained in a flat file
Hi Experts,
For my project I have written a program for flat file upload.
Requirement 1
In the flat file there may be some duplicate records like:
Field1 Field2
11 test1
11 test2
12 test3
13 test4
Field1 is primary key.
Can you please let me know how I can find out the duplicate records?
Requirement 2
The flat file contains a header row, as shown above:
Field1 Field2
How can our program skip this record and start reading / inserting records from row no. 2, i.e.
11 test1
onwards?
Thanks
S
FORM upload1.
DATA : wf_title TYPE string,
lt_filetab TYPE filetable,
l_separator TYPE char01,
l_action TYPE i,
l_count TYPE i,
ls_filetab TYPE file_table,
wf_delemt TYPE rollname,
wa_fieldcat TYPE lvc_s_fcat,
tb_fieldcat TYPE lvc_t_fcat,
rows_read TYPE i,
p_error TYPE char01,
l_file TYPE string.
DATA: wf_object(30) TYPE c,
wf_tablnm TYPE rsdchkview.
wf_object = 'myprogram'.
DATA i TYPE i.
DATA:
lr_mdmt TYPE REF TO cl_rsdmd_mdmt,
lr_mdmtr TYPE REF TO cl_rsdmd_mdmtr,
lt_idocstate TYPE rsarr_t_idocstate,
lv_subrc TYPE sysubrc.
TYPES : BEGIN OF test_struc,
/bic/myprogram TYPE /bic/oimyprogram,
txtmd TYPE rstxtmd,
END OF test_struc.
DATA : tb_assum TYPE TABLE OF /bic/pmyprogram.
DATA: wa_ztext TYPE /bic/tmyprogram,
myprogram_temp TYPE ziott_assum,
wa_myprogram TYPE /bic/pmyprogram.
DATA : test_upload TYPE STANDARD TABLE OF test_struc,
wa2 TYPE test_struc.
DATA : wa_test_upload TYPE test_struc,
ztable_data TYPE TABLE OF /bic/pmyprogram,
ztable_text TYPE TABLE OF /bic/tmyprogram,
wa_upld_text TYPE /bic/tmyprogram,
wa_upld_data TYPE /bic/pmyprogram,
t_assum TYPE ziott_assum.
DATA : wa1 LIKE test_upload.
wf_title = text-026.
CALL METHOD cl_gui_frontend_services=>file_open_dialog
EXPORTING
window_title = wf_title
default_extension = 'txt'
file_filter = 'Tab delimited Text Files (*.txt)'
CHANGING
file_table = lt_filetab
rc = l_count
user_action = l_action
EXCEPTIONS
file_open_dialog_failed = 1
cntl_error = 2
OTHERS = 3. "#EC NOTEXT
IF sy-subrc <> 0.
EXIT.
ENDIF.
LOOP AT lt_filetab INTO ls_filetab.
l_file = ls_filetab.
ENDLOOP.
CHECK l_action = 0.
IF l_file IS INITIAL.
EXIT.
ENDIF.
l_separator = 'X'.
wa_fieldcat-fieldname = 'test'.
wa_fieldcat-dd_roll = wf_delemt.
APPEND wa_fieldcat TO tb_fieldcat.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
CLEAR wa_test_upload.
* Upload file from front-end (PC)
* File format is tab-delimited ASCII
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_file
has_field_separator = l_separator
TABLES
data_tab = test_upload
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
EXIT.
ELSE.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
IF test_upload IS NOT INITIAL.
DESCRIBE TABLE test_upload LINES rows_read.
CLEAR : wa_test_upload,wa_upld_data.
LOOP AT test_upload INTO wa_test_upload.
CLEAR : p_error.
rows_read = sy-tabix.
IF wa_test_upload-/bic/myprogram IS INITIAL.
p_error = 'X'.
MESSAGE s153 WITH wa_test_upload-/bic/myprogram sy-tabix.
CONTINUE.
ELSE.
TRANSLATE wa_test_upload-/bic/myprogram TO UPPER CASE.
wa_upld_text-txtmd = wa_test_upload-txtmd.
wa_upld_text-txtsh = wa_test_upload-txtmd.
wa_upld_text-langu = sy-langu.
wa_upld_data-chrt_accts = 'xyz1'.
wa_upld_data-co_area = '12'.
wa_upld_data-/bic/zxyzbcsg = 'Iy'.
wa_upld_data-objvers = 'A'.
wa_upld_data-changed = 'I'.
wa_upld_data-/bic/zass_mdl = 'rrr'.
wa_upld_data-/bic/zass_typ = 'I'.
wa_upld_data-/bic/zdriver = 'yyy'.
wa_upld_text-langu = sy-langu.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_data.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_text.
APPEND wa_upld_data TO ztable_data.
APPEND wa_upld_text TO ztable_text.
ENDIF.
ENDLOOP.
SORT ztable_data.
SORT ztable_text.
DELETE ADJACENT DUPLICATES FROM ztable_data.
DELETE ADJACENT DUPLICATES FROM ztable_text.
IF ztable_data IS NOT INITIAL.
CALL METHOD cl_rsdmd_mdmt=>factory
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_r_mdmt = lr_mdmt
EXCEPTIONS
invalid_iobjnm = 1
OTHERS = 2.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
**Lock the Infoobject to update
CALL FUNCTION 'RSDG_IOBJ_ENQUEUE'
EXPORTING
i_objnm = wf_object
i_scope = '1'
i_msgty = rs_c_error
EXCEPTIONS
foreign_lock = 1
sys_failure = 2.
IF sy-subrc = 1.
MESSAGE i107(zddd_rr) WITH wf_object sy-msgv2.
EXIT.
ELSEIF sy-subrc = 2.
MESSAGE i108(zddd_rr) WITH wf_object.
EXIT.
ENDIF.
*****Update Master Table
IF ztable_data IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'M'
*     i_t_attr = lt_attr   " lt_attr is not declared in this form
TABLES
i_t_table = ztable_data
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '054'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
MESSAGE e054(zddd_rr) WITH 'myprogram'.
ELSE.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'S'
txtnr = '053'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
*endif.
*****update Text Table
IF ztable_text IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'T'
TABLES
i_t_table = ztable_text
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '055'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
ENDIF.
ELSE.
MESSAGE s178(zddd_rr).
ENDIF.
ENDIF.
COMMIT WORK.
CALL FUNCTION 'RSD_CHKTAB_GET_FOR_CHA_BAS'
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_chktab = wf_tablnm
EXCEPTIONS
name_error = 1.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
****Release locks on Infoobject
CALL FUNCTION 'RSDG_IOBJ_DEQUEUE'
EXPORTING
i_objnm = 'myprogram'
i_scope = '1'.
ENDIF.
ENDIF.
PERFORM data_selection .
PERFORM update_alv_grid_display.
CALL FUNCTION 'MESSAGES_SHOW'.
ENDFORM.
Can you please let me know how I can find out the duplicate records?
You need to split each record from the flat-file structure into your internal table and then use DELETE ADJACENT DUPLICATES comparing fields:
SPLIT flat_str AT tab_space INTO wa_f1 wa_f2 wa_f3. -
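Putting both requirements together, a rough sketch in the spirit of the question; test_upload comes from the posted form, while field1 and the lv_* names are illustrative placeholders:

```
DATA: lt_sorted LIKE test_upload,
      wa_row    LIKE LINE OF test_upload,
      lv_count  TYPE i.
* Requirement 2: drop the header row before processing
DELETE test_upload INDEX 1.
* Requirement 1: report every Field1 (primary key) that repeats
lt_sorted[] = test_upload[].
SORT lt_sorted BY field1.
LOOP AT lt_sorted INTO wa_row.
  AT NEW field1.
    CLEAR lv_count.
  ENDAT.
  lv_count = lv_count + 1.
  AT END OF field1.
    IF lv_count > 1.
      WRITE: / 'Duplicate key:', wa_row-field1.
    ENDIF.
  ENDAT.
ENDLOOP.
```

The control-break statements (AT NEW / AT END OF) count rows per key, so keys that occur more than once are reported without a nested loop.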
Script to merge multiple CSV files together with no duplicate records.
I like a Script to merge multiple CSV files together with no duplicate records.
None of the files have headers, and column A holds a unique ID. What would be the best way to accomplish that?
OK, here is my answer:
2 files in a directory with no headers.
The first column is the unique ID; the second column holds whatever you want.
The headers are added when using the Import-Csv cmdlet.
first file contains :
1;a
2;SAMEID-FIRSTFILE
3;c
4;d
5;e
second file contains :
6;a
2;SAMEID-SECONDFILE
7;c
8;d
9;e
the second file contains the line 2;SAMEID-SECONDFILE, which has the same ID as a line in the first file
the code:
$i = 0
foreach($file in (Get-ChildItem d:\yourpath)){
    if($i -eq 0){
        # the first file becomes the reference set
        $ref = @(Import-Csv $file.FullName -Header id,value -Delimiter ";")
    }else{
        $temp = Import-Csv $file.FullName -Header id,value -Delimiter ";"
        foreach($line in $temp){
            # keep only lines whose id is not already in the reference set
            if($ref.id -notcontains $line.id){
                $objet = New-Object PSObject
                Add-Member -InputObject $objet -MemberType NoteProperty -Name id -Value $line.id
                Add-Member -InputObject $objet -MemberType NoteProperty -Name value -Value $line.value
                $ref += $objet
            }
        }
    }
    $i++
}
$ref
$ref should return:
id value
-- -----
1  a
2  SAMEID-FIRSTFILE
3  c
4  d
5  e
6  a
7  c
8  d
9  e
(Get-ChildItem d:\yourpath) -> yourpath is the directory containing the 2 CSV files -
Hi All,
I have a situation. I am using BPC NW 7.0 and I have updated my dimension files. When I try to validate my transaction file, every single record is validated successfully. But when I try to import the flat file into my application, I get a lot of duplicate-record errors, and these are my questions:
1. Will we get duplicate records in transaction files?
2. Even if there are duplicates, since it is a cube, shouldn't it summarize them instead of flagging an error and rejecting the records?
3. Is there something I can do to accept duplicates? (I have checked the Replace option in the data package to overwrite similar records, but it applies only to account, category, and entity.)
4. In my case I see identical values in all my dimensions, and the $value is the only difference. Why is it not summing up?
Your quickest reply is much appreciated.
Thanks,
Alex.
Hi,
I have the same problem.
In my case the file that I want to upload has different rows that differ only in the nature column. In the conversion file I map the different natures to one internal nature, e.g.:
cost1 --> cost
cost2 --> cost
cost3 --> cost
My hope was that in BPC the nature cost would take the result cost = cost1 + cost2 + cost3.
The result is that only the first record is uploaded and all the other records are rejected as duplicates.
Any suggestion? -
How to avoid duplicate record in a file to file
Hi Guys,
Could you please provide a solution
to avoid duplicate entries in a flat file based on a key field?
I am asking in terms of standard functions,
either at message-mapping level or by configuring the file adapter.
warm regards
mahesh.
Hi mahesh,
Write a module processor to check for duplicate records in the file adapter,
or
with a Java/ABAP mapping you can eliminate the duplicate records.
Also check these links:
Re: How to Handle this "Duplicate Records"
Duplicate records
Ignoring Duplicate Records--urgent
Re: Duplicate records frequently occurred
Re: Reg ODS JUNK DATA
http://help.sap.com/saphelp_nw2004s/helpdata/en/d0/538f3b294a7f2de10000000a11402f/frameset.htm
regards
srinivas -
How to suppress duplicate records in rtf templates
Hi All,
I am facing an issue with payment reason comments in a check template.
We display payment reason comments. The issue is that while making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail-number text field in the template.
Can you provide any XML syntax to suppress duplicate records, so that only distinct payment reason comments are shown?
Attached screen shot, template and xml file for your reference.
Thanks,
Sagar.
I have CR XI, so the instructions are for this release.
You can create a formula; I called it Cust_Matches (the field name below is a placeholder for your own database field):
if {Customer.CustID} = previous({Customer.CustID}) then 'true' else 'false'
In your GH2 section, right-click the field, select Format Field, and select the Common tab (far left at the top).
Click the x-2 button to the right of Suppress, and in the formula editor type:
{@Cust_Matches} = 'true'
Now every time {@Cust_Matches} is true, the CustID will be suppressed.
Do the same with the other fields you wish to hide, i.e. Address, City, etc.
Duplicate records in Fact Tables
Hi,
We are using BPC 7.0 MS SP7. BPC created duplicate records in WB and Fac2 tables. We faced similar issue before and the solution was to reboot the server and cleanup the additional data created. I think it should not be an issue with the script logic files we have. We had the issue across all applications. Data is fine now after the server reboot and running the same logic files. I want to know if any one faced this issue and if there is any solution other than reboot. I appreciate your help.
Thanks
RajHi Sorin,
I know this thread is rather old, but I have a problem which is closely related to it and would appreciate your assistance. I have a client running on 7.0 MS who has been using it for the past 3 years.
It is a heavily customized system with many batch files running daily to update dimensions, copy data, and sort. And yes, we do use custom packages that incorporate stored procedures.
Recently, with no change in the environment, our FactWB table ballooned up out of nowhere. The fact table contains less than 1 GB of data, but FactWB has 200 GB and has practically paralyzed the system. There is also an equivalent 300 GB increase in the log files.
We are not able to find out what caused this, or whether the 200 GB of records in WB are even valid records that were duplicated. Is there a way to troubleshoot this?
Duplicate records in Cube Level ( BI 7.0 )
Dear All
I am working on BI 7.0 and I have an issue: I am loading data from a flat file to an ODS and from the ODS to a cube. In the ODS we selected the Overwrite option; at cube level we have the Summation option. The problem is that loading from the flat file to the ODS is fine, and the load from ODS to cube also runs fine, but in the cube I am getting duplicate records.
what are the best options to go ahead in such a situations??
Regards
KK
I am sharing a case that occurred for me. Please see if it is applicable to you.
Sometimes, in the step loading to the cube, when any type of problem occurs, we restart the load. If the cube load prompts 'the last load was unsuccessful.... reload?', this problem may occur: it will load the records from the previous load as well.
Verify what has been duplicated from the ODS change-log table and the cube load record count. Check whether the number of records updated equals the total of the records in the different ODSR requests (in the change-log table). If so, delete the previous load in the cube (provided no other side effect is produced, e.g. from a start routine).
Cheers. -
Calendar and Adressbook error: Duplicate records found for GUID
Hi all,
I have a Mountain Lion Server running on a Mac mini and everything was working well.
This morning one of my users was unable to connect to his calendar and address book. I found this error in the log files:
2013-06-30 15:19:50+0200 [-] [caldav-1] [-] [twistedcaldav.directory.appleopendirectory.OpenDirectoryService#error] Duplicate records found for GUID ****USER_GUID****:
2013-06-30 15:19:50+0200 [-] [caldav-1] [-] [twistedcaldav.directory.appleopendirectory.OpenDirectoryService#error] Duplicate: ***USER_Shortname***
Apparently there is a duplicate match in the database. How can I fix this issue?
In Server App this user is only listed once.
Mail and other services for this user are working correctly.
Thanks for any advice!
Hi Samuel,
You may try:
select code, count(code)
from [dbo].[@XTSD_XA_CMD]
group by code having count(code) > 1
What is the result?
Thanks,
Gordon -
Duplicate Records in InfoProvider
Hi,
I am loading the Transaction Data from the flat files in to the Data Sources.
Initially I have one request (data from one flat file) loaded into the PSA and the InfoCube; say it has 100 records.
Later, I loaded another flat file into the PSA with 50 records (without deleting the initial request). Now the PSA has 150 records.
But I would like to load only the 50 new records into the InfoCube. When I execute the DTP, it loads all 150 records, so the cube ends up with the 100 initial records + 150 = 250 records.
Is there any option by which I can avoid loading the duplicate records into my InfoCube?
I can find an option that says "Get Data by Request" in the DTP. I tried checking that, but no luck.
How can I solve this issue, and what exactly does the "Get Data by Request" check do?
Thanks,
Hi Sesh,
There is an option in the DTP where you can load only the new records. I think you have to select the "Do not allow duplicate records" radio button (I guess)... then try to load the data. I am not sure, but you can research that option in the DTP.
Regards,
Kishore