Array permitting duplicate record
Hi,
I have a string array like the one below:
String strArr[] = {"style1","style2","style2","style4"};
In this string array, index 1 and index 2 hold the same value, which is "style2".
My goal is to generate a key (from the system time) for each value, as below:
"key1" : "style1"
"key2" : "style2"
"key2" : "style2" //Here I want to reuse already generated key2 in place of creating new key like key3
"key4" : "style4 "
At last generated array should be:
["key1" : "style1",
"key2" : "style2",
"key2" : "style2" , //Here I want to retain this duplicate record
"key4" : "style4"]
The value positions are also important; for example, the array below is not acceptable:
["key2" : "style2",
"key1" : "style1",
"key2" : "style2" ,
"key4" : "style4"]
I am struggling to achieve this.
Please guide me to achieve this!
Regards
My Efforts:
static Map<String, String> m = new HashMap<String, String>();
static List<Integer> list = new ArrayList<Integer>();
public static void main(String[] args) throws InterruptedException {
    String strArr[] = {"style1","style2","style2","style4"};
    /** Find indexes having duplicate values */
    for (int i = 0; i < strArr.length; i++) {
        for (int j = 0; j < strArr.length; j++) {
            // use equals(), not ==, to compare string contents
            if (strArr[i].equals(strArr[j]) && (i != j)) {
                list.add(i);
                break;
            }
        }
    }
    // One shared key for all duplicate indexes, a fresh key for the rest
    String newduplicateclass = "class" + new Date().getTime();
    Thread.sleep(1000);
    for (int i = 0; i < strArr.length; i++) {
        if (list.contains(i)) {
            System.out.println("Duplicate value found at index: " + i);
            m.put(newduplicateclass, strArr[i]);
        } else {
            String newclass = "class" + new Date().getTime();
            m.put(newclass, strArr[i]);
            Thread.sleep(1000); // so the next timestamp differs
        }
    }
    System.out.println(m);
}
But I want to do this in a smarter way, using fewer lines of code.
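For what it's worth, here is a minimal sketch of that "fewer lines" idea: remember the first key generated for each value and reuse it for duplicates, which preserves both the order and the duplicate entries. The class name and the sequential counter (a stand-in for the system-time key) are my own assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyReuse {
    // Assigns one key per distinct value, reusing it for duplicates,
    // while keeping the original array order.
    static List<String> assignKeys(String[] values) {
        Map<String, String> keyByValue = new HashMap<>();
        List<String> out = new ArrayList<>();
        int next = 1; // stand-in for "class" + new Date().getTime()
        for (String v : values) {
            String key = keyByValue.get(v);
            if (key == null) {              // first time we see this value
                key = "key" + next++;
                keyByValue.put(v, key);
            }
            out.add(key + " : " + v);       // duplicates reuse the same key
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(assignKeys(new String[]{"style1", "style2", "style2", "style4"}));
        // → [key1 : style1, key2 : style2, key2 : style2, key3 : style4]
    }
}
```

Note that with a plain counter the last key comes out as key3 rather than key4; with a time-based key each distinct value simply gets whatever timestamp is current when it is first seen.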
Edited by: 898087 on Mar 17, 2012 3:16 AM
@TPD Thanks for your help! It gave me a speed up.
At last I am done with what I wanted and below is that code:
public static void main(String[] args){
    String strArr[] = {"style7","style1","style5","style7","style2","style1","style2","style7","style2","style5","style7","style7"};
    Set<String> set = new HashSet<String>();
    for (int i = 0; i < strArr.length; i++) {
        set.add(strArr[i]);
    }
    Object objArr[] = set.toArray();
    int counter = 0;
    Map<String, Integer> map = new HashMap<String, Integer>();
    for (int j = 0; j < objArr.length; j++) {
        map.put(objArr[j].toString(), counter);
        counter++;
    }
    JSONArray jsonArr = new JSONArray();
    JSONObject jsonObj;
    for (int i = 0; i < strArr.length; i++) {
        jsonObj = new JSONObject();
        jsonObj.put(strArr[i], map.get(strArr[i]));
        jsonArr.add(jsonObj);
    }
    System.out.println(jsonArr);
    //OUTPUT: [{"style7":0},{"style1":2},{"style5":1},{"style7":0},{"style2":3},{"style1":2},{"style2":3},{"style7":0},{"style2":3},{"style5":1},{"style7":0},{"style7":0}]
}
Similar Messages
-
The ABAP/4 Open SQL array insert results in duplicate Record in database
Hi All,
I am trying to transfer 4 plants from R/3 to APO. The IM contains only these 4 plants. However, a queue gets generated in APO saying 'The ABAP/4 Open SQL array insert results in duplicate record in database'. I checked tables /SAPAPO/LOC, /SAPAPO/LOCMAP, and /SAPAPO/LOCT for duplicate entries, but none were found.
Can anybody guide me how to resolve this issue?
Thanks in advance
Sandeep Patil
Hi Sandeep,
Now try deleting your location before activating the IM again.
Use the program /SAPAPO/DELETE_LOCATIONS to delete locations.
Note :
1. Set the deletion flag (in /SAPAPO/LOC : Location -> Deletion Flag)
2. Remove all the dependencies (like transportation lane, Model ........ )
Check now and let me know.
Regards,
Siva.
-
Check duplicate record in array
suppose i have an array
int a[]={1,2,3,4,4,5,6,7,7};
how should I check for duplicate records and remove them?
Thanks...
Write yourself some pseudo-code:
// Loop though all the elements in the array from 0 to length-2.
// For each element i, loop through following elements from i+1 to length-1
// if the following element is equal to the current one, do something with it.
"do something" can mean either one of two things:
(1) Remove it (VERY messy) or
(2) Create a new array and only add in the non-duplicate elements.
The easiest thing of all would be to create a List and then create a Set from that. Why code all this logic when somebody has already done it for you? Look at the java.util.Arrays and java.util.Collections classes. Use what's been given to you.
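Following the java.util suggestion above, here is a short sketch (the class name is mine) that removes duplicates from an int array by way of a LinkedHashSet, which keeps the first-seen order:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class Dedup {
    // Returns a copy of the array with duplicates removed,
    // preserving the order in which values first appear.
    static int[] removeDuplicates(int[] a) {
        Set<Integer> seen = new LinkedHashSet<>();
        for (int x : a) {
            seen.add(x); // add() silently ignores values already present
        }
        int[] out = new int[seen.size()];
        int i = 0;
        for (int x : seen) {
            out[i++] = x;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 4, 5, 6, 7, 7};
        System.out.println(Arrays.toString(removeDuplicates(a)));
        // → [1, 2, 3, 4, 5, 6, 7]
    }
}
```

The messy in-place removal from option (1) is avoided entirely: the set does the comparison work, and a fresh array is built from it as in option (2).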
-
Duplicate records in a collection
Hi Experts,
Just now I've seen a thread related to finding duplicate records in a collection. I understand that it is not advisable to sort/filter data in a collection.
(https://forums.oracle.com/thread/2584168)
Just out of curiosity I tried to display duplicate records in a collection. Please note: this is just for practice purposes. Below is the rough code I wrote.
I'm aware of one way - can be handled effectively by passing data into a global temporary table and display the duplicate/unique records.
Can you please let me know if there is any other efficient way to do this?
declare
type emp_rec is record ( ename varchar2(40), empno number);
l_emp_rec emp_rec;
type emp_tab is table of l_emp_rec%type index by binary_integer;
l_emp_tab emp_tab;
l_dup_tab emp_tab;
l_cnt number;
n number :=1;
begin
-- Assigning values to Associative array
l_emp_tab(1).ename := 'suri';
l_emp_tab(1).empno := 1;
l_emp_tab(2).ename := 'surya';
l_emp_tab(2).empno := 2;
l_emp_tab(3).ename := 'suri';
l_emp_tab(3).empno := 1;
-- Comparing collection for duplicate records
for i in l_emp_tab.first..l_emp_tab.last
loop
l_cnt :=0;
for j in l_emp_tab.first..l_emp_tab.last
loop
if l_emp_tab(i).empno = l_emp_tab(j).empno and l_emp_tab(i).ename = l_emp_tab(j).ename then
l_cnt := l_cnt+1;
if l_cnt >= 2 then
l_dup_tab(n) := l_emp_tab(i);
n := n + 1; -- advance the index so duplicates are not overwritten
exit;
end if;
end if;
end loop;
end loop;
-- Displaying duplicate records
for i in l_dup_tab.first..l_dup_tab.last
loop
dbms_output.put_line(l_dup_tab(i).ename||' '||l_dup_tab(i).empno);
end loop;
end;
Cheers,
Suri
Dunno if this is either easier or more efficient, but it is different. The biggest disadvantage of this technique is that you have extraneous database objects (a table) to keep track of. The advantage is that you can use SQL to perform the difference checks easily.
Create 2 global temporary tables with the structure you need, load them, and use set operators (UNION [ALL], INTERSECT, MINUS) to find the differences. Or, create 1 GTT with an extra column identifying the set and use the extra column to identify the set records you need. -
Oracle 10 - Avoiding Duplicate Records During Import Process
I have two databases on different servers(DB1&DB2) and a dblink connecting the two. In DB2, I have 100 million records in table 'DB2Target'.
I tried to load 100 million more records from tables DB1Source to DB2Target on top of existing 400 million records but after an hour I got a network error from DB2.
The load failed after inserting 70% of the records. Now I have three tasks. First, I have to find the duplicate records between DB1 and DB2. Second, I have to find the remaining 30% of records missing from DB2Target.
Third, I have to re-load those remaining 30% of records. What is the best solution?
SELECT COUNT(*), A, B FROM DB2TARGET
GROUP BY A, B
HAVING COUNT(*) > 1
re-loading
MERGE INTO DB2TARGET tgt
USING DB1SOURCE src
ON (tgt.A = src.A)
WHEN NOT MATCHED THEN
INSERT (tgt.A, tgt.B)
VALUES (src.A, src.B)
Thanks for any guidance.
When I execute this I get the following error message:
SQL Error: ORA-02064: distributed operation not supported
02064. 00000 - "distributed operation not supported"
*Cause: One of the following unsupported operations was attempted
1. array execute of a remote update with a subquery that references
a dblink, or
2. an update of a long column with bind variable and an update of
a second column with a subquery that both references a dblink
and a bind variable, or
3. a commit is issued in a coordinated session from an RPC procedure
call with OUT parameters or function call.
*Action: simplify remote update statement -
Error RSMPTEXTS~: Duplicate record during EHP5 in phase SHADOW_IMPORT_INC
Hi expert,
I found this error during an EHP5 upgrade in phase SHADOW_IMPORT_INC:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SHADOW IMPORT ERRORS and RETURN CODE in SAPK-701DOINSAPBASIS.ERD
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2EETW000 Table RSMPTEXTS~: Duplicate record during array insert occured.
2EETW000 Table RSMPTEXTS~: Duplicate record during array insert occured.
1 ETP111 exit code : "8"
Here is also the last part of the log SAPK-701DOINSAPBASIS.ERD:
4 ETW000 Totally 4 tabentries imported.
4 ETW000 953984 bytes modified in database.
4 ETW000 [ dev trc,00000] Thu Aug 11 16:58:45 2011 7954092 8.712985
4 ETW000 [ dev trc,00000] Disconnecting from ALL connections: 28 8.713013
4 ETW000 [ dev trc,00000] Disconnecting from connection 0 ... 38 8.713051
4 ETW000 [ dev trc,00000] Closing user session (con=0, svc=0000000005C317C8, usr=0000000005C409A0)
4 ETW000 8382 8.721433
4 ETW000 [ dev trc,00000] Detaching from DB Server (con=0,svchp=0000000005C317C8,srvhp=0000000005C32048)
4 ETW000 7275 8.728708
4 ETW000 [ dev trc,00000] Now I'm disconnected from ORACLE 8648 8.737356
4 ETW000 [ dev trc,00000] Disconnected from connection 0 84 8.737440
4 ETW000 [ dev trc,00000] statistics db_con_commit (com_total=13, com_tx=13) 18 8.737458
4 ETW000 [ dev trc,00000] statistics db_con_rollback (roll_total=0, roll_tx=0) 14 8.737472
4 ETW000 Disconnected from database.
4 ETW000 End of Transport (0008).
4 ETW000 date&time: 11.08.2011 - 16:58:45
4 ETW000 1 warning occured.
4 ETW000 1 error occured.
1 ETP187 R3TRANS SHADOW IMPORT
1 ETP110 end date and time : "20110811165845"
1 ETP111 exit code : "8"
1 ETP199 ######################################
4 EPU202XEND OF SECTION BEING ANALYZED IN PHASE SHADOW_IMPORT_INC
I've already tried using the latest version of R3trans.
Can you help me?
thanks a lot
Franci
Hello Fransesca,
I am also facing the same error while upgrading to EHP5. Please tell me the steps to solve it if you know them.
Thanks,
Venkat -
Stop processing the duplicate record values
Hi Experts,
I have a requirement that duplicate values from the file placed on FTP should not be processed.
The scenario is File -> PI -> RFC.
For ex:
Source Data:
Name,Emp_id,DOB,Designation,Location,Joining_Date,Time_Stamp
Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:3.38pm
Shiva,654612,21-02-1982,Developer,TN,15-08-2009,24-03-2014:3.38pm
Venkat,654655,19-01-1983,Developer,TN,28-10-2010,24-03-2014:3.38pm
Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm
If the same record comes again later, like Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm, there is no need to process its values.
How can I stop processing the duplicate records? Kindly share some ideas to achieve this requirement using PI 7.1.
Best Regards,
Monikandan.
Hi,
Here is one clean solution:
1) Using FCC, read record by record by giving the end separator as newline in FCC.
Reading a delimeter separated file whose columns may jumble
2) Use two mappings in one operation mapping.
1st Mapping :
Sourcefield()-->sort(ascending)-->splitbyvalue(valuechange)-->eachvalue-->collapsecontext-->udf-->
splitbyvalue(eachvalue)-->map to all target fields
Keep the context of the source field at the top node so that all values come in a single array.
for (int i = 0; i < input.length; i++) {
    String temp[] = input[i].split(",");
    if (temp.length == 7) {
        Name.addValue(temp[0]);
        Emp_id.addValue(temp[1]);
        DOB.addValue(temp[2]);
        Designation.addValue(temp[3]);
        Location.addValue(temp[4]);
        Joining_Date.addValue(temp[5]);
        Time_Stamp.addValue(temp[6]);
    } else {
        throw new StreamTransformationException("field missing in row " + i); // up to you
    }
}
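As a side note, the duplicate check itself (treating two rows as equal when everything except the trailing Time_Stamp column matches) could be sketched in plain Java; the class and method names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DropDuplicateRows {
    // Keeps a row only the first time its non-timestamp columns are seen.
    static List<String> dropDuplicates(List<String> rows) {
        Set<String> seen = new HashSet<>();
        List<String> out = new ArrayList<>();
        for (String row : rows) {
            int cut = row.lastIndexOf(',');               // trailing Time_Stamp column
            String key = (cut >= 0) ? row.substring(0, cut) : row;
            if (seen.add(key)) {                          // add() is false for a repeat key
                out.add(row);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        rows.add("Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:3.38pm");
        rows.add("Moni,654654,11-09-1980,Developer,TN,20-02-2008,24-03-2014:9.38pm");
        System.out.println(dropDuplicates(rows).size()); // → 1
    }
}
```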
Mapping 2:Actual file structure to RFC
Regards
Venkat -
Java resultset duplicate records
I have a PostgreSQL database; the data are as follows:
Name | Description | Date
P1 | Des | 2009-01-01
P2 | Des | 2009-01-01 <-- P2 are considered to be the same as P1
P3 | Des | 2009-01-02
P4 | Des2 | 2009-01-01
P3 & P4 are not considered duplicates.
If a record's 'Description' and 'Date' is the same as others, it is logically considered as duplicate.
When I get a resultset from database.
I use an array to store the new/non-duplicated records, such as [Des,2009-01-01], [Des2,2009-01-01].
I have to compare each record against the array to check whether it is new.
If yes, process the data and store it in the array; if no, ignore it.
Is there another way to do this which would make it faster and simpler?
Edited by: uatIvan on Feb 2, 2009 8:25 PM
Before determining what your SQL statement should look like, I think you will need to investigate further
on which of the two duplicate records you want to keep and which to ignore. For example, if P1 and P2
are duplicates (description/date), then which do you want to use P1 or P2? You need additional selection criteria to determine which you want.
You shouldn't arbitrarily pick one over the other.
One thing to look at is the date. Is it really a date (month/day/year) in the database or a date/time (month,day,year, hour, minute, second).
If its a date/time, then use either P1 or P2 whichever one has the more recent time even if they have the same date.
Lastly, do you have so many records that you have to worry about getting the most efficient algorithm? If not, don't be overly concerned about performance.
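In Java terms, the "array of seen records" the original poster describes is exactly what a HashSet over a composite (Description|Date) key gives you. A minimal sketch, with hypothetical class and method names:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistinctRows {
    // Keeps the first row seen for each (description, date) pair.
    // Each row is {name, description, date}.
    static List<String[]> firstPerKey(List<String[]> rows) {
        Set<String> seen = new HashSet<>();
        List<String[]> out = new ArrayList<>();
        for (String[] r : rows) {
            String key = r[1] + "|" + r[2]; // composite duplicate key
            if (seen.add(key)) {            // add() is false for a repeat key
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[]{"P1", "Des", "2009-01-01"});
        rows.add(new String[]{"P2", "Des", "2009-01-01"});  // duplicate of P1
        rows.add(new String[]{"P3", "Des", "2009-01-02"});
        rows.add(new String[]{"P4", "Des2", "2009-01-01"});
        for (String[] r : firstPerKey(rows)) {
            System.out.println(r[0]); // prints P1, P3, P4
        }
    }
}
```

When iterating a real ResultSet, the same check applies row by row, so nothing needs to be buffered beyond the key set.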
-
How to suppress duplicate records in rtf templates
Hi All,
I am facing issue with payment reason comments in check template.
We are displaying payment reason comments. The issue is that while making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail number text field in the template.
Please provide any XML syntax to suppress duplicate records so that distinct payment reason comments are shown.
Attached screen shot, template and xml file for your reference.
Thanks,
Sagar.
I have CR XI, so the instructions are for this release.
you can create a formula, I called it cust_Matches
if = previous () then 'true' else 'false'
In your GH2 section, right-click the field, select Format Field, and select the Common tab (far left at the top).
Select the x-2 button to the right of Suppress and, in the formula field, type in
{@Cust_Matches} = 'true'
Now every time {@Cust_Matches} is true, the CustID should be suppressed.
Do the same with the other fields you wish to hide, i.e. Address, City, etc. -
Hello Friends,
we have an issue with duplicate records in a DSO; let me explain the scenario.
The header and detail data are loaded into separate DSOs,
and these 2 DSOs' data should get merged in the third one.
the Key fields in
DSO 1 : DWRECID, 0AC_DOC_NO
DSO 2 : DWRECID , DWPOSNR
DSO 3 will fetch data from the above 2.
Key fields are:
DWTSLO,
DWRECID,
DWEDAT ,
AC_DOC_NO
DWPOSNR,
0CUSTOMER
Now the data should be merged into a single record in the 3rd DSO.
DSO 1 does not have the DWPOSNR object in its data fields either.
We even have a start routine from DSO 1 to populate some values in the result fields from DSO 2.
Please provide any inputs you have to merge the data record-wise,
and also give me all the possibilities or options we have to overwrite the data, apart from mappings.
Hi,
You should go for creating an InfoSet instead of creating a third DSO.
In that InfoSet, provide the keys of the DSOs, and the common records with those keys will be merged in the InfoSet.
Hope It Helps.
Regards
Praeon -
How to create duplicate records in end routines
Hi
Key fields in DSO are:
Plant
Storage Location
MRP Area
Material
Changed Date
Data Fields:
Safety Stock
Service Level
MRP Type
Counter_1 (In flow Key figure)
Counter_2 (Out flow Key Figure)
n_ctr (Non Cumulative Key Figure)
For every record that comes in, we need to create a duplicate record. For the original record, we need to set Counter_1 to 1 and Counter_2 to 0. For the duplicate record, we need to update Changed Date to today's date, keep the rest of the values as-is, and set Counter_1 to 0 and Counter_2 to -1. Where is the best place to write this code in the DSO? Is it the end routine?
Please let me know some basic idea of the code.
Hi Uday,
I have the same situation as Suneel and have written your logic in the DSO end routine as follows:
DATA: l_t_duplicate_records TYPE TABLE OF TYS_TG_1,
l_w_duplicate_record TYPE TYS_TG_1.
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
MOVE-CORRESPONDING <result_fields> TO l_w_duplicate_record.
<result_fields>-/BIC/ZPP_ICNT = 1.
<result_fields>-/BIC/ZPP_OCNT = 0.
l_w_duplicate_record-CH_ON = sy-datum.
l_w_duplicate_record-/BIC/ZPP_ICNT = 0.
l_w_duplicate_record-/BIC/ZPP_OCNT = -1.
APPEND l_w_duplicate_record TO l_t_duplicate_records.
ENDLOOP.
APPEND LINES OF l_t_duplicate_records TO RESULT_PACKAGE.
I am getting below error:
Duplicate data record detected (DS ZPP_O01 , data package: 000001 , data record: 4 ) RSODSO_UPDATE 19
I have a different requirement for the date. Actually, my requirement is to populate the CH_ON date as mentioned below:
sort the records based on the key, get the latest CH_ON value for each unique plant/storage location/material combination, and populate
that CH_ON value for the duplicate record.
Please help me to resolve this issue.
Thanks,
Ganga -
USE of PREVIOUS command to eliminate duplicate records in counter formula
I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I have to subtract the InvDate from the PayDate, and then create a counter based on this value: if {days to pay} is greater than 30 then 1 else 0.
Then sum the {days to pay} field for each group. Groups are company, month, and supplier.
Because invoices can have multiple payments and payments can have multiple invoices, there is no way around having duplicate records for the field.
So my counter is distorted by the duplicate records, and my percentage-of-payments-over-30-days formula will not be accurate due to these duplicates.
I've tried a Distinct Count based on this formula (if {days to pay} is greater than 30 then …), and it works except that it counts 0.00 as a distinct record, so my total is off by 1 for summaries with a record whose {days to pay} is less than or equal to 30.
If I subtract 1 from the formula, then it will be inaccurate for summaries with no records over 30 days.
So I've come to this:
if Previous() does not equal
then
if {days to pay} greater than 30
then 1
else 0.00
else 0.00
But it doesn't work. I've sorted the detail section by
Does anyone have any knowledge of, or success using, the PREVIOUS command in a report?
Edited by: Fred Ebbett on Feb 11, 2010 5:41 PM
So, you have to include all data and not just use the selection criteria 'PayDate-InvDate>30'?
You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating for your >30 day formula.
I don't understand why you're telling the formula to return 0.00 in your if statement.
In order to get percentages you'll need to use the distinct count (possibly running totals again but this time no formula). Then in each section you'd need a formula that divides the two running totals.
I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, though, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if 'PayDate-InvDate>30' then 1 else 0" formula in the detail, do a sum on it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
Hope this points you in the right direction.
Eric -
Hi everyone,
I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here could suggest something to help!
My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record will appear like this:
LOAN RECORD NO. LANGUAGE CODE
123456 ENG
123456 FRE
So, although the loan only occurred once I have two instances of it in my report.
I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
ENG 1
FRE 1
A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
LOAN RECORD LANGUAGE CODE
123456 ENG, FRE
Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
Thanks!
If you create a group by loan,
then create a group by language,
and place the values in the group (loan id in the loan header):
you should only see the loan id once.
Place the language in the language group; you should only see it one time.
A group header returns the 1st value of a unique id.
then in order to calculate avoiding the duplicates
use manual running totals:
create a set for each summary you want; make sure each set has a different variable name.
MANUAL RUNNING TOTALS
RESET
The reset formula is placed in a group header or report header to reset the summary to zero for each unique record it groups by.
whileprintingrecords;
Numbervar X := 0;
CALCULATION
The calculation is placed adjacent to the field or formula that is being calculated.
(If there are duplicate values, create a group on the field that is being calculated on. If there are no duplicate records, the detail section is used.)
whileprintingrecords;
Numbervar X := x + ; ( or formula)
DISPLAY
The display is the sum of what is being calculated. This is placed in a group, page or report footer. (generally placed in the group footer of the group header where the reset is placed.)
whileprintingrecords;
Numbervar X;
X -
How to find out duplicate record contained in a flat file
Hi Experts,
For my project I have written a program for flat file upload.
Requirement 1
In the flat file there may be some duplicate records, like:
Field1 Field2
11 test1
11 test2
12 test3
13 test4
Field1 is primary key.
Can you please let me know how I can find the duplicate records?
Requirement 2
The flat file contains the header row as shown above
Field1 Field2
How can our program skip this record and start reading / inserting records from row no. 2, i.e.
11 test1
onwards.
Thanks
S
FORM upload1.
DATA : wf_title TYPE string,
lt_filetab TYPE filetable,
l_separator TYPE char01,
l_action TYPE i,
l_count TYPE i,
ls_filetab TYPE file_table,
wf_delemt TYPE rollname,
wa_fieldcat TYPE lvc_s_fcat,
tb_fieldcat TYPE lvc_t_fcat,
rows_read TYPE i,
p_error TYPE char01,
l_file TYPE string.
DATA: wf_object(30) TYPE c,
wf_tablnm TYPE rsdchkview.
wf_object = 'myprogram'.
DATA i TYPE i.
DATA:
lr_mdmt TYPE REF TO cl_rsdmd_mdmt,
lr_mdmtr TYPE REF TO cl_rsdmd_mdmtr,
lt_idocstate TYPE rsarr_t_idocstate,
lv_subrc TYPE sysubrc.
TYPES : BEGIN OF test_struc,
/bic/myprogram TYPE /bic/oimyprogram,
txtmd TYPE rstxtmd,
END OF test_struc.
DATA : tb_assum TYPE TABLE OF /bic/pmyprogram.
DATA: wa_ztext TYPE /bic/tmyprogram,
myprogram_temp TYPE ziott_assum,
wa_myprogram TYPE /bic/pmyprogram.
DATA : test_upload TYPE STANDARD TABLE OF test_struc,
wa2 TYPE test_struc.
DATA : wa_test_upload TYPE test_struc,
ztable_data TYPE TABLE OF /bic/pmyprogram,
ztable_text TYPE TABLE OF /bic/tmyprogram,
wa_upld_text TYPE /bic/tmyprogram,
wa_upld_data TYPE /bic/pmyprogram,
t_assum TYPE ziott_assum.
DATA : wa1 LIKE test_upload.
wf_title = text-026.
CALL METHOD cl_gui_frontend_services=>file_open_dialog
EXPORTING
window_title = wf_title
default_extension = 'txt'
file_filter = 'Tab delimited Text Files (*.txt)'
CHANGING
file_table = lt_filetab
rc = l_count
user_action = l_action
EXCEPTIONS
file_open_dialog_failed = 1
cntl_error = 2
OTHERS = 3. "#EC NOTEXT
IF sy-subrc <> 0.
EXIT.
ENDIF.
LOOP AT lt_filetab INTO ls_filetab.
l_file = ls_filetab.
ENDLOOP.
CHECK l_action = 0.
IF l_file IS INITIAL.
EXIT.
ENDIF.
l_separator = 'X'.
wa_fieldcat-fieldname = 'test'.
wa_fieldcat-dd_roll = wf_delemt.
APPEND wa_fieldcat TO tb_fieldcat.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
CLEAR wa_test_upload.
* Upload file from front-end (PC)
* File format is tab-delimited ASCII
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_file
has_field_separator = l_separator
TABLES
* data_tab = i_mara
data_tab = test_upload
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
EXIT.
ELSE.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
IF test_upload IS NOT INITIAL.
DESCRIBE TABLE test_upload LINES rows_read.
CLEAR : wa_test_upload,wa_upld_data.
LOOP AT test_upload INTO wa_test_upload.
CLEAR : p_error.
rows_read = sy-tabix.
IF wa_test_upload-/bic/myprogram IS INITIAL.
p_error = 'X'.
MESSAGE s153 WITH wa_test_upload-/bic/myprogram sy-tabix.
CONTINUE.
ELSE.
TRANSLATE wa_test_upload-/bic/myprogram TO UPPER CASE.
wa_upld_text-txtmd = wa_test_upload-txtmd.
wa_upld_text-txtsh = wa_test_upload-txtmd.
wa_upld_text-langu = sy-langu.
wa_upld_data-chrt_accts = 'xyz1'.
wa_upld_data-co_area = '12'.
wa_upld_data-/bic/zxyzbcsg = 'Iy'.
wa_upld_data-objvers = 'A'.
wa_upld_data-changed = 'I'.
wa_upld_data-/bic/zass_mdl = 'rrr'.
wa_upld_data-/bic/zass_typ = 'I'.
wa_upld_data-/bic/zdriver = 'yyy'.
wa_upld_text-langu = sy-langu.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_data.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_text.
APPEND wa_upld_data TO ztable_data.
APPEND wa_upld_text TO ztable_text.
ENDIF.
ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ztable_data.
DELETE ADJACENT DUPLICATES FROM ztable_text.
IF ztable_data IS NOT INITIAL.
CALL METHOD cl_rsdmd_mdmt=>factory
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_r_mdmt = lr_mdmt
EXCEPTIONS
invalid_iobjnm = 1
OTHERS = 2.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
**Lock the Infoobject to update
CALL FUNCTION 'RSDG_IOBJ_ENQUEUE'
EXPORTING
i_objnm = wf_object
i_scope = '1'
i_msgty = rs_c_error
EXCEPTIONS
foreign_lock = 1
sys_failure = 2.
IF sy-subrc = 1.
MESSAGE i107(zddd_rr) WITH wf_object sy-msgv2.
EXIT.
ELSEIF sy-subrc = 2.
MESSAGE i108(zddd_rr) WITH wf_object.
EXIT.
ENDIF.
*****Update Master Table
IF ztable_data IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'M'
* i_t_attr = lt_attr
TABLES
i_t_table = ztable_data
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '054'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
MESSAGE e054(zddd_rr) WITH 'myprogram'.
ELSE.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'S'
txtnr = '053'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
*endif.
*****update Text Table
IF ztable_text IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'T'
TABLES
i_t_table = ztable_text
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '055'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
ENDIF.
ELSE.
MESSAGE s178(zddd_rr).
ENDIF.
ENDIF.
COMMIT WORK.
CALL FUNCTION 'RSD_CHKTAB_GET_FOR_CHA_BAS'
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_chktab = wf_tablnm
EXCEPTIONS
name_error = 1.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
****Release locks on Infoobject
CALL FUNCTION 'RSDG_IOBJ_DEQUEUE'
EXPORTING
i_objnm = 'myprogram'
i_scope = '1'.
ENDIF.
ENDIF.
PERFORM data_selection .
PERFORM update_alv_grid_display.
CALL FUNCTION 'MESSAGES_SHOW'.
ENDFORM.
Can you please let me know how I can find the duplicate records?
You need to split the records from the flat file structure into your internal table and use DELETE ADJACENT DUPLICATES COMPARING the key fields:
split flat_str into wa_f1 wa_f2 wa_f3 at tab_space.
Check for duplicate record in SQL database before doing INSERT
Hey guys,
This is part of a PowerShell app doing a SQL insert, but my question really relates to the SQL insert. I need to check the database PRIOR to doing the insert for duplicate records, and if one exists then that record needs
to be overwritten. I'm not sure how to accomplish this task. My back end is a SQL Server 2000. I'm piping the data into my insert statement from a PowerShell FileSystemWatcher app. In my scenario, if a file dumped into the directory starts with 'I' it gets
written to a SQL database; otherwise it gets written to an Access table. I know, silly, but that's the environment I'm in. Haha.
Any help is appreciated.
Thanks in Advance
Rich T.
#### DEFINE WATCH FOLDERS AND DEFAULT FILE EXTENSION TO WATCH FOR ####
$cofa_folder = '\\cpsfs001\Data_pvs\TestCofA'
$bulk_folder = '\\cpsfs001\PVS\Subsidiary\Nolwood\McWood\POD'
$filter = '*.tif'
$cofa = New-Object IO.FileSystemWatcher $cofa_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
$bulk = New-Object IO.FileSystemWatcher $bulk_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
#### CERTIFICATE OF ANALYSIS AND PACKAGE SHIPPER PROCESSING ####
Register-ObjectEvent $cofa Created -SourceIdentifier COFA/PACKAGE -Action {
$name = $Event.SourceEventArgs.Name
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
#### CERTIFICATE OF ANALYSIS PROCESS BEGINS ####
$test=$name.StartsWith("I")
if ($test -eq $true) {
$pos = $name.IndexOf(".")
$left=$name.substring(0,$pos)
$pos = $left.IndexOf("L")
$tempItem=$left.substring(0,$pos)
$lot = $left.Substring($pos + 1)
$item=$tempItem.Substring(1)
Write-Host "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp" -fore green
Out-File -FilePath c:\OutputLogs\CofA.csv -Append -InputObject "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"
start-sleep -s 5
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=PVSNTDB33; Initial Catalog=adagecopy_daily; Integrated Security=TRUE")
$conn.Open()
$insert_stmt = "INSERT INTO in_cofa_pvs (in_item_key, in_lot_key, imgfileName, in_cofa_crtdt) VALUES ('$item','$lot','$name','$timestamp')"
$cmd = $conn.CreateCommand()
$cmd.CommandText = $insert_stmt
$cmd.ExecuteNonQuery()
$conn.Close()
}
#### PACKAGE SHIPPER PROCESS BEGINS ####
elseif ($test -eq $false) {
$pos = $name.IndexOf(".")
$left=$name.substring(0,$pos)
$pos = $left.IndexOf("O")
$tempItem=$left.substring(0,$pos)
$order = $left.Substring($pos + 1)
$shipid=$tempItem.Substring(1)
Write-Host "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp" -fore green
Out-File -FilePath c:\OutputLogs\PackageShipper.csv -Append -InputObject "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"
}
}
Rich Thompson
Hi
Since SQL Server 2000 has been out of support, I recommend you to upgrade the SQL Server 2000 to a higher version, such as SQL Server 2005 or SQL Server 2008.
According to your description, you can try the following methods to check for duplicate records in SQL Server.
1. You can use RAISERROR to check for the duplicate record: if it exists then RAISERROR, otherwise insert accordingly. A code block is given below:
IF EXISTS (SELECT 1 FROM TableName AS t
WHERE t.Column1 = @Column1
AND t.Column2 = @Column2)
BEGIN
RAISERROR('Duplicate records', 18, 1)
END
ELSE
BEGIN
INSERT INTO TableName (Column1, Column2, Column3)
SELECT @Column1, @Column2, @Column3
END
2. Also you can create UNIQUE INDEX or UNIQUE CONSTRAINT on the column of a table, when you try to INSERT a value that conflicts with the INDEX/CONSTRAINT, an exception will be thrown.
Add the unique index:
CREATE UNIQUE INDEX Unique_Index_name ON TableName(ColumnName)
Add the unique constraint:
ALTER TABLE TableName
ADD CONSTRAINT Unique_Contraint_Name
UNIQUE (ColumnName)
Thanks
Lydia Zhang