Remove duplicates in oracle table
Hi,
I want to remove duplicates from an Account table.
It contains 2 columns, Account_id and Account_type.
values in Account table are
Account_id Account_type
1 GPR
1 GPR
1 GPR
I want only one entry and to remove the other entries with Account_Id = 1
Thanks,
Petri
Petri wrote: (question quoted above)
Hi Petri,
Depending on how important performance is for you: go for option 1 if performance is key, otherwise go for option 2. Option 3 is highly recommended if this is a one-time exercise:
Option 1. For OLTP [performance is important]
DELETE FROM account_table
WHERE ROWID IN
   (SELECT rid
      FROM (SELECT ROWID AS rid,
                   ROW_NUMBER ()
                      OVER (PARTITION BY Account_id, Account_type
                            ORDER BY Account_id, Account_type)
                      rn
              FROM account_table)
     WHERE rn > 1);
Option 2. [If you are playing around]
DELETE FROM account_table
WHERE rowid not in ( SELECT min(rowid)
FROM account_table
GROUP BY Account_id,
Account_type);
Option 3. [If you seriously want to make sure no more dirty data gets in]
a) Create a temporary table account_table_temp
b) Copy the de-duplicated rows into it:
INSERT INTO ACCOUNT_TABLE_TEMP
(SELECT *
FROM ACCOUNT_TABLE
WHERE ROWID IN ( SELECT MIN (ROWID)
FROM ACCOUNT_TABLE
GROUP BY ACCOUNT_ID, ACCOUNT_TYPE));
c) Truncate ACCOUNT_TABLE and copy the rows back from ACCOUNT_TABLE_TEMP.
d) Create a unique constraint so that you won't have duplicates in future.
Thanks,
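The keep-one-row pattern in Option 1 can be tried outside Oracle too. A minimal, purely illustrative sketch with Python's sqlite3, with SQLite's rowid standing in for Oracle's ROWID (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_table (account_id INTEGER, account_type TEXT)")
conn.executemany("INSERT INTO account_table VALUES (?, ?)",
                 [(1, "GPR"), (1, "GPR"), (1, "GPR")])

# Number the rows inside each (account_id, account_type) group and
# delete every row whose row number is greater than 1.
conn.execute("""
    DELETE FROM account_table
    WHERE rowid IN (
        SELECT rid FROM (
            SELECT rowid AS rid,
                   ROW_NUMBER() OVER (
                       PARTITION BY account_id, account_type
                       ORDER BY account_id, account_type) AS rn
            FROM account_table)
        WHERE rn > 1)
""")

rows = conn.execute("SELECT * FROM account_table").fetchall()
print(rows)  # [(1, 'GPR')]
```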
Similar Messages
-
Union all-distinct and remove duplicates from nested table?
Hi all,
I need a select that will bulk collect some data into my nested table.
I have two tables from which I need to select all the accounts only once (i.e. remove duplicates).
Tried to search on the forum... but no luck.
I have a table with one column:
create table a1(account_no number);
and a second table with 3 columns.
create table a2 (account_no number, name number, descr varchar2 (100));  -- descr, because DESC is a reserved word
I have a nested table like:
table of a2%rowtype;
Can I select from these two tables in one select and put just one row per account into my nested table?
If in a2 I have rows like:
1 'test' 'test2'
2 aaaa aa
and in a1 a row like:
1
I want my nested table to end up with either (1, null, null) and (2, aaaa, aa), or (1, test, test2) and (2, aaaa, aa). It does not matter which of the duplicate rows I insert.
Second question:
If I use:
BANNER
Oracle9i Release 9.2.0.5.0 - Production
PL/SQL Release 9.2.0.5.0 - Production
CORE 9.2.0.6.0 Production
TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production
SQL>
what is the best solution to remove duplicates from a nested table like mine?
I thought that I could build another nested table, loop over my first one, and for each row check whether the same account appeared in previous lines.
it will be like:
for i in 1 .. nt_first.count loop
   for j in 1 .. i loop
      -- check if my line was in previous lines; if it was, do not move it into my second collection
   end loop;
end loop;
Is this the best option in Oracle 9i?
I have a table with one column:
create table a1(account_no number);
and a second table with 3 columns.
create table a2 (account_no number, name number, descr varchar2 (100));
all I need are the accounts. the rest are extra data that I can ignore in this step. But if it is available, it is ok to use it.
Is using one select in this case much better than trying to remove duplicates by parsing some nested table with FOR loops many times?
Thanks

hi,
try to use union. Union automatically removes duplicates between two or more tables.
with t1 AS
(select '3300000' account_no FROM DUAL UNION
select '6500000' account_no FROM DUAL union
select '6500000' account_no FROM DUAL union
select '6500000' account_no FROM DUAL union
select '6500000' account_no FROM DUAL)
select * from t1;

ACCOUNT_NO
3300000
6500000 -
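For the original question (one row per account across a1 and a2, preferring a2's extra columns when available), an anti-join before the UNION does the job in a single select. A hedged, illustrative sketch using Python's sqlite3; the column is named descr here because desc is a reserved word:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a1 (account_no INTEGER)")
conn.execute("CREATE TABLE a2 (account_no INTEGER, name TEXT, descr TEXT)")
conn.execute("INSERT INTO a1 VALUES (1)")
conn.execute("INSERT INTO a2 VALUES (1, 'test', 'test2')")
conn.execute("INSERT INTO a2 VALUES (2, 'aaaa', 'aa')")

# Take every a2 row, and add a bare a1 row only when a2 has no row
# for that account, so each account appears exactly once.
rows = conn.execute("""
    SELECT account_no, name, descr FROM a2
    UNION
    SELECT account_no, NULL, NULL FROM a1
    WHERE account_no NOT IN (SELECT account_no FROM a2)
    ORDER BY account_no
""").fetchall()
print(rows)  # [(1, 'test', 'test2'), (2, 'aaaa', 'aa')]
```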
Trick to remove duplicate entries from tables ?
hi.
i have 53 tables which have duplicate entries, and the names of all 53 tables are listed in the top_t table.
can anyone provide a solution to show those duplicate entries in each table and, if required, remove them?
daily i am removing duplicates manually... it's too tedious now!
can anyone help me out?

Well, I suppose if the duplication is such that
SELECT DISTINCT * FROM tablename;
gives you the required result, then you could have a procedure that makes a copy of the table, deletes/truncates the original, then inserts the distinct values back into it.
In 10g you could even use flashback to avoid the temp copy - but it also means you can't use TRUNCATE so whether it's any more efficient I'm not sure. But just for fun and since it's urgent:
CREATE OR REPLACE PROCEDURE dedupe_table
( p_table_name user_tables.table_name%TYPE )
IS
k_start_timestamp TIMESTAMP := SYSTIMESTAMP;
BEGIN
SAVEPOINT start_of_dedupe;
BEGIN
EXECUTE IMMEDIATE 'DELETE ' || p_table_name;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO start_of_dedupe;
RAISE_APPLICATION_ERROR
( -20000
, 'Error deleting ' || UPPER(p_table_name) ||
CHR(10) || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
, TRUE );
END;
BEGIN
EXECUTE IMMEDIATE
'INSERT INTO ' || p_table_name ||
' SELECT DISTINCT * FROM ' || p_table_name || ' AS OF TIMESTAMP :b1'
USING k_start_timestamp;
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO start_of_dedupe;
RAISE_APPLICATION_ERROR
( -20000
, 'Error repopulating ' || UPPER(p_table_name) ||
CHR(10) || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
, TRUE );
END;
END dedupe_table;
SQL> select * from wr_test;
COL1 C C
1 A B
1 A B
2 C D
2 C D
4 rows selected.
SQL> BEGIN
2 dedupe_table('WR_TEST');
3 END;
4 /
PL/SQL procedure successfully completed.
SQL> select * from wr_test;
COL1 C C
1 A B
2 C D
2 rows selected.

I make no claims for robustness, efficiency or human safety.
Edited by: William Robertson on Sep 24, 2009 7:12 PM -
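The copy/truncate/reinsert idea behind the procedure above can be shown without flashback by staging the distinct rows first. A minimal Python sqlite3 sketch using the thread's sample data, illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wr_test (col1 INTEGER, c1 TEXT, c2 TEXT)")
conn.executemany("INSERT INTO wr_test VALUES (?, ?, ?)",
                 [(1, "A", "B"), (1, "A", "B"), (2, "C", "D"), (2, "C", "D")])

# Same idea as the procedure above, without flashback:
# stage the distinct rows, empty the table, copy them back.
conn.execute("CREATE TEMP TABLE stage AS SELECT DISTINCT * FROM wr_test")
conn.execute("DELETE FROM wr_test")
conn.execute("INSERT INTO wr_test SELECT * FROM stage")
conn.execute("DROP TABLE stage")

count = conn.execute("SELECT COUNT(*) FROM wr_test").fetchone()[0]
print(count)  # 2
```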
Remove duplicate entry from table
Hello all,
In my one table:
File_Stage_Log (File_Stage_Log_ID int identity(1,1), File_ID int, Quarter_Date nvarchar(50), Stage_ID int)
I have entered duplicate rows by mistake for the same Quarter_Date,
something like ..
FILE_ID  Quarter  FILE_STAGE_LOG_ID  STAGE_ID
22401    Dec-13   233091             450
22401    Dec-13   244116             420
22401    Mar-14   233095             450
22401    Mar-14   237478             405
22401    Jun-14   237479             405
22401    Jun-14   233099             450
22401    Sep-14   233102             450
22401    Sep-14   237480             405
22401    Dec-14   237481             405
22401    Dec-14   227275             420
there are too many files which have the same duplication..
now, above you can see that the Dec-13 quarter appears twice for a single file.
tell me the way to delete one entry from the table per file
so that i have output at the end like ...
FILE_ID  Quarter  FILE_STAGE_LOG_ID  STAGE_ID
22401    Dec-13   233091             450
22401    Mar-14   233095             450
22401    Jun-14   237479             405
22401    Sep-14   233102             450
22401    Dec-14   237481             405
Please help me with easiest possible way ..
Dilip Patil..

How do you determine which one out of the duplicates is to be kept? As per the output it doesn't follow any pattern,
so it may be this
--DELETE t
SELECT *
FROM
(
SELECT ROW_NUMBER() OVER (PARTITION BY FILE_ID, Quarter_Date ORDER BY FILE_ID) AS Rn, *
FROM FileStageLog
) t
WHERE Rn > 1
Run the select above to see the records to be removed; once happy, uncomment the DELETE, comment out the SELECT *, and run the query to do the delete.
If it doesn't give the expected records, explain on what basis you want to identify the records to be deleted.
Visakh -
Commit interval while removing duplicates
I am trying to remove duplicates from a table with over 10 million records. The query below is working fine but it doesn't contain any COMMIT interval. I have to commit after every 20k or 30k deleted records, for which a loop would be necessary.
Can anyone help me with it please ?
Query:
delete from customer
where rowid in
  (select rid from
    (select rowid as rid,
            row_number()
            over
            (partition by custnbr order by custnbr) dup
     from customer)
   where dup > 1);

937851 wrote:
I am trying to remove duplicates from a table with over 10 million records. Below query is working fine but it doesn't contain any COMMIT interval.

It is more efficient to delete all the rows and commit once.
10M rows is a good number, but modern systems should be able to support the delete. If you are worried about running out of rollback/undo you can (only if necessary!) batch the deletes and perform periodic commits, something like (again, only if necessary):
delete from whatever
where key in (result from subquery)
and rownum <= 1000000;
delete from whatever
where key in (result from subquery)
and rownum <= 1000000;
delete from whatever
where key in (result from subquery)
and rownum <= 1000000;
. . . -
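The batch-and-commit idea above can be sketched as a loop that deletes a bounded number of duplicate rows and commits until nothing is left. An illustrative Python sqlite3 version; the table and column names come from the thread, but the LIMIT-based batching is an assumption, not the poster's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (custnbr INTEGER)")
conn.executemany("INSERT INTO customer VALUES (?)",
                 [(n % 100,) for n in range(1000)])  # 10 copies of each custnbr

BATCH = 250
while True:
    # Delete at most BATCH duplicate rows (anything that is not the
    # first row of its custnbr group), then commit.
    cur = conn.execute("""
        DELETE FROM customer
        WHERE rowid IN (
            SELECT rowid FROM customer t
            WHERE rowid <> (SELECT MIN(rowid) FROM customer
                            WHERE custnbr = t.custnbr)
            LIMIT ?)
    """, (BATCH,))
    conn.commit()
    if cur.rowcount == 0:  # nothing deleted: all duplicates are gone
        break

remaining = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(remaining)  # 100
```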
How do i remove duplicate records?
I have a table in which almost all the records have been duplicated. I want to remove the duplicates now. How do i remove them?
Hi,
Here is the delete statement which will remove duplicate rows from the table:
delete from table_name a
where rowid not in (select max(rowid) from table_name b where a.rep_col = b.rep_col);
Hope this Helps.
Regards,
Ganesh R -
Best way to remove duplicates based on multiple tables
Hi,
I have a mechanism which loads flat files into multiple tables (can be up to 6 different tables) using external tables.
Whenever a new file arrives, I need to insert duplicate rows to a side table, but the duplicate rows are to be searched in all 6 tables according to a given set of columns which exist in all of them.
In the SQL Server version of the same mechanism (which I'm migrating to Oracle), it uses an additional "UNIQUE" table with only 2 columns (Checksum1, Checksum2) which hold the checksum values of 2 different sets of columns per inserted record. When a new file arrives it computes these 2 checksums for every record and looks them up in the unique table to avoid searching all the different tables.
We know that working with checksums is not bulletproof but with those sets of fields it seems to work.
My questions are:
should I use the same checksums mechanism? if so, should I use the owa_opt_lock.checksum function to calculate the checksums?
Or should I look for duplicates in all tables one after the other (indexing some of the columns we check for duplicates with)?
Note:
These tables are partitioned with day partitions and can be very large.
Any advice would be welcome.
Thanks.

> I need to keep duplicate rows in a side table and not load them into table1...table6
Does that mean that you don't want ANY row if it has a duplicate on your 6 columns?
Let's say I have six records that have identical values for your 6 columns. One record meets the condition for table1, one for table2 and so on.
Do you want to keep one of these records and put the other 5 in the side table? If so, which one should be kept?
Or do you want all 6 records put in the side table?
You could delete the duplicates from the temp table as the first step. Or better
1. add a new column WHICH_TABLE NUMBER to the temp table
2. update the new column to -1 for records that are dups.
3. update the new column (might be done with one query) to set the table number based on the conditions for each table
4. INSERT INTO TABLE1 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 1
INSERT INTO TABLE6 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 6
When you are done the WHICH_TABLE will be flagged with
1. NULL if a record was not a DUP but was not inserted into any of your tables - possible error record to examine
2. -1 if a record was a DUP
3. 1 - if the record went to table 1 (2 for table 2 and so on)
This 'flag and then select' approach is more performant than deleting records after each select. Especially if the flagging can be done in one pass (full table scan).
See this other thread (or many, many others on the net) from today for how to find and remove duplicates
Best way of removing duplicates -
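The two-checksum lookup described in this thread can be sketched in a few lines. A hypothetical Python illustration: the column sets and the hash choice (MD5) are assumptions for the example, and, as the poster notes, checksums are not bulletproof:

```python
import hashlib

def checksums(row, set1_cols, set2_cols):
    # Two checksums over two different column sets, mirroring the
    # (Checksum1, Checksum2) pair in the "UNIQUE" helper table.
    def h(cols):
        raw = "|".join(str(row[c]) for c in cols)
        return hashlib.md5(raw.encode()).hexdigest()
    return h(set1_cols), h(set2_cols)

seen = set()       # stands in for the two-column unique table
duplicates = []    # rows routed to the side table
rows = [
    {"id": 1, "name": "a", "amount": 10},
    {"id": 2, "name": "b", "amount": 20},
    {"id": 1, "name": "a", "amount": 10},  # duplicate of the first row
]
for row in rows:
    key = checksums(row, ["id", "name"], ["amount"])
    if key in seen:
        duplicates.append(row)
    else:
        seen.add(key)

print(len(duplicates))  # 1
```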
Remove carriage returns from a field in an oracle table
I have a field that is defined as a LONG in my oracle table; the data contained in this field has carriage returns/line feeds (it's free form text); as i'm selecting data from this field, i need the carriage returns removed so that the data from this field all appears on one line.
I tried using the TRANSLATE function to convert the carriage returns to something else, but that doesn't work.
Example:
Select comment from Notes:
COMMENT
the applicant called for an appointment;
an exam was scheduled for 4/1/05 at 9am;
called applicant to confirm app
this needs to be extracted as: "the applicant called for an appointment; an exam was scheduled for 4/1/05 at 9am; called applicant to confirm app"
How can i do this? Can the decode function be used to remove the carriage returns in this field?

When I used translate it gives the result correctly:
SQL> ed
Wrote file afiedt.buf
1 select translate('the applicant called for an appointment;
2 an exam was scheduled for 4/1/05 at 9am;
3 called applicant to confirm app
4 this needs to be extracted as: "the applicant called for an appointment; an exam was scheduled
5 How can i do this? Can the decode function be used to remove the carriage returns in this field
6* ',' ') from dual
SQL> /
TRANSLATE('THEAPPLICANTCALLEDFORANAPPOINTMENT;ANEXAMWASSCHEDULEDFOR4/1/05AT9AM;CALLEDAPPLICANTTOCONF
the applicant called for an appointment; an exam was scheduled for 4/1/05 at 9am; called applicant t
SQL> ed
Wrote file afiedt.buf
1 select 'the applicant called for an appointment;
2 an exam was scheduled for 4/1/05 at 9am;
3 called applicant to confirm app
4 this needs to be extracted as: "the applicant called for an appointment; an exam was scheduled
5* How can i do this? Can the decode function be used to remove the carriage returns in this field
SQL> /
'THEAPPLICANTCALLEDFORANAPPOINTMENT;ANEXAMWASSCHEDULEDFOR4/1/05AT9AM;CALLEDAPPLICANTTOCONFIRMAPPTHIS
the applicant called for an appointment;
an exam was scheduled for 4/1/05 at 9am;
called applicant to confirm app
this needs to be extracted as: "the applicant called for an appointment; an exam was scheduled for 4
How can i do this? Can the decode function be used to remove the carriage returns in this field?
SQL> -
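A cleaner alternative to the TRANSLATE transcript above is nested REPLACE calls on CHR(13) and CHR(10). The same idea in plain Python, for illustration only:

```python
comment = ("the applicant called for an appointment;\r\n"
           "an exam was scheduled for 4/1/05 at 9am;\r\n"
           "called applicant to confirm app")

# Same effect as REPLACE(REPLACE(comment, CHR(13), ''), CHR(10), ' '):
# drop carriage returns, turn line feeds into spaces.
flat = comment.replace("\r", "").replace("\n", " ")
print(flat)
```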
First attempt to remove duplicate rows from a table...
I have seen many people asking for a way to remove duplicate rows from data. I made up a fairly simple script. It adds a column to the table with the cell selected in it, and adds the concatenation of the data to the left into that new column. then it reads that into a list, and walks through that list to find any that are listed twice. Any that are it marks for DELETE.
It then walks through to find each one marked for delete and removes them (you must go from bottom to top to do this, otherwise your row markings for delete don't match up to the original rows anymore). Last is to delete the column we added.
tell application "Numbers"
activate
tell document 1
-- DETERMINE THE CURRENT SHEET
set currentsheetindex to 0
repeat with i from 1 to the count of sheets
tell sheet i
set x to the count of (tables whose selection range is not missing value)
end tell
if x is not 0 then
set the currentsheetindex to i
exit repeat
end if
end repeat
if the currentsheetindex is 0 then error "No sheet has a selected table."
-- GET THE TABLE WITH CELLS
tell sheet currentsheetindex
set the current_table to the first table whose selection range is not missing value
end tell
end tell
log current_table
tell current_table
set list1 to {}
add column after column (count of columns)
set z to (count of columns)
repeat with j from 1 to (count of rows)
set m to ""
repeat with i from 1 to (z - 1)
set m to m & value of (cell i of row j)
end repeat
set value of cell z of row j to m
end repeat
set MyRange to value of every cell of column z
repeat with i from 1 to (count of items of MyRange)
set n to item i of MyRange
if n is in list1 then
set end of list1 to "Delete"
else
set end of list1 to n
end if
end repeat
repeat with i from (count of items of list1) to 1 by -1
set n to item i of list1
if n = "Delete" then remove row i
end repeat
remove column z
end tell
end tell
Let me know how it works for y'all; it worked well on my machine, but I know localization sometimes causes errors when I post things.
Thanks,
Jason
Message was edited by: jaxjason

Hi Jason
I hope that with the added comments it will be clear.
Ask if something is still opaque.
set {current_Range, current_table, current_Sheet, current_Doc} to my getSelection()
tell application "Numbers09"
tell document current_Doc to tell sheet current_Sheet to tell table current_table
set list1 to {}
add column after column (count of columns)
set z to (count of columns)
repeat with j from 1 to (count of rows)
set m to ""
tell row j
repeat with i from 1 to (z - 1)
set m to m & value of cell i
end repeat
set value of cell z to m
end tell
end repeat
set theRange to value of every cell of column z
repeat with i from (count of items of theRange) to 1 by -1
(* As I scan the table backwards (starting from the bottom row),
I may remove a row immediately when I discover that it is a duplicate *)
set n to item i of theRange
if n is in list1 then
remove row i
else
set end of list1 to n
end if
end repeat
remove column z
end tell
end tell
--=====
on getSelection()
local _, theRange, theTable, theSheet, theDoc, errMsg, errNum
tell application "Numbers09" to tell document 1
set theSheet to ""
repeat with i from 1 to the count of sheets
tell sheet i
set x to the count of tables
if x > 0 then
repeat with y from 1 to x
(* Open a trap to catch the selection range.
The structure of this item
«class
can't be coerced as text.
So, when the instruction (selection range of table y) as text
receives 'missing value' it behaves correctly and the loop continues.
But, when it receive THE true selection range, it generates an error
whose message is errMsg and number is errNum.
We grab them just after the on error instruction *)
try
(selection range of table y) as text
on error errMsg number errNum (*
As we reached THE selection range, we are here.
We grab the errMsg here. In French it looks like:
"Impossible de transformer «class
The handler cuts it in pieces using quots as delimiters.
item 1 (_) "Impossible de transformer «class » "
item 2 (theRange) "A2:M25"
item 3 (_) " of «class NmTb» "
item 4 (theTable) "Tableau 1"
item 5 (_) " of «class NmSh» "
item 6 (theSheet) "Feuille 1"
item 7 (_) " of document "
item 8 (theDoc) "Sans titre"
item 9 ( I drop it ) " of application "
item 10 ( I drop it ) "Numbers"
item 11 (I drop it ) " en type string."
I grab these items in the list
{_, theRange, _, theTable, _, theSheet, _, theDoc}
Yes, underscore is a valid name of variable.
I often uses it when I want to drop something.
An alternate way would be to code:
set ll to my decoupe(errMsg, quote)
set theRange to item 2 of ll
set theTable to item 4 of ll
set theSheet to item 8 of ll
set theDoc to item 10 of ll
it works exactly the same but it's not so elegant.
set {_, theRange, _, theTable, _, theSheet, _, theDoc} to my decoupe(errMsg, quote)
exit repeat (*
as we grabbed the interesting data, we exit the loop indexed by y.*)
end try
end repeat -- y
if theSheet > "" then exit repeat (*
If we are here after grabbing the data, theSheet is not "" so we exit the loop indexed by i *)
end if
end tell -- sheet
end repeat -- i
(* We may arrive here with two kinds of results.
if we grabbed a selection, theSheet is something like "Feuille 1"
if we didn't grab a selection, theSheet is the "" defined on entry
and we generate an error which is not trapped so it stops the program *)
if theSheet = "" then error "No sheet has a selected table."
end tell -- document
(* Now, we send to the caller the interesting data:
theRange "A2:M25"
theTable "Tableau 1"
theSheet "Feuille 1"
theDoc "Sans titre" *)
return {theRange, theTable, theSheet, theDoc}
end getSelection
--=====
on decoupe(t, d)
local l
set AppleScript's text item delimiters to d (*
Cut the text t in pieces using d as delimiter *)
set l to text items of t
set AppleScript's text item delimiters to "" (*
Resets the delimiters to the standard value. *)
(* Send the list to the caller *)
return l
end decoupe
--=====
Have fun
And if it's not clear enough, you may ask for more explanations.
Yvan KOENIG (from FRANCE mardi 27 janvier 2009 21:49:19) -
Powershell and oracle and duplicate data in table
I have created a powershell script to insert data into an oracle table from a csv file, and I want to know how to stop inserting duplicate rows when the powershell script runs multiple times. My powershell script is as follows:
'{0,-60}{1,20}' -f "Insert TEEN PREGNANCY ICD9 AND ICD10 CODES into the su_edit_detail ", (Get-Date -Format yyyyMMdd:hhmmss);

$myQuery =
SET PAGES 600;
SET LINES 4000;
SET ECHO ON;
SET serveroutput on;
WHENEVER sqlerror exit sql.sqlcode;

foreach ($file in dir "$($UCMCSVLoadLocation2)" -recurse -filter "*.csv")
{
    $fileContents = Import-Csv -Path $file.fullName
    foreach ($line in $fileContents)
    {
        $null = Execute-NonQuery-Oracle -sql
        insert into SU_EDIT_DETAIL(EDIT_FUNCTION,TABLE_FUNCTION,CODE_FUNCTION,CODE_TYPE,CODE_BEGIN,CODE_END,EXCLUDE,INCLUDE_X,OP_NBR,TRANSCODE,VOID,YMDEFF,YMDEND,YMDTRANS)
        Values ('$($line."EDIT_FUNCTION")','$($line."TABLE_FUNCTION")','$($line."CODE_FUNCTION")','$($line."CODE_TYPE")','$($line."CODE_BEGIN")','$($line."CODE_END")',' ',' ', 'MIS', 'C', ' ', 20141001, 99991231, 20131120)
    }
}
Vijay Patel

Please read "PLEASE READ BEFORE POSTING".
This forum is about the Small Basic programming language.
Try another forum.
Jan [ WhTurner ] The Netherlands -
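Setting the misrouted answer aside, the usual fix for a re-runnable load like this is a unique constraint over the identifying columns plus an insert that skips violations (in Oracle, typically a MERGE). A sketch of the idea with Python's sqlite3 and INSERT OR IGNORE; the column subset here is assumed for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The unique constraint over the identifying columns is what makes
# repeated runs of the load harmless.
conn.execute("""CREATE TABLE su_edit_detail (
                    edit_function TEXT, code_begin TEXT, code_end TEXT,
                    UNIQUE (edit_function, code_begin, code_end))""")

rows = [("TEEN_PREG", "630", "679"),
        ("TEEN_PREG", "630", "679")]  # same row arriving twice
for r in rows:
    conn.execute("INSERT OR IGNORE INTO su_edit_detail VALUES (?, ?, ?)", r)

count = conn.execute("SELECT COUNT(*) FROM su_edit_detail").fetchone()[0]
print(count)  # 1
```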
Remove duplicates while loading data from text file
Hi,
Data in a text file (which sometimes has duplicates) is being loaded into an Oracle 9i database using Informatica. To improve performance, we would like to remove duplicates at the time of each load using an Oracle procedure. Could you please help me with this?
Thanks,
Lakshmi

No, our table doesn't have that. Most of the functionality is managed at the Informatica level. Is there any other way? Thanks,
-
Dear All,
I have oracle 10g R2 On windows.
I have table structure like below...
ASSIGNED_TO
USER_ZONE
CREATED
MASTER_FOLIO_NUMBER
NAME
A_B_BROKER_CODE
INTERACTION_ID
INTERACTION_CREATED
INTERACTION_STATE
USER_TEAM_BRANCH
A4_IN_CALL_TYPE
A5_IN_CALL_SUBTYPE
DNT_AGING_IN_DAYS
DNT_PENDING_WITH
DNT_ESCALATION_STAGE_2
DT_UPDATE

I use sql loader to load the data from a .csv file into the oracle table and have assigned the value of dt_update to sysdate. Every time I execute the sql loader control file, dt_update is set to sysdate.
Sometimes a problem occurs while inserting data through sql loader and half the rows get inserted. After solving the problem I execute sql loader again, and hence these duplicate records get inserted.
Now I want to remove all the duplicate records for those whose dt_update is the same.
Please help me to solve the problem
Regards,
Chanchal Wankhade.

Galbarad wrote:
Hi
I think you have two ways:
first - if it is the first import into your table - you can delete all records from the table and run the import one more time
second - you can delete all duplicate records and not run the import
try this script
<pre>
delete from YOUR_TABLE
where rowid in (select min(rowid)
from YOUR_TABLE
group by ASSIGNED_TO,
USER_ZONE,
CREATED,
MASTER_FOLIO_NUMBER,
NAME,
A_B_BROKER_CODE,
INTERACTION_ID,
INTERACTION_CREATED,
INTERACTION_STATE,
USER_TEAM_BRANCH,
A4_IN_CALL_TYPE,
A5_IN_CALL_SUBTYPE,
DNT_AGING_IN_DAYS,
DNT_PENDING_WITH,
DNT_ESCALATION_STAGE_2,
DT_UPDATE)
</pre>

Have you ever tried that script for deleting duplicates? I think not. If you did, you'd find it deleted non-duplicates too. You'd also find that it only deletes the first duplicate where there are duplicates.
XXXX> CREATE TABLE dt_test_dup
2 AS
3 SELECT
4 mod(rownum,3) id
5 FROM
6 dual
7 CONNECT BY
8 level <= 9
9 UNION ALL
10 SELECT
11 rownum + 3 id
12 FROM
13 dual
14 CONNECT BY
15 level <= 3
16 /
Table created.
Elapsed: 00:00:00.10
XXXX> select * from dt_test_dup;
ID
1
2
0
1
2
0
1
2
0
4
5
6
12 rows selected.
Elapsed: 00:00:00.18
XXXX> delete
2 from
3 dt_test_dup
4 where
5 rowid IN ( SELECT
6 MIN(rowid)
7 FROM
8 dt_test_dup
9 GROUP BY
10 id
11 )
12 /
6 rows deleted.
Elapsed: 00:00:00.51
XXXX> select * from dt_test_dup;
ID
1
2
0
1
2
0
6 rows selected.
Elapsed: 00:00:00.00 -
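The rebuttal's point - IN (SELECT MIN(rowid) ...) deletes one row of every group, duplicates or not, while NOT IN keeps exactly one row per group - is easy to verify. A small, illustrative Python sqlite3 check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (2,), (3,)])

# Wrong: "WHERE rowid IN (SELECT MIN(rowid) ... GROUP BY id)" would delete
# the FIRST row of EVERY group, including the non-duplicates 2 and 3.
# Right: NOT IN keeps one row per id and deletes everything else.
conn.execute("""DELETE FROM t WHERE rowid NOT IN
                (SELECT MIN(rowid) FROM t GROUP BY id)""")

ids = sorted(r[0] for r in conn.execute("SELECT id FROM t"))
print(ids)  # [1, 2, 3]
```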
Removing duplicate values from selectOneChoice bound to List Iterator
I'm trying to remove duplicate values from a selectOneChoice that i have. The component binds back to a List Iterator on the pageDefinition.
I have a table on a JSF page with 5 columns; the table is bound to a method iterator on the pageDef. Then above the table, there are 5 separate selectOneChoice components each one of which is bound to the result set of the table's iterator. So this means that each selectOneChoice only contains vales corresponding to the columns in the table which it represents.
The selectOneChoice components are part of a search facility and allow the user to select values from them to restrict the results that are returned. The concept is fine and it works. However, if I have repeating values in the selectOneChoice (which is inevitable given it's bound to the table column result set), then I need to remove them. I can remove null values or empty strings using expression language in the rendered attribute as shown:
<af:forEach var="item"
items="#{bindings.XXXX.items}">
<af:selectItem label="#{item.label}" value="#{item.label}"
rendered="#{item.label != ''}"/>
</af:forEach>
But i dont know how i can remove duplicate values easily. I know i can programatically do it in a backing bean etc.... but i want to know if there is perhaps some EL that might do it or another setting that ADF gives which can overcome this.
Any help would be appreciated.
Kind Regards

Hi,
It'll be a little difficult to remove duplicates while keeping the context as it is with existing standard functions. Removing duplicates irrespective of context changes can be done with the available functions. Please try this UDF code, which may help you...
source>sort>UDF-->Target
The execution type of the UDF is 'All values of a context'.
public void UDF(String[] var1, ResultList result, Container container) throws StreamTransformationException {
    ArrayList aList = new ArrayList();
    aList.add(var1[0]);
    result.addValue(var1[0]);
    for (int i = 1; i < var1.length; i++) {
        if (aList.contains(var1[i])) {
            continue;
        } else {
            aList.add(var1[i]);
            result.addValue(var1[i]);
        }
    }
}
Regards,
Priyanka -
Hi
i am removing duplicate records while importing bulk data into the table... I am checking some columns; when they are the same, i remove the old records. i have used the following code to remove duplicates...
execute immediate 'DELETE FROM test1 WHERE ROWID IN (SELECT rid FROM (SELECT ROWID AS rid, ROW_NUMBER() OVER (PARTITION BY c1,c2 ORDER BY 1) row_no FROM test1) WHERE row_no > 1)';
here i check the c1 and c2 columns... if they are the same, the old records are to be deleted... but with this code, the new records are deleted. can anyone say how to remove the old duplicate records?
Vally

Hi
> i am removing duplicate records while importing bulk data into the table
What do you mean by "while"?
During the process of importing (read: inserting) - you want to delete duplicate records?
As you say in the following, you have C1 and C2 - using both of them you find the duplicates.
I deem you have other columns besides C1 and C2. And these columns have different fields in the NEW record and the OLD record - then why don't you use an UPDATE statement?
...I am checking for some
columns...when they are same, i am removing the old
records...i have used the following code to remove
duplicates...
You should clarify on what criteria you separate old records from new records and place that condition in your query.
E.g. you have a field DATE_OF_ENTRY
and the latest one is the new record which shouldn't be deleted
then you would be able to put it into your delete statement:
DELETE FROM test1
WHERE ROWID IN (SELECT rid
                FROM (SELECT ROWID AS rid,
                             ROW_NUMBER() OVER(PARTITION BY c1, c2 ORDER BY DATE_OF_ENTRY desc) row_no
                      FROM test1)
                WHERE row_no > 1)
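The keep-the-newest variant with ORDER BY DATE_OF_ENTRY DESC can be exercised the same way. An illustrative Python sqlite3 sketch (window functions need SQLite 3.25+; the date column is the hypothetical one from the reply):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test1 (c1 TEXT, c2 TEXT, date_of_entry TEXT)")
conn.executemany("INSERT INTO test1 VALUES (?, ?, ?)", [
    ("a", "x", "2024-01-01"),   # old duplicate -> should go
    ("a", "x", "2024-02-01"),   # newest -> should stay
    ("b", "y", "2024-01-15"),
])

# Ordering DESC inside each (c1, c2) group makes the newest row rn = 1,
# so deleting rn > 1 removes the old duplicates.
conn.execute("""
    DELETE FROM test1
    WHERE rowid IN (
        SELECT rid FROM (
            SELECT rowid AS rid,
                   ROW_NUMBER() OVER (PARTITION BY c1, c2
                                      ORDER BY date_of_entry DESC) AS rn
            FROM test1)
        WHERE rn > 1)
""")

rows = conn.execute("SELECT c1, date_of_entry FROM test1 ORDER BY c1").fetchall()
print(rows)  # [('a', '2024-02-01'), ('b', '2024-01-15')]
```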
Mapping To Oracle Table, but no output after transform
Hi, All
I want to transform some records into an Oracle table and have hit a very weird issue. I use a transform in the odx; this is the mapping:
name by name
and I add an output before the mapping; this is the output xml:
<InforProd xmlns="">
- <InforProdRow xmlns="">
<PROD_CD>00000013 JOBST</PROD_CD>
<PROD_DESCRP>Description</PROD_DESCRP>
<STD_COST>100.0000</STD_COST>
<SELL_PRICE>1000.0000</SELL_PRICE>
<UOM>11.000000000000</UOM>
<USERSTRING_0>213020900030</USERSTRING_0>
<USERSTRING_1>SN</USERSTRING_1>
<USERSTRING_2>这是中文</USERSTRING_2>
<USERSTRING_3>Active</USERSTRING_3>
<USERSTRING_4>1</USERSTRING_4>
<USERSTRING_5>2</USERSTRING_5>
<USERSTRING_6>3</USERSTRING_6>
<USERSTRING_7>4</USERSTRING_7>
<USERSTRING_8>5</USERSTRING_8>
<LOADER_REF>7</LOADER_REF>
<LastModifiedDate>2014-01-01T00:00:00</LastModifiedDate>
<LastUpdatedDate>2014-01-01T00:00:00</LastUpdatedDate>
</InforProdRow>
</InforProd>
After the transform, I get no result.
<ns0:Insert xmlns:ns0="....">
- <ns0:RECORDSET>
- <ns0:IMP_RAW_PRODRECORDINSERT>
<ns0:PROD_CD />
<ns0:PROD_DESCRP />
<ns0:STD_COST />
<ns0:SELL_PRICE />
<ns0:UOM />
<ns0:USERSTRING_0 />
<ns0:USERSTRING_1 />
<ns0:USERSTRING_2 />
<ns0:USERSTRING_3 />
<ns0:USERSTRING_4 />
<ns0:USERSTRING_5 />
<ns0:USERSTRING_6 />
<ns0:USERSTRING_7 />
<ns0:USERSTRING_8 />
<ns0:LOADER_REF />
</ns0:IMP_RAW_PRODRECORDINSERT>
</ns0:RECORDSET>
</ns0:Insert>
This is the instance I get using Generate Instance on the schema.
<ns0:InforProd xmlns:ns0="">
- <ns0:InforProdRow>
<ns0:PROD_CD>PROD_CD_0</ns0:PROD_CD>
<ns0:PROD_DESCRP>PROD_DESCRP_0</ns0:PROD_DESCRP>
<ns0:STD_COST>10.4</ns0:STD_COST>
<ns0:SELL_PRICE>10.4</ns0:SELL_PRICE>
<ns0:UOM>UOM_0</ns0:UOM>
<ns0:USERSTRING_0>USERSTRING_0_0</ns0:USERSTRING_0>
<ns0:USERSTRING_1>USERSTRING_1_0</ns0:USERSTRING_1>
<ns0:USERSTRING_2>USERSTRING_2_0</ns0:USERSTRING_2>
<ns0:USERSTRING_3>USERSTRING_3_0</ns0:USERSTRING_3>
<ns0:USERSTRING_4>USERSTRING_4_0</ns0:USERSTRING_4>
<ns0:USERSTRING_5>USERSTRING_5_0</ns0:USERSTRING_5>
<ns0:USERSTRING_6>USERSTRING_6_0</ns0:USERSTRING_6>
<ns0:USERSTRING_7>USERSTRING_7_0</ns0:USERSTRING_7>
<ns0:USERSTRING_8>USERSTRING_8_0</ns0:USERSTRING_8>
<ns0:LOADER_REF>10.4</ns0:LOADER_REF>
<ns0:LastModifiedDate>1999-05-31T13:20:00.000-05:00</ns0:LastModifiedDate>
<ns0:LastUpdatedDate>1999-05-31T13:20:00.000-05:00</ns0:LastUpdatedDate>
</ns0:InforProdRow>
</ns0:InforProd>
it has <ns0:>, but my schema input doesn't have it. Does it matter? I did the same thing with SQL Server and the map result was correct.
can anyone help?

Hi Neal,
It won't matter much to have the <ns0> prefix added to your map output.
We can remove the ns0 prefix simply by setting the schema's elements property (or both the elements and attributes properties) to be unqualified. To do that, follow my steps:
1- Open your schema
2- Right-click <Schema> and select Properties
3- In the schema property editor set [Element FormDefault] to Unqualified, and then set [Attribute FormDefault] to Unqualified if you are using attributes in your schema.
You can look at below links
http://geekswithblogs.net/dmillard/archive/2004/10/20/12935.aspx
http://www.techtalkz.com/microsoft-biztalk-server/295831-why-biztalk-2006-piut-ns0-prefix-part-my-xml-even-if-i-dont-wantit-plz-some1-explain-me.html