Remove duplicates based on a condition

Hi all,
I need help on a query to remove duplicates based on a condition.
E.g. My table is
FE CC DATE FLAG
FE1 CC1 10/10 FB
FE1 CC1 9/10 FB
FE1 CC1 11/10 AB
FE1 CC2 9/10 AB
FE1 CC2 10/10 FB
FE1 CC2 11/10 AB
I want to remove all duplicate rows on FE and CC based on the below condition :
DATE <MAX(DATE) WHERE FLAG='FB'
That means I want to remove the row FE1 CC1 9/10 FB
but not the rows
FE1 CC1 10/10 FB
and
FE1 CC1 11/10 AB
as only the row FE1 CC1 9/10 FB has date <MAX(DATE) WHERE FLAG='FB'.
Similarly I want to keep
FE1 CC2 10/10 FB
FE1 CC2 11/10 AB
but not
FE1 CC2 9/10 AB
Many thanks.

Hi,
Do you want to DELETE rows from the table, or just not show some rows in the output? Since you're talking about a "query", rather than a "DELETE statement", I'll assume you want to leave those rows in the table, but not show them in the output.
Here's one way:
WITH got_r_num AS
(
    SELECT  fe, cc, dt, flag
    ,       RANK () OVER ( PARTITION BY  fe, cc, flag
                           ORDER BY      dt        DESC
                         )          AS r_num
    FROM    table_x
)
SELECT  fe
,       cc
,       TO_CHAR (dt, 'fmMM/YY')     AS dt
,       flag
FROM    got_r_num
WHERE   flag  != 'FB'
OR      r_num  = 1
;
If you'd care to post CREATE TABLE and INSERT statements for your sample data, then I could test it.
This assumes that the column you called DATE (which is not a good column name, so I called it dt) is a DATE, and that you are displaying it in MM/YY format.
This also assumes that dt and flag are never NULL.
If I guessed wrong about these things, then the query can be changed; it will just be a little messier.
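Worth noting: the OP's CC2 example also drops an early non-FB row (FE1 CC2 9/10 AB), which suggests comparing every row's date against its group's latest FB date rather than filtering FB rows alone. Here's a sketch of that condition, using SQLite (whose window functions are close enough to Oracle's for this purpose); the ISO dates are made-up stand-ins for the 10/10-style values in the post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_x (fe TEXT, cc TEXT, dt TEXT, flag TEXT)")
con.executemany("INSERT INTO table_x VALUES (?, ?, ?, ?)", [
    ("FE1", "CC1", "2010-10-10", "FB"),
    ("FE1", "CC1", "2010-10-09", "FB"),
    ("FE1", "CC1", "2010-10-11", "AB"),
    ("FE1", "CC2", "2010-10-09", "AB"),
    ("FE1", "CC2", "2010-10-10", "FB"),
    ("FE1", "CC2", "2010-10-11", "AB"),
])

# Keep a row only if its date is not older than the latest FB date
# in its (fe, cc) group; groups with no FB row at all are kept whole.
kept = con.execute("""
    SELECT fe, cc, dt, flag
    FROM (
        SELECT fe, cc, dt, flag,
               MAX(CASE WHEN flag = 'FB' THEN dt END)
                   OVER (PARTITION BY fe, cc) AS max_fb_dt
        FROM table_x
    )
    WHERE max_fb_dt IS NULL OR dt >= max_fb_dt
    ORDER BY cc, dt
""").fetchall()
```

If no FB row exists for a (fe, cc) pair, max_fb_dt is NULL and that group is left untouched.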

Similar Messages

  • Best way to remove duplicates based on multiple tables

    Hi,
    I have a mechanism which loads flat files into multiple tables (can be up to 6 different tables) using external tables.
    Whenever a new file arrives, I need to insert duplicate rows to a side table, but the duplicate rows are to be searched in all 6 tables according to a given set of columns which exist in all of them.
In the SQL Server version of the same mechanism (which I'm migrating to Oracle), it uses an additional "UNIQUE" table with only 2 columns (Checksum1, Checksum2) which hold the checksum values of 2 different sets of columns per inserted record. When a new file arrives it computes these 2 checksums for every record and looks them up in the unique table to avoid searching all the different tables.
    We know that working with checksums is not bulletproof but with those sets of fields it seems to work.
    My questions are:
    should I use the same checksums mechanism? if so, should I use the owa_opt_lock.checksum function to calculate the checksums?
    Or should I look for duplicates in all tables one after the other (indexing some of the columns we check for duplicates with)?
    Note:
    These tables are partitioned with day partitions and can be very large.
    Any advice would be welcome.
    Thanks.

    >
    I need to keep duplicate rows in a side table and not load them into table1...table6
    >
    Does that mean that you don't want ANY row if it has a duplicate on your 6 columns?
    Let's say I have six records that have identical values for your 6 columns. One record meets the condition for table1, one for table2 and so on.
    Do you want to keep one of these records and put the other 5 in the side table? If so, which one should be kept?
    Or do you want all 6 records put in the side table?
    You could delete the duplicates from the temp table as the first step. Or better
    1. add a new column WHICH_TABLE NUMBER to the temp table
    2. update the new column to -1 for records that are dups.
    3. update the new column (might be done with one query) to set the table number based on the conditions for each table
    4. INSERT INTO TABLE1 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 1
    INSERT INTO TABLE6 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 6
    When you are done the WHICH_TABLE will be flagged with
    1. NULL if a record was not a DUP but was not inserted into any of your tables - possible error record to examine
    2. -1 if a record was a DUP
    3. 1 - if the record went to table 1 (2 for table 2 and so on)
    This 'flag and then select' approach is more performant than deleting records after each select. Especially if the flagging can be done in one pass (full table scan).
    See this other thread (or many, many others on the net) from today for how to find and remove duplicates
    Best way of removing duplicates
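The checksum lookup described above can be sketched language-neutrally (Python here; zlib.crc32 stands in for whatever checksum function is chosen, owa_opt_lock.checksum or otherwise, and the names route and which_table are mine, mirroring the flag-then-select steps):

```python
import zlib

def checksums(rec, cols_a, cols_b):
    """Compute the (Checksum1, Checksum2) pair over two column sets."""
    ck = lambda cols: zlib.crc32("|".join(str(rec[c]) for c in cols).encode())
    return (ck(cols_a), ck(cols_b))

def route(records, cols_a, cols_b, seen):
    """Mark each record -1 (duplicate) or None (to be routed to a table later).

    `seen` plays the role of the "UNIQUE" table of checksum pairs.
    """
    which_table = []
    for rec in records:
        pair = checksums(rec, cols_a, cols_b)
        if pair in seen:
            which_table.append(-1)      # step 2: flag dups
        else:
            seen.add(pair)
            which_table.append(None)    # step 3 would set the real table number
    return which_table
```

As the thread says, checksums are not bulletproof: two different records can collide, so a real implementation should verify candidate matches against the actual columns.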

  • How to dynamically add/remove a button from the ribbon based on some condition? (Ribbon XML)

    Hi,
    I have a ribbon (done using ribbon XML) with menu options. I need to remove few buttons from the menu dynamically based on some condition. Also, I want to change the label of another button. How to achieve this programmatically? (C#)
    Thanks in advance.
    Thanks Prasad

    Hello Prasad,
Use callbacks for populating Ribbon controls such as menu, dropDown, gallery, etc. Then you can use the Invalidate or InvalidateControl methods of the IRibbonUI interface to get your callbacks invoked when required. Thus, you will be able to delete the required item(s).
    You will find the following articles in MSDN helpful:
    Chapter 11: Creating Dynamic Ribbon Customizations (1 of 2)
    Chapter 11: Creating Dynamic Ribbon Customizations (2 of 2)
    To change the label of your controls at runtime you need to use the getLabel callback and call the Invalidate or InvalidateControl methods of the IRibbonUI interface. The following series of articles describe the Fluent UI in depth:
    Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3)
    Customizing the 2007 Office Fluent Ribbon for Developers (Part 2 of 3)
    Customizing the 2007 Office Fluent Ribbon for Developers (Part 3 of 3)

  • Delete duplicate based on conditions

    Hi,
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
    Need to delete duplicate rows from Oracle table.
    Source
    EMPID NAME CIY
    1001 A MACH1
    1001 A MACH2
    1002 B MAH1
    1002 C MAH1
    1002 A MAH1
    1003 X MACH12
    1003 X MACH1
    Output:
    I just want to delete duplicate based on CIY='MAH1'
    EMPID NAME CIY
    1001 A MACH1
    1001 A MACH2
    1002 B MAH1
    1003 X MACH12
    1003 X MACH1
I tried with this query but it actually considers all the duplicates in the table ......
DELETE FROM test00 WHERE rowid NOT IN (SELECT max(rowid) FROM test00 WHERE ITEM='A' GROUP BY item)
    Thanks in advance
    Ananda.

    Something like this ->
    SCOTT>
    SCOTT>select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    PL/SQL Release 10.2.0.3.0 - Production
    CORE    10.2.0.3.0      Production
    TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    Elapsed: 00:00:00.04
    SCOTT>
    SCOTT>
    SCOTT>create table ananda
      2  as
      3    (
      4       select 1001 EMPID, 'A' N_AME, 'MACH1' CITY from dual
      5       union all
      6       select 1001, 'A', 'MACH2' from dual
      7       union all
      8       select 1002, 'B', 'MAH1' from dual
      9       union all
    10       select 1002, 'C', 'MAH1' from dual
    11       union all
    12       select 1002, 'A', 'MAH1' from dual
    13       union all
    14       select 1003, 'X', 'MACH12' from dual
    15       union all
    16       select 1003, 'X', 'MACH1' from dual
    17    );
    Table created.
    Elapsed: 00:00:00.18
    SCOTT> 
    SCOTT>
    SCOTT>select * from ananda;
         EMPID N CITY
          1001 A MACH1
          1001 A MACH2
          1002 B MAH1
          1002 C MAH1
          1002 A MAH1
          1003 X MACH12
          1003 X MACH1
    7 rows selected.
    Elapsed: 00:00:00.09
    SCOTT>
    SCOTT>
    SCOTT>delete from ananda
      2  where city = 'MAH1' 
      3  and   rowid NOT IN (
      4                        select rr
      5                        from ( 
      6                                select rowid rr,
      7                                      empid,
      8                                      n_ame,
      9                                      city,
    10                                      row_number() over(partition by city order by empid) rn
    11                                from ananda
    12                                where city = 'MAH1'
    13                            )
    14                        where rn =1
    15                  );
    2 rows deleted.
    Elapsed: 00:00:00.10
    SCOTT>
    SCOTT>select * from ananda;
         EMPID N CITY
          1001 A MACH1
          1001 A MACH2
          1002 B MAH1
          1003 X MACH12
          1003 X MACH1
    Elapsed: 00:00:00.09
SCOTT>
Regards.
    Satyaki De.
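The same delete (restrict the dedup to one CITY value, keep one survivor per group) can be reproduced outside Oracle for experimentation; a sketch in Python with SQLite, whose rowid stands in for Oracle's ROWID:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ananda (empid INTEGER, n_ame TEXT, city TEXT)")
con.executemany("INSERT INTO ananda VALUES (?, ?, ?)", [
    (1001, "A", "MACH1"), (1001, "A", "MACH2"), (1002, "B", "MAH1"),
    (1002, "C", "MAH1"), (1002, "A", "MAH1"),
    (1003, "X", "MACH12"), (1003, "X", "MACH1"),
])

# Among city='MAH1' rows, keep only the first by empid order and delete
# the rest; rows for every other city are untouched.
con.execute("""
    DELETE FROM ananda
    WHERE city = 'MAH1'
      AND rowid NOT IN (
            SELECT rr FROM (
                SELECT rowid AS rr,
                       ROW_NUMBER() OVER (PARTITION BY city
                                          ORDER BY empid) AS rn
                FROM ananda
                WHERE city = 'MAH1'
            ) WHERE rn = 1
      )
""")
left = con.execute("SELECT empid, n_ame, city FROM ananda").fetchall()
```

Since all three MAH1 rows share empid 1002, the ORDER BY leaves which one survives up to the engine; ordering by ROWID (or a timestamp column) instead would make the choice deterministic.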

  • Search for records in the event viewer after the last run (not the entire event log), remove duplicate - Output Logon type for a specific OU users

    Hi,
    The following code works perfectly for me and give me a list of users for a specific OU and their respective logon types :-
    $logFile = 'c:\test\test.txt'
    $_myOU = "OU=ABC,dc=contosso,DC=com"
    # LogonType as per technet
$_logontype = @{
    2 = "Interactive"
    3 = "Network"
    4 = "Batch"
    5 = "Service"
    7 = "Unlock"
    8 = "NetworkCleartext"
    9 = "NewCredentials"
    10 = "RemoteInteractive"
    11 = "CachedInteractive"
}
Get-WinEvent -FilterXml "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[System[(EventID=4624)]]</Select><Suppress Path=""Security"">*[EventData[Data[@Name=""SubjectLogonId""]=""0x0"" or Data[@Name=""TargetDomainName""]=""NT AUTHORITY"" or Data[@Name=""TargetDomainName""]=""Window Manager""]]</Suppress></Query></QueryList>" -ComputerName "XYZ" | ForEach-Object {
    #TargetUserSid
    $_cur_OU = ([ADSI]"LDAP://<SID=$(($_.Properties[4]).Value.Value)>").distinguishedName
    If ( $_cur_OU -like "*$_myOU" ) {
        $_cur_OU
        #LogonType
        $_logontype[ [int] $_.Properties[8].Value ]
        #Time-created
        $_.TimeCreated
        $_.Properties[18].Value
    }
} >> $logFile
I am able to pipe the results to a file; however, I would like to convert it to CSV/HTML. When I try the "ConvertTo-HTML" function it converts certain values. Also:
a) I would like to remove duplicate entries when the script runs, only for that execution.
b) When the script is run, we may be able to search for records after the last run and not search in the same records that we have looked into before.
PLEASE HELP!

    If you just want to look for the new events since the last run, I suggest to record the EventRecordID of the last event you parsed and use it as a reference in your filter. For example:
    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">*[System[(EventID=4624 and
    EventRecordID>46452302)]]</Select>
        <Suppress Path="Security">*[EventData[Data[@Name="SubjectLogonId"]="0x0" or Data[@Name="TargetDomainName"]="NT AUTHORITY" or Data[@Name="TargetDomainName"]="Window Manager"]]</Suppress>
      </Query>
    </QueryList>
That's the logic that the Server Manager of Windows Server 2012 uses to save time, CPU and bandwidth. The problem is how to get that number and provide it to your next run. You can store it in a file and read it at the beginning. If not found, you can go through the whole event list.
    Let's say you store it in a simple text file, ref.txt
    1234
    At the beginning just read it.
Try {
    $_intMyRef = [int] (Get-Content .\ref.txt)
}
Catch {
    Write-Host "The reference EventRecordID cannot be found." -ForegroundColor Red
    $_intMyRef = 0
}
This is a very lazy check. You can do proper parsing etc... That's a quick and dirty way. If I can read it and parse it as an integer, I use it. Else, I just set it to 0, meaning I'll collect all info.
Then include it in your filter. Your Get-WinEvent becomes:
    Get-WinEvent -FilterXml "<QueryList><Query Id=""0"" Path=""Security""><Select Path=""Security"">*[System[(EventID=4624 and EventRecordID&gt;$_intMyRef)]]</Select><Suppress Path=""Security"">*[EventData[Data[@Name=""SubjectLogonId""]=""0x0"" or Data[@Name=""TargetDomainName""]=""NT AUTHORITY"" or Data[@Name=""TargetDomainName""]=""Window Manager""]]</Suppress></Query></QueryList>"
    At the end of your script, store the last value you got into your ref.txt file. So you can for example get that info in the loop. Like:
    $Result += $LogonRecord
    $_intLastId = $Event.RecordId
    And at the end:
    Write-Output $_intLastId | Out-File .\ref.txt
    Then next time you run it, it is just scanning the delta. Note that I prefer this versus the date filter in case of the machine wasn't active for long or in case of time sync issue which can sometimes mess up with the date based filters.
    If you want to go for a date filtering, do it at the Get-WinEvent level, not in the Where-Object. If the query is local, it doesn't change much. But in remote system, it does the filter on the remote side therefore you're saving time and resources on your
    side. So for example for the last 30 days, and if you want to use the XMLFilter parameter, you can use:
    <QueryList>
    <Query Id="0" Path="Security">
    <Select Path="Security">*[System[TimeCreated[timediff(@SystemTime) &lt;= 2592000000]]]</Select>
    </Query>
    </QueryList>
    Then you can combine it, etc...
    PS, I used the confusing underscores because I like it ;)
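The ref.txt checkpoint idea above reduces to a language-neutral pattern (sketched in Python; the function names are mine): read the last processed EventRecordID, keep only newer events, then persist the new high-water mark for the next run.

```python
import os
import tempfile

def read_checkpoint(path):
    """Return the stored record id, or 0 if the file is missing/unparseable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0   # no usable checkpoint: scan everything

def process_new(events, path):
    """events: (record_id, payload) pairs; return payloads newer than the checkpoint."""
    last = read_checkpoint(path)
    fresh = [(rid, data) for rid, data in events if rid > last]
    if fresh:
        with open(path, "w") as f:
            f.write(str(max(rid for rid, _ in fresh)))
    return [data for _, data in fresh]

path = os.path.join(tempfile.mkdtemp(), "ref.txt")
first = process_new([(1, "a"), (2, "b")], path)    # no checkpoint yet: sees everything
second = process_new([(1, "a"), (2, "b")], path)   # checkpoint is now 2: nothing new
```

As the reply notes, keying on the record id rather than a timestamp sidesteps clock-sync and long-idle-machine problems.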

  • How to remove duplicates

    Hi
    i am removing duplicate records while importing bulk data into the table...I am checking for some columns...when they are same, i am removing the old records...i have used the following code to remove duplicates...
    execute immediate 'DELETE FROM test1 WHERE ROWID IN (SELECT ROWID FROM (SELECT ROWID,ROW_NUMBER() OVER (PARTITION BY c1,c2 ORDER BY 1) row_no FROM test1)WHERE row_no > 1)';
    here i check c1 and c2 columns...if they are same the old records are to be deleted...but in this code, the new records are deleted..can anyone say how to remove old duplicate records?
    Vally

Hi
>
i am removing duplicate records while importing bulk data into the table
>
What do you mean by "while"? During the process of importing (read: inserting), you want to delete duplicate records?
As you say in the following, you have C1 and C2 - using both of them you find duplicates.
I deem you have other columns besides C1 and C2. And these columns have different fields in the NEW record and the OLD record - then why don't you use an UPDATE statement?
>
...I am checking for some columns...when they are same, i am removing the old records...i have used the following code to remove duplicates...
>
You should clarify on what criteria you separate old records from new records and place this condition in your query.
    E.g. you have a field DATE_OF_ENTRY
    and the latest one is the new record which shouldn't be deleted
    then you would be able to put it into your delete statement:
    DELETE FROM test1
    WHERE ROWID IN (SELECT ROWID
                       FROM (SELECT ROWID,
                                    ROW_NUMBER() OVER(PARTITION BY c1, c2 ORDER BY DATE_OF_ENTRY desc) row_no
                               FROM test1)
                      WHERE row_no > 1)

  • Remove duplicate entries from dropdownlist in web dynpro abap

    How to remove duplicate entries from dropdownlist in web dynpro abap? Can someone please help me
    I have maintained the data in the z table wherein the records of particular fields are repeated but when i show that record in the Web Dynpro application dropdown list, the user should only be able to view the unique data for selection of that particular field.

Hi,
try this code in the init method.
Use DELETE ADJACENT DUPLICATES (the table must be sorted first, since only adjacent duplicates are removed):
<set the table>
SELECT <f1> FROM <table> INTO TABLE <itab> WHERE <condition>.
SORT <itab> BY <f1>.
DELETE ADJACENT DUPLICATES FROM <itab> COMPARING <f1>.
lo_nd_vbap->bind_table( new_items = <itab> set_initial_elements = abap_true ).
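The ABAP pattern (sort, then drop adjacent duplicates) is generic; a quick sketch in Python, where itertools.groupby plays the role of DELETE ADJACENT DUPLICATES and the function name is mine:

```python
from itertools import groupby

def unique_by_key(rows, key):
    """Sort by key, then keep the first row of each run of equal keys."""
    rows = sorted(rows, key=key)   # SORT <itab> BY <f1>
    # groupby only groups *adjacent* equal keys, exactly like the ABAP statement
    return [next(grp) for _, grp in groupby(rows, key=key)]
```

Because sorted() is stable, ties keep their original relative order, so "first" means first-seen within each key.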

  • Remove Duplicate Rows in Numbers 09

    Is there a way to remove duplicate rows in Numbers 09? For example I have 2 Tables and the values of Column A are mainly the same, but there are definitely a few dozen unique values in 1 table which are not in table 2 and visa versa. I'd like to make a new table with a column A with all of the values, but with duplicates removed so that I can then compare the values of a different column based on the value of Column A for each table.

I copied Tableau 1 and Tableau 2 into Tableau 3, then in cell E2 of Tableau 3 I entered the formula:
=COUNTIF($A$1:$A1,"="&A2)
Using Fill Down, I filled column E.
I get 0 if the value in column A is unique (its first occurrence);
I get 1 (or higher) if the value appears several times above.
Sort upon column E,
then delete the rows whose cell E is not 0.
    Yvan KOENIG (VALLAURIS, France.) samedi 22 août 2009 11:08:15
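Yvan's COUNTIF trick generalizes to any running count of prior occurrences; a sketch in Python (the function name is mine): a mark of 0 means first occurrence, anything higher is a duplicate to delete.

```python
from collections import Counter

def mark_duplicates(values):
    """For each value, count how many times it appeared in the rows ABOVE it."""
    seen = Counter()
    marks = []
    for v in values:
        marks.append(seen[v])   # analogue of =COUNTIF($A$1:$A1,"="&A2)
        seen[v] += 1
    return marks
```

Keeping only the rows marked 0 leaves exactly one copy of each value, which is what the sort-and-delete step achieves in Numbers.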

  • Remove Duplicate Music Files, But Keep iTunes Copy

    How can you remove duplicate copies of music files on your hard drive but be sure to keep the one iTunes is using and keeping the iTunes Library intact?
    I have hundred of duplicate music files, using a tool like CleanGenius or others I can get a list of duplicate files, but I do not want to break iTunes Library and only want to remove the duplicates that are not being used in iTunes.

I have the same issue. If I have a file in my music folder and then click on it and it opens in iTunes, it creates a duplicate file in my iTunes library. Besides sorting in Finder and deleting one by one based upon folder location, is there a less mind-numbing way to accomplish this task?
    Or do I have to pay the $10 for a program like Gemini?

  • Remove duplicates without using Sort OR Script compnent OR Staging ?

Team, can someone advise on how we go about removing duplicates without using either of the options quoted in the subject line? My source is a huge flat file.
    Thanks in advance !
    Rajkumar Yelugu

    I think you can do like this
    1. Add a Data Flow Task with flat file source
    2. Add a multicast to flat file source
    3. Join an output from Multicast to Aggregate transform and group by your required field and take min or max over a unique valued column(s) (id or date or primary key)
    4. Add a Merge join transform and  add the multicast output and Aggregate Transform outputs as sources. join on the group by fields from aggregate and include min/max column also in output
    5. Add a conditional split and define an output as unique valued column(s) >(<) Min/Max value from aggregate
    6. Join the defined output to your destination to get only distinct records from the duplicate sets
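Steps 1-6 boil down to: aggregate a min over a unique column per group, join back, and keep only the matching rows. A compact sketch of that logic in Python (the names are mine):

```python
def distinct_by(rows, key, uniq):
    """Keep one survivor per duplicate set: the row with the min unique value."""
    mins = {}
    for r in rows:                      # the Aggregate transform: min(uniq) per key
        k = key(r)
        mins[k] = min(mins.get(k, uniq(r)), uniq(r))
    # the Merge Join + Conditional Split: survivor iff uniq == group min
    return [r for r in rows if uniq(r) == mins[key(r)]]
```

The Multicast in the SSIS design exists only because the same source feeds both the aggregate and the join; in code, iterating the list twice does the same job.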

  • Script for removing duplicates in Mail

I have thousands of saved messages, of which a great many are duplicates. I have been able to eliminate many of them using the script provided by Andreas Amann http://homepage.mac.com/aamann/, but large numbers of others resist the script because they are not "quite" identical. For example, if one looks in the .emlx files, one may find that one has charset="iso-8859-1" and the other has charset=iso-8859-1 without quotation marks. No simple script is going to catch things like that.
    I don't know the cause, but one problem is that I often receive an inbox item from the Exchange server that can't be read: the message appears in the viewer, but won't open. Then, seconds or minutes later, a valid version will pop up and, perhaps, the bad one will disappear. However, I have a rule that automatically copies inbox messages to a local folder. As a result, I have two items in that folder (both of which, by the way, I can usually read). Over time, this means thousands of duplicates.
    Another symptom for at least some of the duplicates is that what appear to be duplicates appear in the viewer with variations of my name, such as John Smith in one and Smith, John in the other. I can imagine that this is due to message-handling protocols in my company's Exchange-based system, but actually have no clue. Further, not everyone in the company that uses Mail seems to have this problem.
    Ideas?
    P.S., why can't Apple have a Remove Duplicates feature, as it does in Address Book?

    I've written a script called DeDuper which can help remove unwanted duplicates. See this thread for background. I can't necessarily claim it is the best, but it is certainly free.
    I have another script called FindTracks which attempts to do just that.
    tt2

  • How do I remove duplicate software from listings

How do I remove duplicate software entries from report runs? Ironically, the Novell iFolder client is listed three times on reports because it detects every folder of the package.
    How do I fix/ignore these duplicate values?
    See attachment - a PC software listing

    Originally Posted by pcwoodring
How do I remove duplicate software entries from report runs? Ironically, the Novell iFolder client is listed three times on reports because it detects every folder of the package.
    How do I fix/ignore these duplicate values?
    See attachment - a PC software listing
    As usual I answer my own questions. I created a Local Software Product based on an .htm file in the Program Files folder that never changes with the version we have. Lame.

  • Removing duplicates finally fixed!

    Why doesn't apple incorporate the ability to remove duplicates automatically. There is no way I am going to hand select over 300 songs to remove the duplicates. Make a feature (a code based query) that will remove any duplicates, minus 1.

    I'm sure that's possible (and there are scripted solutions that will do exactly what you want).  The bigger issue, though, is what do you mean by a "duplicate"?  iTunes currently has two functions:
    View > Show Duplicate Items will show all cases where a song exists in your library with the same Artist and Name fields - therefore you may get different edits, mixes, live vs. studio versions, of the same song.  I understand that some people may want to eliminate at least some of these, but how would a fully-automated version know which songs you want to keep and which to get rid of in this example?
    In extreme cases, you could have many songs that would be shown in the results of the View > Show Duplicate Items query yet this would represent an absolutely correct library with nothing that the user (me, in this case) would regard as a "duplicate":
    The highlight tracks here show a case where the same "song" (Artist and Name matched) can occur within the same album - quite correctly.
SHIFT View > Show Exact Duplicate Items is much more restrictive, in that as well as Artist and Name it will only show songs that also have identical values for Album, Disc Number and Track Number.  Unlike the first case, where duplicates may be entirely valid, anything shown by this second function is likely to be an error. This is an area in which iTunes could maybe offer an automated function, though there are still questions that would need to be resolved.  For example, such a duplicate may be reported if:
    There are two entries in the iTunes database that point to the same media file
    There are two entries in the iTunes database that point to different media files - still the same song/recording but they could have different filenames or be in different locations
    The biggest barrier to an automated de-duplication function within iTunes, though, is that unless it offered a host of user options there's a very significant risk that it would delete duplicates but not the ones that you want to remove - and where the de-duplication process also involves file deletion this is not easy to support a robust Undo function for.  The other factor is that the second case of duplication (exactly the same song occurring more than once in your library) is almost always the result of some kind of user error, or user misunderstanding of how iTunes works and manages the content of its library.  iTunes is complex enough (far too complex, in some people's opinion) without adding functionality that addresses the consequence of misuse.

  • Best app to remove duplicates from iTunes 2014

    Hi All,
    I've been trying to research the best application to sort and remove duplicates from my iTunes library. I have over 7000 songs and iTunes built in duplicate finder doesn't look at the track fingerprint, which is useful for those songs which are labelled "Track_1" etc.
    Has anyone reviewed any recent products? I was looking at TuneUp, but after reading so many negative comments, I've decided not to go down that path. I would prefer a program that did most of the work for me, due to the amount of songs. Happy to pay for a good product...
I do have MusicBrainz Picard, which has done a great job of tagging, but doesn't remove duplicates.
    Thanks in advance :-)

TuneUp is a great app. When they moved from version 2 to version 3 is when it went to crap and all heck broke loose. They shut their doors, but they have since reopened and gone back to developing version 2. I use that version and I am pretty happy with it as an overall cleanup utility. I also use MusicBrainz and a couple of other utilities, but in the end if you have an enormous library (20k plus) then you are going to have a few slip through. I would probably go with TuneUp if I were you, plus a thorough third-party duplicate finder. Dupe Guru's music edition seems to do a pretty good job.

  • Removing duplicate values from selectOneChoice bound to List Iterator

    I'm trying to remove duplicate values from a selectOneChoice that i have. The component binds back to a List Iterator on the pageDefinition.
I have a table on a JSF page with 5 columns; the table is bound to a method iterator on the pageDef. Then above the table, there are 5 separate selectOneChoice components, each one of which is bound to the result set of the table's iterator. So this means that each selectOneChoice only contains values corresponding to the column in the table which it represents.
The selectOneChoice components are part of a search facility and allow the user to select values from them and restrict the results that are returned. The concept is fine and it works. However if i have repeating values in the selectOneChoice (which is inevitable given it's bound to the table column result set), then i need to remove them. I can remove null values or empty strings using expression language in the rendered attribute as shown:
    <af:forEach var="item"
    items="#{bindings.XXXX.items}">
    <af:selectItem label="#{item.label}" value="#{item.label}"
    rendered="#{item.label != ''}"/>
    </af:forEach>
    But i dont know how i can remove duplicate values easily. I know i can programatically do it in a backing bean etc.... but i want to know if there is perhaps some EL that might do it or another setting that ADF gives which can overcome this.
    Any help would be appreciated.
    Kind Regards

Hi,
It'll be a little difficult removing duplicates while keeping the context as it is with existing standard functions. Removing duplicates irrespective of context changes, we can do with available functions. Please try this UDF code, which may help you...
source --> sort --> UDF --> Target
The execution type of the UDF is "All Values of a Context".
public void UDF(String[] var1, ResultList result, Container container) throws StreamTransformationException {
    // Pass each value through once, skipping any value already seen.
    ArrayList aList = new ArrayList();
    aList.add(var1[0]);
    result.addValue(var1[0]);
    for (int i = 1; i < var1.length; i++) {
        if (aList.contains(var1[i])) {
            continue;
        } else {
            aList.add(var1[i]);
            result.addValue(var1[i]);
        }
    }
}
    Regards,
    Priyanka
