Dedupe a comma list

I have a string of values in a comma-delimited list. Is there
a quick and easy way to find exact duplicates and remove all but
one?
I am sure I could sit down and come up with a CFLOOP/IF-fest,
but wondered if there was already something out there.

Jonathan,
Everything I've seen and used involved loops, structs and the
like. I did find this UDF on CFlib.org:
http://www.cflib.org/udf.cfm?ID=1275
It's pretty straightforward and might save you a little
typing!
Cheers,
Craig
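For reference, the first-occurrence-wins dedupe that such a UDF performs can be sketched in a few lines; this is a hedged illustration of the general technique, not the actual CFLib code:

```python
def dedupe_list(value, delimiter=","):
    """Remove exact duplicates from a delimited list, keeping the first occurrence of each item."""
    seen = set()
    result = []
    for item in value.split(delimiter):
        if item not in seen:
            seen.add(item)
            result.append(item)
    return delimiter.join(result)

print(dedupe_list("a,b,a,c,b"))  # a,b,c
```

Note this treats items case-sensitively and keeps the original order, which is usually what you want when rebuilding the list.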

Similar Messages

  • Output in a comma list

    Hi all,
    I need to change the output of the below query.
    SQL> select department_id,employee_id from employees order by department_id;
    DEPARTMENT_ID EMPLOYEE_ID
               10         200
               20         201
               20         202
               30         114
               30         119
               30         115
               30         116
               30         117
               30         118
               40         203
               50         198
               50         199
               50         120
               50         121
               50         122
               50         123
               50         124
               50         125
               50         126
               50         127
               50         128
               50         129
               50         130
               50         131
               50         132
               50         133
               50         134
               50         135
               50         136
               50         137
               50         138
               50         139
               50         140
               50         141
               50         142
               50         143
               50         144
               50         180
               50         181
               50         182
               50         183
               50         184
               50         185
               50         186
               50         187
               50         188
               50         189
               50         190
               50         191
               50         192
               50         193
               50         194
               50         195
               50         196
               50         197
               60         104
               60         103
               60         107
               60         106
               60         105
               70         204
               80         176
               80         177
               80         179
               80         175
               80         174
               80         173
               80         172
               80         171
               80         170
               80         169
               80         168
               80         145
               80         146
               80         147
               80         148
               80         149
               80         150
               80         151
               80         152
               80         153
               80         154
               80         155
               80         156
               80         157
               80         158
               80         159
               80         160
               80         161
               80         162
               80         163
               80         164
               80         165
               80         166
               80         167
               90         101
               90         100
               90         102
              100         110
              100         108
              100         111
              100         112
              100         113
              100         109
              110         206
              110         205
                          178
    107 rows selected.
    Now I need to change this output in the following manner:
    DEPARTMENT_ID EMPLOYEE_ID
               10         200
               20         201,202
               30         114,119,115,116,117,118
               40         203
    The employee IDs should be in a comma-separated list instead of one per row.

    rajavu1 wrote:
    You can use Sys_Connect_By_path also as given below.
    SQL> SELECT deptno,
                LTRIM(MAX(SYS_CONNECT_BY_PATH(empno,',')),',') AS empnolist
         FROM   (SELECT deptno,
                        empno,
                        ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY empno) AS curr,
                        ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY empno) - 1 AS prev
                 FROM   emp)
         GROUP BY deptno
         CONNECT BY prev = PRIOR curr AND deptno = PRIOR deptno
         START WITH curr = 1;
    DEPTNO EMPNOLIST
    30 7499,7521,7654,7698,7844,7900
    20 7369,7566,7876,7902
    10 7782,7839
    SQL>
    Urgh! Misuse of MAX and GROUP BY in a hierarchical query, and an unnecessary extra row_number calculation.
    SQL> ed
    Wrote file afiedt.buf
    select deptno
          ,trim(',' from sys_connect_by_path(empno,',')) as empnolist
    from  (select deptno
                 ,empno
                 ,row_number() over (partition by deptno order by empno) as rn
           from emp)
    where connect_by_isleaf = 1
    connect by deptno = prior deptno and rn = prior rn + 1
    start with rn = 1
    SQL> /
        DEPTNO EMPNOLIST
            10 7782,7839,7934
            20 7369,7566,7788,7876,7902
            30 7499,7521,7654,7698,7844,7900
    In 10g you can use CONNECT_BY_ISLEAF to return only the leaf nodes of the hierarchy, and when comparing row numbers it is easier to just add 1 to the prior value in the CONNECT BY clause than to calculate an extra row_number value.
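Outside SQL, this collapse (one comma-separated list per group key) is a plain group-and-join; a rough Python sketch over a few invented (department, employee) rows, assuming the rows arrive sorted by the group key:

```python
from itertools import groupby
from operator import itemgetter

# (dept_id, emp_id) pairs, pre-sorted by dept_id as groupby requires
rows = [(10, 200), (20, 201), (20, 202), (30, 114), (30, 119)]

collapsed = {
    dept: ",".join(str(emp) for _, emp in grp)
    for dept, grp in groupby(rows, key=itemgetter(0))
}
print(collapsed)  # {10: '200', 20: '201,202', 30: '114,119'}
```

The SQL approaches above do the same thing inside the database, which is usually preferable for large result sets.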

  • Commit on thousands of records

    Hello,
    I've encountered the following problem while trying to update records in an Oracle 8i database :
    I have a Java program that updates thousands of records from a flat file to the Oracle database. The "commit" command is done at the end of the program. The problem is that some records are not updated in the database, but no exception is raised!
    If I try to do a commit after each update, the problem seems to be solved, but of course it takes more time to do the massive update, and I think committing after each record is not recommended?
    Is there a limit to how much a single commit can cover (a maximum number of records to be updated)?
    Thanks greatly for your help!
    Regards,
    Carine

    If it was a problem with the size of the rollback segments, you would have received an error.
    But are you sure that you don't have any neglected errors (like a WHEN OTHERS that does no handling)? In that case you wouldn't receive any error and no rollback would be performed (but a commit instead), "saving" your already-done modifications.
    In the book "Expert One-on-One" by Thomas Kyte, there is a chapter on what exactly a commit does.
    A small extract:
    Basically a commit has a fairly flat response time, because 99.9 percent of the work is already done before you commit:
    - you have already generated the rollback segment records in the SGA
    - modified data blocks have been generated in the SGA
    - buffered redo for the above two items has been generated in the SGA
    - depending on the size of the above three, and the amount of time spent, some combination of the above data may have been flushed to disk already
    - all locks have been acquired
    When you commit, all that is left is the following:
    - generate an SCN (system change number) for the transaction
    - LGWR writes all of the remaining buffered redo log entries to disk, and records the SCN in the online redo log files as well. This step is actually the commit: if this step occurs, we have committed. Our transaction entry is removed, and our record in the v$transaction view will "disappear".
    - all locks held by our session are released, and everyone who was enqueued waiting on locks we held is released
    - many of the blocks our transaction modified will be visited and "cleaned out" in a fast mode if they are still in the buffer cache
    Flushing the redo log buffer by LGWR is the lengthiest operation. To avoid a long wait, this flushing is done continuously as we are processing:
    - every three seconds
    - when the redo log buffer is one third or one MB full
    - upon any transaction commit
    For more information, do a search on asktom.oracle.com or read his book. But it must be clear that the commit in itself has no limit on processed rows.
    There's no limit re: commit. There is a limit on the number of rows that can be modified (update, delete, insert) in a transaction (i.e. between commits). It depends on rollback segment size (and other activity). This varies with each database (see your DBA).
    If you were hitting this limit it would normally roll back all changes to the last commit.
    Ken
    =======
    Hello Ken,
    Thanks a lot for this quick answer. The strange thing is that I do not get any error message concerning the rollback segment:
    if I commit at the end, after updating thousands of records, it seems to complete correctly, yet I see that some records have not been updated in the database (thus I would not be hitting the limit, as all changes would have been rolled back)?
    Is there a way to get a return status from the commit? Should I commit after every 1000 records, for example?
    Thanks again,
    Carine
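Carine's batching idea (commit every N records rather than per-row or once at the very end) can be sketched generically. This is a hedged illustration using sqlite3 as a stand-in for the Oracle connection; the table and column names are invented:

```python
import sqlite3

def batched_update(conn, records, batch_size=1000):
    """Insert records, committing every batch_size rows instead of per-row or all at once."""
    cur = conn.cursor()
    for i, (key, val) in enumerate(records, start=1):
        cur.execute("INSERT INTO t (k, v) VALUES (?, ?)", (key, val))
        if i % batch_size == 0:
            conn.commit()          # bound the size of any one transaction
    conn.commit()                  # commit the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER, v TEXT)")
batched_update(conn, [(i, "x") for i in range(2500)], batch_size=1000)
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2500
```

The trade-off Ken describes still applies: smaller batches mean more commit overhead but smaller transactions; per-row commits are usually the worst of both worlds.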

  • Questions on StringTokenizer, trim(), and parsing.

    I'm doing an assignment that requires parsing, and I haven't had much practice parsing regular text from a file; I've mostly been parsing HTML. My question: if I'm parsing a line and I need two different types of information from it, should I just tokenize it twice or do it all at once, assuming the text format is always the same? For example:
    String input = "this is a test[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]";
    If parsed correctly with StringTokenizer, that would be 4 Strings ("this is a test") and 10 ints for the countdown of numbers. Since the Strings don't have a delimiter such as a comma, I can use the default delimiter, which is whitespace, but would that mean I have to parse the same String twice, since the numbers use "," as a delimiter? Also, should I worry about the whitespace separating the numbers after each comma? I wrote a small driver to test trim() using this, and both outputs were the same. This may be a dumb question, but if I call trim() it eliminates the whitespace, right? Therefore I can just set "," as my delimiter. The question is: why is my output the same for both Strings?
        String input = "this is a test[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]";
        String trimmed = input.trim();
        System.out.println(input);
        System.out.println("\n" + trimmed);
    SORRY if it's confusing; I'm trying not to reveal too much of the homework assignment and just get hints on parsing this efficiently. Thanks in advance.

    > similar example on how to parse out the numbers with
    > "," as a delimiter. thanks in advance
    The following is a simple recursive descent parser for a comma-delimited list of numbers, e.g. "(1, 2, 3, 5, 6)". The grammar parsed by this code is:
    START -> LPAREN LIST RPAREN
    LIST -> NUMBER TAIL
    TAIL -> COMMA LIST | Lambda
    The nonterminals NUMBER, LPAREN, RPAREN, and COMMA are defined by the regular expressions in the code.
    Lexical analysis is done by the function advanceToken(). It stores the next token in the variable "lookahead" for further processing. The parse tree is represented recursively through the functions start(), lparen(), rparen(), list(), number(), tail(), comma(), which match the corresponding symbols in the grammar. Finally, translation is done in the function number(). All it does is put the numbers it finds into the List intList. you can modify it to your needs.
    This code originally parsed simple arithmetic expressions, but it took only ten minutes to adapt it to parse lists of integers. It's not perfect and there are several obvious improvements that would speed up performance; however, the strength of the design is that it can easily be changed to suit a variety of simple parsing needs.
    import java.util.regex.*;
    import java.util.*;

    public class RDPParenList {
        private static final Pattern numberPat =
                        Pattern.compile("([1-9]\\d*)|0");
        public static final Object NUMBER = new Object();
        public static final Pattern commaPat = Pattern.compile(",");
        public static final Object COMMA = new Object();
        public static final Pattern lparenPat =
                        Pattern.compile("\\(");
        public static final Object LPAREN = new Object();
        public static final Pattern rparenPat =
                        Pattern.compile("\\)");
        public static final Object RPAREN = new Object();
        public static final Token NULLTOKEN = new Token(null, null);

        String input;
        String workingString = null;
        Token lookahead = NULLTOKEN;
        List intList = new ArrayList();

        /** Creates a new instance of RecursiveDescentParse */
        public RDPParenList(String input) {
            this.input = input;
        }

        public void parse() {
            workingString = input;
            advanceToken();
            start();
            if (!"".equals(workingString))
                error("Characters still remaining in input '" + workingString + "'");
        }

        private void advanceToken() {
            // calling advanceToken must give a token
            if ("".equals(workingString))
                error("End of input reached unexpectedly");
            // prune the old token, and whitespace...
            if (lookahead != NULLTOKEN) {
                int cutPoint = lookahead.symbol.length();
                while (cutPoint < workingString.length() &&
                        Character.isWhitespace(workingString.charAt(cutPoint))) {
                    ++cutPoint;
                }
                workingString = workingString.substring(cutPoint);
            }
            // Now check for the next token, starting with the null token...
            if ("".equals(workingString)) {
                lookahead = NULLTOKEN;
                return;
            }
            Matcher m = numberPat.matcher(workingString);
            if (m.lookingAt()) {
                lookahead = new Token(m.group(), NUMBER);
                return;
            }
            m = commaPat.matcher(workingString);
            if (m.lookingAt()) {
                lookahead = new Token(m.group(), COMMA);
                return;
            }
            m = lparenPat.matcher(workingString);
            if (m.lookingAt()) {
                lookahead = new Token(m.group(), LPAREN);
                return;
            }
            m = rparenPat.matcher(workingString);
            if (m.lookingAt()) {
                lookahead = new Token(m.group(), RPAREN);
                return;
            }
            error("Error during lexical analysis. Working string: '" +
                           workingString + "'");
        }

        private void start() {
            lParen(); list(); rParen();
        }

        private void lParen() {
            if (lookahead.attrib == LPAREN) {
                advanceToken();
                // OK. Do nothing...
            }
            else error("Error at token '" + lookahead.symbol + "' expected '('");
        }

        private void rParen() {
            if (lookahead.attrib == RPAREN) {
                advanceToken();
                // OK. Do nothing...
            }
            else error("Error at token '" + lookahead.symbol + "' expected ')'");
        }

        private void list() {
            number(); tail();
        }

        private void number() {
            if (lookahead.attrib == NUMBER) {
                // Do something with the number!
                try {
                    intList.add(new Integer(lookahead.symbol));
                }
                catch (NumberFormatException e) {
                    // This shouldn't happen if the lexer is working...
                    e.printStackTrace();
                    error("Unknown Error");
                }
                advanceToken();
            }
            else error("Error at token '" + lookahead.symbol + "' expected a number");
        }

        private void tail() {
            if (lookahead.attrib == COMMA) {
                comma(); list();
            }
            else {
                // Lambda production
            }
        }

        private void comma() {
            if (lookahead.attrib == COMMA) {
                advanceToken();
                // OK. Do nothing...
            }
            else error("Error at token '" + lookahead.symbol + "' expected ','");
        }

        private void error(String message) {
            System.out.println(message);
            System.exit(-1);
        }

        public static class Token {
            public Token(String symbol, Object attrib) {
                this.symbol = symbol;
                this.attrib = attrib;
            }
            public String symbol;
            public Object attrib;
        }

        public static void main(String[] args) {
            if (args.length == 0)
                return;
            System.out.println("\nParse String: " + args[0]);
            RDPParenList p = new RDPParenList(args[0]);
            p.parse();
            System.out.println("OK!");
        }
    }
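For the original "words plus bracketed numbers" string, a full grammar is overkill. Here is a hedged Python sketch of the two-pass split the thread discusses: split on "[" first, then handle each half with its own delimiter:

```python
def parse_line(line):
    """Split 'words[n, n, ...]' into the word list and the int list."""
    text, _, numbers = line.partition("[")
    words = text.split()  # default split handles whitespace-delimited words
    # split the number section on commas; int() tolerates the surrounding spaces
    nums = [int(tok) for tok in numbers.rstrip("]").split(",")]
    return words, nums

words, nums = parse_line("this is a test[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]")
print(words)  # ['this', 'is', 'a', 'test']
print(nums)   # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```

This also answers the trim() confusion: trim/strip only removes *leading and trailing* whitespace, so a string with no outer whitespace is unchanged by it.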

  • Explain the annotation used in a ______ListMaintenance.java

    HI,
    Can anyone explain the annotation used in a ______ListMaintenance.java (e.g. CurrencyListMaintenance.java)? What does each attribute map to in CCB and the DB?
    Like the ones marked in bold:
    @EntityListPageMaintenance
    ( service* = CILTCURP, modules={foundation}, entity = currency, program* = CIPTCURP,
    body = @DataElement (contents = { @FieldGroup ( *_name_* = SRCH-CRITERIA,
    contents = { @DataField (name = CURRENCY_CD)})
    , @ListField (name = CURRENCY_CD)}),
    lists = { @List (name = CURRENCY_CD, size = 50, program = CIPTCURL, *_constantName_* = CI-CONST-CT-MAX-COMM-LIST-COLL,
    baseCobolGroupName* = TCURL,
    body = @DataElement (contents = { @RowField (includeRCopybook = false, entity = currency, baseCobolGroupName = TCURT)}),
    headerFields* = { "CURRENCY_CD"
    , "LAST_CURRENCY_CD"})})
    -- I want to write a ListMaintenance.java; what are the mandatory attributes in the annotation?
    -- What steps are required to write a ListMaintenance.java?
    -- Where can we find the documentation for the annotation?

    The problem is that there are no definite tests for character encoding. A particular byte stream can be valid in any number of different encodings (even if the resulting characters are not correct). If the characters don't happen to include any above Unicode 127, then a UTF-8 stream is identical to the same characters in any number of other encodings.
    It's not just a matter of there being no code for it in the library; it's impossible to do with any certainty, and to do it even probabilistically you'd have to run the results through a multi-lingual spelling checker.
    If you just ask java.io to open a Reader without specifying an encoding it will assume the default encoding of your system.
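The ambiguity is easy to demonstrate: the same bytes can decode cleanly under several encodings, and nothing in the bytes says which one was intended. A small Python illustration (the sample word is arbitrary):

```python
data = "caf\u00e9".encode("utf-8")   # b'caf\xc3\xa9'

# Both decodes succeed -- the byte stream is "valid" either way,
# even though only one yields the intended text.
as_utf8 = data.decode("utf-8")       # the intended reading
as_latin1 = data.decode("latin-1")   # also valid, but mojibake
print(as_utf8)
print(as_latin1)
```

For pure-ASCII bytes the two readings are even identical, which is why no library can detect the encoding with certainty.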

  • A many-to-many relational problem (SQL and CFM)

    My question:
    How to get CFM to return a many-to-many relationship in one
    row using cfloop
    My table structure:
    Table A - Books
    BookID | BookName
    1 | Book One
    Table B -
    RelatingTable
    BookID | AuthorID
    1 | 60
    1 | 61
    Table C - Authors
    AuthorID | AuthorName
    60 | Bob
    61 | Joe
    My query:
    SELECT b.BookID, b.BookName, r.BookID, r.AuthorID, a.AuthorID, a.AuthorName
    FROM Books AS b
    INNER JOIN RelatingTable AS r ON r.BookID = b.BookID
    INNER JOIN Authors AS a ON a.AuthorID = r.AuthorID
    Output I am getting:
    b.BookID | b.BookName | r.BookID | r.AuthorID | a.AuthorID | a.AuthorName
    ---------|------------|----------|------------|------------|-------------
    1        | Book One   | 1        | 60         | 60         | Bob
    1        | Book One   | 1        | 61         | 61         | Joe
    I am using a UDF that turns my relationship into a comma list
    (authorlist), but the duplicates still come back in CFM because of
    the JOIN relationship.
    The code I am using in CFM:
    <cfloop query="rsBooksQuery">
    #b.BookName#, written by #authorlist#
    </cfloop>
    How ColdFusion is displaying my output:
    Book One, written by Bob, Joe
    Book One, written by Bob, Joe
    How I want my output displayed:
    Book One, written by Bob, Joe
    I need this to work in cfloop and not cfoutput! I know that
    you can use group in cfoutput, but for the conditions I am using
    this query under, it must be in a loop.
    The reason I keep the JOINs even though I have a UDF to
    create a comma list is that some of my CFM pages pass variables
    to the query to limit which books are displayed, for example
    &author=60 (which would display a list of Bob's books that
    includes the comma list).
    If you can suggest anything to help me I will be very
    thankful.

    > I need this to work in cfloop and not cfoutput! I know that you can use
    > group in CF output, but for the conditions I am using this query, it
    > must be in a loop.
    > If you can suggest anything to help me I will be very thankful.
    If you cannot use <cfoutput...> with its group feature, you need to
    recreate the functionality with <cfloop...>. You can create nested
    <cfloop...> tags that keep track of the changing group value. It takes
    more code, but that's what happens when one steps outside the bounds of
    the built-in functionality.
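The group-tracking idea can be sketched outside CFML as well; a minimal Python sketch, assuming rows shaped like the poster's joined query result (book name plus the pre-built author list):

```python
# Two rows for the same book, as produced by the JOIN
rows = [("Book One", "Bob, Joe"), ("Book One", "Bob, Joe")]

output = []
last_book = None
for book, authorlist in rows:
    if book != last_book:          # emit only when the group value changes
        output.append(f"{book}, written by {authorlist}")
        last_book = book

print(output)  # ['Book One, written by Bob, Joe']
```

This assumes the rows are ordered by the group column, exactly as cfoutput's group attribute does.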

  • Dynamic menu question

    HI,
    I am dynamically updating the run-time menu, such that the choices constantly keep changing. So how do I account for the handling of them, considering I do not know what they will be ahead of time (I know they will be numeric)? Do I need to use Get Menu Info? Mainly, what do I label the case statement with, since I will not know which submenu item will be selected?

    > I am dynamically updating the run-time menu, such that the choices
    > constantly keep changing. So how do I account for the handling of them
    > considering I do not know what they will be ahead of time (I know they
    > will be numeric)? Do I need to use Get Menu Info? Mainly what do I
    > label the case statement since I will not know that submenu item will
    > be selected?
    If I understand correctly, you have menus that are being
    built dynamically. The tags for the menu will be numeric
    strings. That means that one thing you can switch on is
    the string itself. You will have a case statement with
    the possible strings already programmed into it.
    Another option is to convert the selected menu string
    into a number and use a case with ranges or comma lists
    to divide up the items however you like.
    Greg McKaskle
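Outside LabVIEW, Greg's two options (switch on the tag string, or convert it to a number and use ranges) can be sketched like this; the tags and handler names are invented for illustration:

```python
def handle_menu(tag):
    """Dispatch a numeric menu tag: exact matches first, then ranges, then a default."""
    handlers = {"100": "open", "200": "save"}   # the tags known ahead of time
    if tag in handlers:
        return handlers[tag]
    n = int(tag)
    if 300 <= n <= 399:                         # a range case for dynamically added items
        return "recent-file"
    return "unknown"                            # default branch for anything unexpected

print(handle_menu("100"), handle_menu("350"), handle_menu("999"))
```

The default branch is the key: since the menu items change at run time, every dispatch needs a catch-all for tags that were not programmed in.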

  • Get Artists and show Artist Name as string

    Hi
    I'm using VS2013, EF6, WPF
    I have tables Media, MediaArtists, Artists
    MediaArtists is a bridge table that uses MediaId and ArtistId; this means each Media can have multiple Artists.
    dbContext db = new dbContext();
    mediumDataGrid.ItemsSource = db.Media.Select(m => new {
        m.MediaId,
        m.Title,
        m.LastPlayedDate,
        ????m.MediaArtists.Select(a => Artist)????
    }).ToList();
    I want to get the Artists' "Name" and show it in the grid as Artist1, Artist2, Artist3.
    Can you please let me know how to do that?
    thanks

    Hello Zorig,
    It seems that you are trying to perform a group-by-comma-list in LINQ. If your data is not large, you could use LINQ to Objects as below:
    var result = db.X.ToList().Select(x => new { XID = x.XID, XName = x.XName, YNames = string.Join(",", x.Y.Select(y => y.XName)) }).ToList();
    If the data is large, you could check the workaround from Emran Hussain:
    http://stackoverflow.com/a/25676673
    Or use a stored procedure for a complex query instead.
    Regards.

  • OpenLDAP fatal error

    Hi John,
    Today when we tried to start the services, OpenLDAP was not starting. This happened before too, and when we tried recovering the OpenLDAP database we got an error as follows:
    db_recover: Finding last valid log LSN: file: 4 offset 4778137
    db_recover: Recovery starting from [1][3285961]
    db_recover: txnid 80000025 commit record found, already on commit list
    db_recover: Recovery function for LSN 4 4480781 failed on backward pass
    db_recover: PANIC: invalid argument
    db_recover: PANIC: fatal region error detected: run recovery
    db_recover: PANIC: fatal region error detected: run recovery
    db_recover: PANIC: fatal region error detected: run recovery
    db_recover: PANIC: fatal region error detected: run recovery
    But when we tried to start it again, it started and everything was fine.
    But the users are unable to log in to the application. It says an error occurred, please check the log for details.
    Admin is able to log in. Could you please tell me how to take a backup of my OpenLDAP and make use of it?
    Could you please tell what could be the problem?
    Edited by: Sravan Ganti on Jun 1, 2009 1:30 PM

    Sorry, did you take a backup of the openldap folder when it was working correctly, before you had issues with it?
    If you did take a copy of it when it was working and put it back, you wouldn't even need to run the recovery.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • 10.1.3.3 ESB DB Adapter Performance Problem

    Hello,
    We are trying to update an Oracle database using the DB Adapter. Insertion into the database via the DBAdapter (and only with the DBAdapter) is slow: even transferring 50 records of ~1K each takes 5-6 seconds.
    Environment:
    Oracle SOA suite 10.1.3 with 10.1.3.3 Patch Applied
    AIX 5
    8 CPU & 20 GB RAM
    Our test setup.
    Tool:ESB & BPEL
    Inbound Adapter to read data from Oracle Table
    TransformActivity to convert source schema to destination schema
    Outbound Adapter to write data into same oracle table in the same machine. (This has performance problem).
    ESB Console shows much of total time spent in the Outbound Adapter activity.
    We also created a BPEL process to do the data transfer between Oracle Databases. Adapter statistics for outbound insert activity in BPEL console shows higher value under "Commit" listed under "Adapter Post Processing".
    If we read data using the DB adapter from an Oracle table and write it to a file using the File adapter, transfer of 10,000 records (~2K each) happens in 2 secs. Only writing into the database is taking a long time, and we are unsure why. Any help would be appreciated to solve this problem.
    We have modified the DB values stated by the Oracle documentation for performance improvement. We have done the JVM tuning. We tried using "UsesBatchWriting" and UseDirectSql=true. However there is no improvement.
    We also tried creating an outbound adapter which executes custom SQL. The custom SQL inserts 10,000 records into the destination table (insert into dest_table select * from source_table). There is no performance issue with this approach; the custom SQL executes in less than 2 seconds. Also we don't see any performance problem if we use any SQL client to update data in the same destination table. Only via the DB Adapter do we face this issue.
    Interestingly, in a different setup,a Windows machine with just 1CPU, 1GB RAM running 10.1.3 is able to transfer 10,000 records (~2K per record) to a different Oracle database over the network(with in LAN).
    Please let me know if you would like know setting of any parameter in the system.We would appreciate if any help can be provided to find where the bottleneck is.
    Thanks

    I'm presuming this is just merge and not insert.
    Do "alter system set sql_trace=true" and capture the trace files on the database. It's probably only waiting on a SQL*Net message from client, but we need to rule that out.
    dmstool should show you some of the activity inside the client; it may also be worth doing a truss on the Java process to see what syscalls it is waiting on.
    Also, are you up to MLR7, the latest ESB release?

  • CFPDF MERGE null null Error

    Hi All,
    I discovered a CFPDF bug this week, and thought I'd post the
    bug and a work-around.
    We had an app which had been using CFPDF to MERGE files since
    August (just after upgrading to CF8). We suddenly began
    experiencing errors this past weekend.
    The Error:
    DIAGNOSTICS: "null null <br>The error occurred on line
    45."
    STACKTRACE: "java.lang.NullPointerException at
    com.adobe.internal.pdftoolkit.services.manipulations.impl.PMMPages.clonePage(Unknown
    Source) [...]"
    The Cause:
    It turns out over the weekend, the company providing the PDFs
    had changed the layout slightly. Something about the way the new
    file had been saved was throwing off the CFPDF tag when trying to
    MERGE.
    Other Symptoms:
    - When using the IsPDFFile() function, the new file checked
    out.
    - Using CFPDF GETINFO on the new file yielded identical info
    as an old file.
    - CFPDF was able to successfully READ and WRITE the new file.
    - READing the new file into a variable, or MERGEing directly
    from the new file made no difference.
    - Using a comma-list of PDF vars, or using CFPDFPARAM tags
    made no difference.
    - Specifying the pages from the new file to be included in
    the MERGE made no difference.
    The Solution:
    The solution was surprising to me. We needed to simply
    re-save (or WRITE) the file while including the "saveoption"
    attribute and setting it to "linear".
    So before we attempt a merge, we first need something like
    this:
    <cfpdf action="write"
    destination="C:\website\file-one.pdf"
    source="C:\website\file-one.pdf" overwrite="yes"
    saveoption="linear">
    Then when we try our merge, it all works fine:
    <cfpdf action="merge" name="FinalOutput">
    <cfpdfparam source="C:\website\template.pdf">
    <cfpdfparam source="C:\website\file-one.pdf">
    </cfpdf>
    I hope this helps someone else out there.

    I have now submitted this as a bug, https://bugbase.adobe.com/index.cfm?event=bug&id=3546237

  • MOD Question

    I use the following code to display data in two columns :
    <cfoutput query="qryGet_Error">
    <td valign="top" align="left">
    #qryGet_Error.error_description#,
    </td>
    <cfif CurrentRow MOD 2 EQ 0>
    I put a comma at the end of the description so that each
    output would have a comma separator. How do I do it so that the
    last output value does not have a comma ?
    Thanks
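The trailing comma disappears if the separator goes between items instead of after each one; a hedged Python analogue of the list-based approach (the sample descriptions are invented):

```python
descriptions = ["Disk full", "Timeout", "Bad login"]

# Joining places the comma *between* items, so the last one never gets one.
print(", ".join(descriptions))  # Disk full, Timeout, Bad login

# The loop equivalent: prepend the separator on every non-first item.
out = ""
for i, d in enumerate(descriptions):
    out += (", " if i else "") + d
print(out)
```

In CFML terms the same effect comes from building the values into a list and letting the list's own delimiter do the separating, rather than appending "," after each output.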

    quote:
    Originally posted by:
    trojnfn
    I think I understand what you are doing here, but how do you
    prevent the code from breaking ?
    My list can contain a minimum of one value, or a maximum of
    ten values. If I have three per line, for three lines, the last
    line will only have one value. If I have four per line for two
    lines, the last line will have two values. And if I have two per
    line for five lines, the last line will also contain two values.
    Changing the value for the step will determine the number of
    columns (<td></td> sets) in your table row. Step="2"
    will give 2, step="3" will give 3, etc...
    Here is how my suggestion would work.
    <table ...>
    <CFLOOP from="1" to="#ListLen(myList)#" step="3"
    index="x">
    <tr>
    <td>#ListGetAt(myList, x)#</td>
    <td><CFIF x+1 LTE
    ListLen(myList)>#ListGetAt(myList,
    x+1)#<CFELSE> </CFIF></td>
    <td><CFIF x+2 LTE
    ListLen(myList)>#ListGetAt(myList,
    x+2)#<CFELSE> </CFIF></td>
    </tr>
    </CFLOOP>
    </table>
    If you change the step value in order to add columns, just
    copy/paste the last <td>...</td> line and change the
    x+2 for the new line to x+3 and you will be fine. If you need to
    set step="2", just remove the 3rd <td> line.
    This will produce a table like this:
    <table>
    <tr>
    <td>query data 1</td>
    <td>query data 2</td>
    <td>query data 3</td>
    </tr>
    <tr>
    <td>query data 4</td>
    <td>query data 5</td>
    <td>query data 6</td>
    </tr>
    <tr>
    <td>query data 7</td>
    <td>query data 8</td>
    <td>query data 9</td>
    </tr>
    <tr>
    <td>query data 10</td>
    <td> </td>
    <td> </td>
    </tr>
    </table>
    Since you created the list of values using the ValueList
    function, you will not have the commas (list delimiter)displayed.
    Just make sure the query data does not contain commas or it will
    alter the list.
    CR
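The row-chunking logic in the CFLOOP above (fixed column count, pad the final short row) can be sketched in Python; the names here are illustrative, not from the thread:

```python
# Split a flat list into rows of `step` items, padding the final short row,
# mirroring the CFLOOP/ListGetAt pattern.
def chunk_rows(values, step, pad=""):
    rows = []
    for start in range(0, len(values), step):
        row = values[start:start + step]
        row += [pad] * (step - len(row))  # pad the last row to full width
        rows.append(row)
    return rows

data = ["query data %d" % i for i in range(1, 11)]  # 10 values, as in the example
for row in chunk_rows(data, 3):
    print(row)
```

Changing `step` changes the column count, just as changing the CFLOOP `step` attribute does.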

  • How to get distinct values in a comma separated list of email addresses?

    Hi Friends,
    I have a cursor which fetches email address along with some other columns. More than one record can have same email address.
    Ex
    CURSOR C1 IS
    SELECT 1 Buyer,'XX123' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX223' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 1 Buyer,'XX124' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX224' PO, '[email protected]' Buyer_email from dual
    Now, I open the cursor, write the contents into a file, and also form a comma-separated list of buyer emails as follows:
    for cur_rec in c1
    LOOP
       -- write contents into a file
       l_buyer_email_list := l_buyer_email_list || cur_rec.buyer_email || ',';
    END LOOP;
    l_buyer_email_list := RTRIM(l_buyer_email_list, ',');
    The buyer email list will be like: '[email protected],[email protected],[email protected],[email protected]'
    In order to avoid duplicate email addresses in the list, I could store each value in a table type variable and check in each iteration whether the email already exists in the list.
    Is there a simpler way to achieve this?
    Regards,
    Sreekanth Munagala.

    If you are using Oracle version 11, you can use the LISTAGG function:
    with c as
    (SELECT 1 Buyer,'XX123' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX223' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 1 Buyer,'XX124' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX224' PO, '[email protected]' Buyer_email from dual)
    select buyer, listagg(buyer_email,',') within group (order by buyer)
    from c
    group by buyer
    order by buyer
    For prior versions:
    with c as
    (SELECT 1 Buyer,'XX123' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX223' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 1 Buyer,'XX124' PO, '[email protected]' Buyer_email from dual
    UNION ALL
    SELECT 2 Buyer,'XX224' PO, '[email protected]' Buyer_email from dual)
    select buyer, rtrim(xmlagg(xmlelement(e,buyer_email||',').extract('//text()')),',')
    from c
    group by buyer
    order by buyer
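For comparison, the group-wise concatenation that LISTAGG/XMLAGG perform can be sketched in Python, here with duplicates skipped per group (sample data copied from the cursor; the rest is illustrative):

```python
# Group rows by buyer and join each group's distinct emails with commas,
# like LISTAGG per group but with duplicates removed.
rows = [
    (1, "XX123", "[email protected]"),
    (2, "XX223", "[email protected]"),
    (1, "XX124", "[email protected]"),
    (2, "XX224", "[email protected]"),
]

grouped = {}
for buyer, _po, email in rows:
    bucket = grouped.setdefault(buyer, [])
    if email not in bucket:  # skip duplicates within the group
        bucket.append(email)

for buyer in sorted(grouped):
    print(buyer, ",".join(grouped[buyer]))
```

Note that plain LISTAGG does not deduplicate; the distinct step still has to happen somewhere (e.g. a `SELECT DISTINCT` in the source query).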

  • How can I use comma in the return values of a static list of values

    Hi all,
    I want to create a select list (static LOV) like the following:
    Display Value / Return Value
    both are "Y" / 'YY'
    one is "Y" / 'YN','NY'
    I write the List of values definition is like this:
    STATIC:both are "Y"; 'YY',one is "Y";'YN', 'NY'
    However, it is explain by htmldb like this:
    Display Value / Return Value
    both are "Y" / 'YY'
    one is "Y" / 'YN'
    / 'NY'
    I tried using "\" before the ",", or using single or double quote, but all these do not work:(
    How can I use a comma in the return values?
    Thanks very much!

    "Better still, why not process the code of both Y with 2Y and one is Y with 1Y? "
    Could you please explain in detail? thanks! I am quite new to htmldb
    In fact I have a table which has two columns "a1" and "a2"; the value of each column is "Y" or "N". And I want to choose the records where both a1 and a2 are "Y", or where just one of a1, a2 is "Y".
    So I write the report sql like this:
    "select * from t1 where a1 || a2 in(:MYSELECTLIST) "
    Thus, I need to use "," in the LOV, since the expression list in IN (...) is comma-separated.
    Any other way to implement this?
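The suggestion above (encode "both Y" and "one Y" as single codes instead of comma lists) can be illustrated outside SQL; the code names `2Y`/`1Y` follow the earlier reply, and the rest is hypothetical:

```python
# Filter rows by a single code instead of a comma-containing return value:
# "2Y" = both flags are Y, "1Y" = exactly one flag is Y.
rows = [("Y", "Y"), ("Y", "N"), ("N", "Y"), ("N", "N")]

def matches(a1, a2, code):
    if code == "2Y":
        return a1 == "Y" and a2 == "Y"
    if code == "1Y":
        return (a1 == "Y") != (a2 == "Y")  # exclusive-or on the two flags
    return False

print([r for r in rows if matches(*r, "1Y")])  # [('Y', 'N'), ('N', 'Y')]
```

In the report SQL this corresponds to translating the code back into the matching `a1 || a2` pairs, so the LOV return value never needs an embedded comma.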

  • Obtaining comma-separated list of text values associated with bitwise flag column

    In the table msdb.dbo.sysjobsteps, there is a [flags] column, which is a bit array with the following possible values:
    0: Overwrite output file
    2: Append to output file
    4: Write Transact-SQL job step output to step history
    8: Write log to table (overwrite existing history)
    16: Write log to table (append to existing history)
    32: Include step output in history
    64: Create a Windows event to use as a signal for the Cmd jobstep to abort
    I want to display a comma-separated list of the text values for a row. For example, if [flags] = 12, I want to display 'Write Transact-SQL job step output to step history, Write log to table (overwrite existing history)'.
    What is the most efficient way to accomplish this?

    Here is a query that gives the pattern:
    DECLARE @val int = 43
    ;WITH numbers AS (
       SELECT power(2, n) AS exp2 FROM (VALUES(0), (1), (2), (3), (4), (5), (6)) AS n(n)
    ), list(list) AS (
       SELECT
         (SELECT CASE WHEN exp2 = 1  THEN 'First flag'
                      WHEN exp2 = 2  THEN 'Flag 2'
                      WHEN exp2 = 4  THEN 'Third flag'
                      WHEN exp2 = 8  THEN 'IV Flag'
                      WHEN exp2 = 16 THEN 'Flag #5'
                      WHEN exp2 = 32 THEN 'Another flag'
                      WHEN exp2 = 64 THEN 'My lucky flag'
                 END + ', '
          FROM   numbers
          WHERE  exp2 & @val = exp2
          ORDER BY exp2
       FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')
    )
    SELECT substring(list, 1, len(list) - 1)
    FROM   list
    Here I'm creating the numbers on the fly, but it is better to have a table of numbers in your database. It can be used in many places, see here for a short discussion:
    http://www.sommarskog.se/arrays-in-sql-2005.html#numbersasconcept
    (Only read down to the next header.)
    The FOR XML PATH construct is the somewhat obscure way we create concatenated lists in SQL Server. There is not really any use in trying to explain how it works; it just works. The one thing to keep in mind is that it adds an extra comma at the end, and the final query strips it off.
    This query does not handle that 0 has a special meaning - that is left as an exercise to the reader.
    Erland Sommarskog, SQL Server MVP, [email protected]
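The same bit-to-text expansion can be sketched procedurally; the descriptions come from the sysjobsteps list in the question, and treating 0 as its own label is an assumption (the answer above leaves 0 as an exercise):

```python
# Decode a bit-flag value into a comma-separated list of descriptions,
# equivalent to the FOR XML PATH query above.
FLAGS = {
    2:  "Append to output file",
    4:  "Write Transact-SQL job step output to step history",
    8:  "Write log to table (overwrite existing history)",
    16: "Write log to table (append to existing history)",
    32: "Include step output in history",
    64: "Create a Windows event to use as a signal for the Cmd jobstep to abort",
}

def flag_names(value):
    if value == 0:  # 0 has a special meaning rather than being a bit
        return "Overwrite output file"
    return ", ".join(text for bit, text in sorted(FLAGS.items()) if value & bit)

print(flag_names(12))
# Write Transact-SQL job step output to step history, Write log to table (overwrite existing history)
```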

Maybe you are looking for