Query with duplicates

I have a query which basically returns one row (location_id) from the SELECT shown below:
SELECT location_id
FROM tasks
WHERE container_id = :container_id
AND task_id = (SELECT MIN(task_id)
               FROM tasks
               WHERE container_id = :container_id)
But my requirement is changing now: I have to select location_id in the main SELECT if the inner SELECT returns two or more task_ids, and select location_group in the main SELECT if the inner SELECT returns only one task_id.
I hope this clearly defines the question
Thanks in advance.

Like this?
SQL> create table tasks
  2  as
  3  select 1 task_id, 11 location_id, 21 location_group, 99 container_id from dual union all
  4  select 1, 12, 22, 99 from dual union all
  5  select 1, 13, 23, 99 from dual union all
  6  select 1, 14, 24, 99 from dual union all
  7  select 2, 15, 25, 98 from dual
  8  /
Table created.
SQL> var container_id number
SQL> exec :container_id := 99;
PL/SQL procedure successfully completed.
SQL> SELECT location_id
  2  FROM tasks
  3  WHERE container_id = :container_id
  4  AND task_id = (SELECT MIN(task_id)
  5  FROM tasks
  6  WHERE container_id = :container_id)
  7  /
                           LOCATION_ID
                                    11
                                    12
                                    13
                                    14
4 rows selected.
SQL> select case count_task_id
  2         when 1 then 'location_group'
  3         else 'location_id'
  4         end what
  5       , case count_task_id
  6         when 1 then location_group
  7         else location_id
  8         end location_id_or_group
  9    from ( select task_id
10                , location_id
11                , location_group
12                , count(task_id) over (partition by task_id) count_task_id
13                , min(task_id) over () min_task_id
14             from tasks
15            where container_id = :container_id
16         )
17   where task_id = min_task_id
18  /
WHAT                             LOCATION_ID_OR_GROUP
location_id                                        11
location_id                                        12
location_id                                        13
location_id                                        14
4 rows selected.
SQL> exec :container_id := 98;
PL/SQL procedure successfully completed.
SQL> select case count_task_id
  2         when 1 then 'location_group'
  3         else 'location_id'
  4         end what
  5       , case count_task_id
  6         when 1 then location_group
  7         else location_id
  8         end location_id_or_group
  9    from ( select task_id
10                , location_id
11                , location_group
12                , count(task_id) over (partition by task_id) count_task_id
13                , min(task_id) over () min_task_id
14             from tasks
15            where container_id = :container_id
16         )
17   where task_id = min_task_id
18  /
WHAT                             LOCATION_ID_OR_GROUP
location_group                                     25
1 row selected.

Regards,
Rob.
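
As a side note, the same result can also be written with a single analytic count over the rows that share the minimum task_id (a sketch using the same tasks table and bind variable as above):

select case when count(*) over () > 1 then location_id
            else location_group
       end location_id_or_group
  from tasks
 where container_id = :container_id
   and task_id = (select min(task_id)
                    from tasks
                   where container_id = :container_id);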

Similar Messages

  • Mysql query with duplicate results

    I have a db that someone else set up that I am querying. It
    should have been set up to use a couple of different tables... one
    for clients and one for subclients with foreign keys to identify.
    However, each row has both the client and subclient which means
    there are duplicate clients since a single client may have several
    subclients.
    So...
    My query returns multiple clients where I need only one. I am
    guessing I can handle this in a loop with AS2.0 but I can't figure
    out how to tell when a unique client is detected. Does this make
    sense? Do I need to set my query up differently?

    try {
        ResultSet rs = DB.exQuery("SELECT max(transaction_date) as lastest_date, transaction_value, transaction_type "
                                  + "FROM transaction_log WHERE user_id = " + user.getId() + " AND is_credit = '1'");
        if (rs.next()) {
            // Poster's question: is trans_date a String or a Date type, and
            // what type does max(transaction_date) as lastest_date return?
            trans_date   = rs.getString("lastest_date");
            credit       = rs.getString("transaction_value");
            trans_method = rs.getString("transaction_type");
        }
    } catch (SQLException ex) {}
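
    One way to avoid the duplicates at the source is to collapse them in the query itself (a sketch only; the table and column names are assumptions, since the real schema isn't posted):

        SELECT DISTINCT client_id, client_name   -- or GROUP BY client_id, client_name
        FROM clients_table
        ORDER BY client_name;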

  • Query produces duplicate values in successive rows. Can I null them out?

    I've had very good success (thanks, Andy) in getting detailed responses to my posted questions, and I'm wondering whether, in a report region join query that produces successive rows with duplicate values, there is a way to suppress (replace) the printing of those duplicate values. In case I've managed to twist the question into an unintelligible mess (one of my specialties), let me provide an example:
    We're trying to list the undergraduate institutions that an applicant has attended, and display information about dates of attendance, GPA, and major(s)/minor(s) taken. The rub is that there can be multiple major(s)/minor(s) for a given undergraduate institution, so the following is produced by the query:
    University of Hard Knox 01/02 01/06 4.00 Knitting
    University of Hard Knox 01/02 01/06 4.00 Cloth Repair
    Advanced University 02/06 01/08 3.75 Clothing Design
    Really Advanced U 02/08 01/09 4.00 Sports Clothing
    Really Advanced U 02/08 01/09 4.00 Tennis Shoe Burlap
    I want it to look like this:
    University of Hard Knox 01/02 01/06 4.00 Knitting
    Cloth Repair
    Advanced University 02/06 01/08 3.75 Clothing Design
    Really Advanced U 02/08 01/09 4.00 Sports Clothing
    Tennis Shoe Burlap
    (Edit) Please note that the Cloth Repair and Tennis Shoe Burlap rows would be correctly positioned in a table, but unfortunately got space-suppressed here for some reason.
    Under Andy's tutelage, I'd say the answer is probably JavaScript looping through the DOM looking for the innerHTML of specific TDs in the table, but I'd like to confirm that. Or does Apex provide a checkbox I can check that will produce the same results? Thanks in advance guys, and sorry for all the questions. We've been tasked to use Apex for our next project and to learn it by using it, since the training budget is non-existent this year. I love unfunded mandates ;)
    Phil
    Edited by: Phil McDermott on Aug 13, 2009 9:34 AM

    Hi Phil,
    Javascript is useful, as is the column break functionality within Report Attributes (which would be my first choice if possible).
    If you need to go past 3 columns, I would suggest doing something in the SQL statement itself. This means the sorting would probably have to be controlled, but it is doable.
    Here's a fairly old thread on the subject: Re: Grouping on reports (not interactive) - with an example here: http://htmldb.oracle.com/pls/otn/f?p=33642:112 (I've even used a custom template for the report, but you don't need to go that far!)
    This uses the LAG functionality within SQL to compare the values on one row to the values on the preceding row - being able to compare these values allows you to determine which ones to display or hide.
    Andy
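
    A minimal sketch of that LAG approach (the table and column names here are assumptions, not taken from the actual report query):

        SELECT CASE
                 WHEN school = LAG(school) OVER (ORDER BY school, start_date)
                 THEN NULL                                -- suppress the repeated value
                 ELSE school
               END AS school,
               start_date, end_date, gpa, major
        FROM   applicant_schools
        ORDER  BY school, start_date;

    The same CASE can be repeated for the other columns (dates, GPA) that should only print on the first row of each institution.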

  • SQL Report query with condition (multiple parameters) in apex item?

    Hello all,
    I have a little problem and can't find a solution.
    I need to create reports based on a SQL query or I.R. Nothing hard there.
    Then I need to add the WHERE clause dynamically with javascript from an Apex item.
    Again not very hard. I defined an Apex item, set my query like this "SELECT * FROM MYTAB WHERE COL1 = :P1_SEARCH" and then I call the page setting the P1_SEARCH value. For instance COL1 is rowid. It works fine.
    But here is my problem. Let's consider that P1_SEARCH will contain several rowids and that I don't know the number of those values
    (no, I won't create a lot of items and build a query with that many ORs!). I would like something like "SELECT * FROM MYTAB WHERE ROWID IN (:P1_SEARCH)" with something like ROWID1,ROWID2 in P1_SEARCH.
    I also tried 'ROWID1,ROWID2' and 'ROWID1','ROWID2',
    but I can't get anything other than a filter error. It works with IN with one value, but as soon as there are two values or more, it seems that Apex can't read the string.
    How could I do that, please?
    Thanks for your help.
    Max

    mnoscars wrote:
    But here is my problem. Let's consider that P1_SEARCH will contain several rowids and that I don't know the number of those values
    (no, I won't create a lot of items and build a query with that many ORs!). I would like something like "SELECT * FROM MYTAB WHERE ROWID IN (:P1_SEARCH)" with something like ROWID1,ROWID2 in P1_SEARCH.
    I also tried 'ROWID1,ROWID2' and 'ROWID1','ROWID2',
    but I can't get anything other than a filter error. It works with IN with one value, but as soon as there are two values or more, it seems that Apex can't read the string.
    For a standard report, see {message:id=9609120}
    For an IR (and improved security, avoiding the risk of SQL injection) use a collection (http://download.oracle.com/docs/cd/E17556_01/doc/apirefs.40/e15519/apex_collection.htm#CACFAICJ) containing the values in a column instead of a CSV list:

        SELECT * FROM MYTAB WHERE ROWID IN (SELECT c001 FROM apex_collections WHERE collection_name = 'P1_SEARCH')
    (Please close duplicate threads spawned by your original question.)
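
    One common pattern for the standard-report case (a sketch; the forum message referenced above may well use a different technique) is to treat the item as a delimited list and match each ROWID against it:

        SELECT *
        FROM   mytab
        WHERE  INSTR(',' || :P1_SEARCH || ',', ',' || ROWIDTOCHAR(ROWID) || ',') > 0;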

  • Need help in optimizing the query with joins and group by clause

    I am having a problem executing the query below; it is taking a lot of time. To simplify, I have added the two tables: FILE_STATUS stores the file load details, and COMM is the actual business commission table showing which records were successfully processed and which were transmitted to the other system. Records with status = 'T' have been transmitted to the other system, and transactions with 'P' are pending.
    CREATE TABLE FILE_STATUS
    (FILE_ID VARCHAR2(14),
    FILE_NAME VARCHAR2(20),
    CARR_CD VARCHAR2(5),
    TOT_REC NUMBER,
    TOT_SUCC NUMBER);
    CREATE TABLE COMM
    (SRC_FILE_ID VARCHAR2(14),
    REC_ID NUMBER,
    STATUS CHAR(1));
    INSERT INTO FILE_STATUS VALUES ('12345678', 'CM_LIBM.TXT', 'LIBM', 5, 4);
    INSERT INTO FILE_STATUS VALUES ('12345679', 'CM_HIPNT.TXT', 'HIPNT', 4, 0);
    INSERT INTO COMM VALUES ('12345678', 1, 'T');
    INSERT INTO COMM VALUES ('12345678', 3, 'T');
    INSERT INTO COMM VALUES ('12345678', 4, 'P');
    INSERT INTO COMM VALUES ('12345678', 5, 'P');
    COMMIT;
    Here is the query that I wrote to give me the details of the file that has been loaded into the system. It reads the file status and commission tables to show the file name, total records loaded, total records successfully loaded to the commission table, and the number of records that have finally been transmitted (status = 'T') to other systems.
    SELECT
        FS.CARR_CD
        ,FS.FILE_NAME
        ,FS.FILE_ID
        ,FS.TOT_REC
        ,FS.TOT_SUCC
        ,NVL(C.TOT_TRANS, 0) TOT_TRANS
    FROM FILE_STATUS FS
    LEFT JOIN (
        SELECT SRC_FILE_ID, COUNT(*) TOT_TRANS
        FROM COMM
        WHERE STATUS = 'T'
        GROUP BY SRC_FILE_ID
    ) C ON C.SRC_FILE_ID = FS.FILE_ID
    WHERE FILE_ID = '12345678';
    In production this query has more joins and is taking a lot of time to process. The main culprit for me is the join on the COMM table to get the count of transmitted transactions. Please can you give me tips to optimize this query to get results faster? Do I need to remove the GROUP BY and use a partition or something else? Please help!

    I get 2 rows if I use my query with your new criteria. Did you commit the record if you are using a second connection to query? Did you remove the criteria for file_id?
    select carr_cd, file_name, file_id, tot_rec, tot_succ, tot_trans
      from (select fs.carr_cd,
                   fs.file_name,
                   fs.file_id,
                   fs.tot_rec,
                   fs.tot_succ,
                   count(case
                            when c.status = 'T' then
                             1
                            else
                             null
                          end) over(partition by c.src_file_id) tot_trans,
                   row_number() over(partition by c.src_file_id order by null) rn
              from file_status fs
              left join comm c
                on c.src_file_id = fs.file_id
             where carr_cd = 'LIBM')
    where rn = 1;
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS
    LIBM    CM_LIBM.TXT          12345678                5          4          2
    LIBM    CM_LIBM.TXT          12345677               10          0          0
    Using RANK can potentially produce multiple rows to be returned, though your data may prevent this. ROW_NUMBER will always prevent duplicates. The ordering of the analytic function is irrelevant in your query if you use ROW_NUMBER. You can remove the outermost query and inspect the data returned by the inner query:
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           count(case
                    when c.status = 'T' then
                     1
                    else
                     null
                  end) over(partition by c.src_file_id) tot_trans,
           row_number() over(partition by c.src_file_id order by null) rn
    from file_status fs
    left join comm c
    on c.src_file_id = fs.file_id
    where carr_cd = 'LIBM';
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS         RN
    LIBM    CM_LIBM.TXT          12345678                5          4          2          1
    LIBM    CM_LIBM.TXT          12345678                5          4          2          2
    LIBM    CM_LIBM.TXT          12345678                5          4          2          3
    LIBM    CM_LIBM.TXT          12345678                5          4          2          4
    LIBM    CM_LIBM.TXT          12345677               10          0          0          1
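
    If the join on COMM really is the bottleneck, an index covering the join and filter columns may also help (a guess; verify against the actual execution plan before adding it):

        CREATE INDEX comm_src_status_ix ON comm (src_file_id, status);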

  • Bug in cursor behaviour with duplicates

    Dear Oracle guys and girls,
    first of all: it sucks that I HAVE to provide business information (company name, address, even phone number) even if I just want to participate in this forum for private reasons. I even have to "unsubscribe" from the newsletters although I never subscribed. Then I have to re-enter my timezone information and email address for the forum, because the settings in my profile are ignored. I think there's some room for improvement in this registration process.
    OK - back to topic. I think I found a bug in the cursor behaviour with duplicate keys. But the behaviour is very consistent, so maybe it's not a bug but a bad design. (I call it bad because it's totally unexpected and not logical to me.)
    I insert some dupes with DB_KEYFIRST; then I create a cursor and iterate over all items in reverse order (!) with DB_PREV (I also tried DB_PREV|DB_NEXT_DUP) - no keys are shown.
    Alternatively:
    I insert some dupes with DB_KEYLAST; then I create a cursor and iterate over all items in reverse order (!) with DB_NEXT (I also tried DB_NEXT|DB_NEXT_DUP) - no keys are shown.
    cursor->c_get returns the error code -30989 (DB_NOTFOUND).
    Why is it not possible to traverse duplicates in the reverse order? To me it looks like a bug.
    I tested against db 4.5.20.
    Regards
    Chris
    PS: I would love to hear if the bug I reported here: http://groups.google.com/group/comp.databases.berkeley-db/browse_thread/thread/ed471cf6837cb2a6/dd9cda0ad105f401#dd9cda0ad105f401
    will be fixed in the next version.
    Here's a test program:
    /* Reconstructed so it compiles: the includes, LOOPS and error() were
       missing from the original paste, and a few braces and pointer
       declarations were lost in formatting. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <db.h>

    #define LOOPS 10

    static void
    error(const char *what, int st)
    {
        fprintf(stderr, "%s: %s\n", what, db_strerror(st));
        exit(EXIT_FAILURE);
    }

    int
    main(int argc, char **argv)
    {
        unsigned i;
        int st;
        DB *db;
        DBT key, record;
        DBC *cursor, *cursor2;

        unlink("test.bdb");

        st = db_create(&db, 0, 0);
        if (st)
            error("db_create", st);
        st = db->set_flags(db, DB_DUP);
        if (st)
            error("db->set_flags", st);
        st = db->open(db, 0, "test.bdb", 0, DB_BTREE, DB_CREATE, 0);
        if (st)
            error("db->open", st);

        memset(&key, 0, sizeof(key));
        memset(&record, 0, sizeof(record));

        st = db->cursor(db, 0, &cursor, 0);
        if (st)
            error("db->cursor", st);
        st = db->cursor(db, 0, &cursor2, 0);
        if (st)
            error("db->cursor", st);

        /* insert the duplicates */
        for (i = 0; i < LOOPS; i++) {
            record.data = &i;
            record.size = sizeof(i);
            st = cursor->c_put(cursor, &key, &record, DB_KEYFIRST);
            st = cursor->c_put(cursor, &key, &record, DB_KEYLAST);
            if (st)
                error("cursor->c_put", st);
        }

        /* read them back with the second cursor */
        while (!(st = cursor2->c_get(cursor2, &key, &record, DB_NEXT)))
            printf("%d\n", *(int *)record.data);

        st = cursor->c_close(cursor);
        if (st)
            error("cursor->c_close", st);
        st = cursor2->c_close(cursor2);
        if (st)
            error("cursor2->c_close", st);
        st = db->close(db, 0);
        if (st)
            error("db->close", st);
        return (0);
    }

    st=cursor->c_put(cursor, &key, &record, DB_KEYFIRST);
    st=cursor->c_put(cursor, &key, &record, DB_KEYLAST);
    if (st)
    error("cursor->c_put", st);
    Please delete the first c_put line; it was a cut-and-paste error. As I said earlier: insert with KEYLAST, query with NEXT.

  • Please suggest a select query / sub query without using any subprograms or

    Source table: three columns ORIGIN, DESTINATION, MILES
    Origin      Destination   Miles
    Sydney      Melbourne      1000
    Perth       Adelaide       3000
    Canberra    Melbounre       700
    Melbourne   Sydney         1000
    Brisbane    Sydney         1000
    Perth       Darwin         4000
    Sydney      Brisbane       1000
    Output: three columns ORIGIN, DESTINATION, MILES
    Duplicate routes are to be ignored, so the output is:
    Origin      Destination   Miles
    Sydney      Melbourne      1000
    Perth       Adelaide       3000
    Canberra    Melbounre       700
    Brisbane    Sydney         1000
    Perth       Darwin         4000
    Please suggest a select query / sub query without using any subprograms or functions/pkgs to get the output table.

    Hi,
    user9368047 wrote:
    ... Please suggest a select query / sub query without using any subprograms or functions/pkgs to get the output table.
    Why? If the most efficient way to get the results you want involves using a function, why wouldn't you use it?
    Here's one way, without any functions:
    SELECT  a.*
    FROM             source_table  a
    LEFT OUTER JOIN  source_table  b  ON   a.origin      = b.destination
                                      AND  a.destination = b.origin
                                      AND  a.miles       = b.miles
    WHERE   b.origin  > a.origin    -- Not b.origin > b.origin
    OR      b.origin  IS NULL
    ;
    If you'd care to post CREATE TABLE and INSERT statements for your sample data, then I could test this.
    Edited by: Frank Kulash on Nov 6, 2012 7:39 PM
    Corrected WHERE clause after MLVrown (below)
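
    For anyone who wants to try it, here is a setup sketch built from the sample rows posted in the question (the table name source_table is assumed, matching the query above; spellings kept as posted):

        CREATE TABLE source_table (
          origin       VARCHAR2(20),
          destination  VARCHAR2(20),
          miles        NUMBER
        );

        INSERT INTO source_table VALUES ('Sydney',    'Melbourne', 1000);
        INSERT INTO source_table VALUES ('Perth',     'Adelaide',  3000);
        INSERT INTO source_table VALUES ('Canberra',  'Melbounre',  700);
        INSERT INTO source_table VALUES ('Melbourne', 'Sydney',    1000);
        INSERT INTO source_table VALUES ('Brisbane',  'Sydney',    1000);
        INSERT INTO source_table VALUES ('Perth',     'Darwin',    4000);
        INSERT INTO source_table VALUES ('Sydney',    'Brisbane',  1000);
        COMMIT;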

  • Hierarchical query with where clause

    Hi,
    How can I run a hierarchical query with a WHERE clause? I have a table with three fields: session_id, id and root_id.
    When I try the following query,
    select id, level from relation
    where session_id = 79977
    connect by prior id = root_id start with id = 5042;
    it returns duplicate values.
    I want the query to show the rows hierarchically, with a filter condition in the WHERE clause. Please help me with how I can achieve this. If you know any link that describes this in more detail, please send it.
    Thanks in Advance.
    Regards,
    -Parmy

    Hi Sridhar Murthy an others,
    Thanks a lot for your answer. It's working for me. It saved a lot of workarounds that would have been needed without proper knowledge of hierarchical queries. Please send me any link that describes these issues in detail. Also, as I mentioned in the other message, I assume the same cannot be achieved on views or on two different tables (???).
    Any way thanks for your reply,
    It's working for me.
    With happiness,
    -Parmy
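
    The answer referred to above is not quoted in this thread; the usual fix is to apply the filter in an inline view, so that CONNECT BY only walks the pre-filtered rows (a sketch using the columns from the question):

        select id, level
        from   (select * from relation where session_id = 79977)
        connect by prior id = root_id
        start with id = 5042;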

  • Need help with SQL Query with Inline View + Group by

    Hello Gurus,
    I would really appreciate your time and effort regarding this query. I have the following data set.
    Reference_No  Check_Number  Check_Date  Description                         Invoice_Number  Invoice_Type  Paid_Amount  Vendor_Number
    1234567       11223         7/5/2008    paid for cleaning                   44345563        I             20.00        19
    1234567       11223         7/5/2008    Adjustment for bad quality          44345563        A             10.00        19
    7654321       11223         7/5/2008    Adjustment from last billing cycle  23543556        A             50.00        19
    4653456       11223         7/5/2008    paid for cleaning                   35654765        I             30.00        19
    I am trying to write a query that aggregates Paid_Amount by Reference_No, Check_Number, Payment_Date, Invoice_Number, Invoice_Type and Vendor_Number, and displays the Description of the Invoice_Type 'I' record when there are multiple records with the same key. When there is only a single record, I want to display its own Description.
    The query should return the following data set
    Reference_No  Check_Number  Check_Date  Description                         Invoice_Number  Invoice_Type  Paid_Amount  Vendor_Number
    1234567       11223         7/5/2008    paid for cleaning                   44345563        I             10.00        19
    7654321       11223         7/5/2008    Adjustment from last billing cycle  23543556        A             50.00        19
    4653456       11223         7/5/2008    paid for cleaning                   35654765        I             30.00        19
    The following is my query. I am kind of lost.
    select B.Description, A.sequence_id,A.check_date, A.check_number, A.invoice_number, A.amount, A.vendor_number
    from (
    select sequence_id,check_date, check_number, invoice_number, sum(paid_amount) amount, vendor_number
    from INVOICE
    group by sequence_id,check_date, check_number, invoice_number, vendor_number
    ) A, INVOICE B
    where A.sequence_id = B.sequence_id
    Thanks,
    Nick

    It looks like it is a duplicate thread - correct me if I'm wrong in this case ->
    Need help with SQL Query with Inline View + Group by
    Regards.
    Satyaki De.

  • From Clause query with form variables

    forms 9.0.4 rdbms 9.2
    Is it possible to create a From Clause query with form variables generated from another block (but in the same form)? I am not having any success.
    I searched Metalink. It appears that, according to Doc ID 69884.1, in Forms 6i this is not possible. Metalink suggests in Doc ID 104771.1 implementing a dynamic from clause, but when I duplicate the example on my system, I receive an Oracle error. Further investigation from the web form (DISPLAY ERROR) indicates that the system does not see the dynamic value.
    Has anyone else run into this error? Has this been fixed in Forms 9.0.4 and am I just missing something? Does a dynamic from clause query work? Can anyone point me to an example, post an example, or offer any advice?
    thanks in advance

    As far as I know it is not possible to use block items in a from clause query in Forms 9.0.4. Here is my solution for a from-clause query via the Query Data Source Name property:
    To use the values of the block items in my from clause query, I implemented a database package with getter and setter routines for the block item values I needed for the query.
    In the KEY-EXEQRY trigger of the from-clause-query block I set the global package variables with the values of the block items I am interested in. In the from clause query I use those values in the WHERE clause via package functions which return the global package variables.
    Hope my solution will work for your problem.
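
    A minimal sketch of that package approach (all object and item names below are made up for illustration):

        CREATE OR REPLACE PACKAGE block_params AS
          PROCEDURE set_dept_id (p_dept_id IN NUMBER);
          FUNCTION  get_dept_id RETURN NUMBER;
        END block_params;
        /
        CREATE OR REPLACE PACKAGE BODY block_params AS
          g_dept_id NUMBER;

          PROCEDURE set_dept_id (p_dept_id IN NUMBER) IS
          BEGIN
            g_dept_id := p_dept_id;
          END set_dept_id;

          FUNCTION get_dept_id RETURN NUMBER IS
          BEGIN
            RETURN g_dept_id;
          END get_dept_id;
        END block_params;
        /

    The Query Data Source Name of the from-clause block could then be something like
    (SELECT empno, ename FROM emp WHERE deptno = block_params.get_dept_id), and the
    KEY-EXEQRY trigger would call block_params.set_dept_id with the control-block
    value before executing the query.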

  • Query creating duplicate records

    Hi
    I need to create a report that gives the po number, po line item number, material number and the delivery post code.
    I have created a query with the following tables:
    EKKO, EKPO, EBAN, VBEP, VBAK, VBPA, ADRC.
    I have deleted the wrong link between EKKO and EKPO and retained all other links as suggested.
    Could anybody let me know why there are duplicate records in the report, please?
    Thanks
    Desp09

    Desp09 wrote:
    Hi
    >
    > I have connected
    >
    > EKKO-EBELN to EKPO
    >
    >  EKPO -EBELN and EKPO-EBELP  to the same of EBAN
    >
    > also EKPO-BANFN and EKPO-BNFPO to the same of EBAN
    >
    > EBAN-BANFN and EBAN-BNFPO to VBEP
    >
    > VBEP-VBELN to VBAK-VBELn
    >
    > VBAK-VBELN to VBPA-VBELN
    >
    > VBPA-ADRNR to ADRC-ADDRNUMBER.
    >
    > This is the first time I have created a query so if you can kindly explain the logic of the error it would help in my future reports.
    >
    > Thanks
    > Priya
    Hi Priya,
    You can simplify the process if you make use of an LDB (logical database), which will fetch the same report using the standard LDB available in the system.
    You can define multiple infosets and pull the fields for an easier approach.
    Regards
    Shiva

  • CAML Query with 10 AND conditions

    Hello,
    I need some help with a CAML query. This particular query needs to have 10 AND conditions. Quite frankly, with all the nesting it is driving me a little nuts:
    What I have is:
    <Query>
    <Where>
    <And>
    <And>
    <And>
    <And>
    <And>
    <And>
    <And>
    <And>
    <And>
    <Eq><FieldRef Name='Column1' LookupId='TRUE' /><Value Type='Text'>10341</Value></Eq>
    <Eq><FieldRef Name='Column2' LookupId='TRUE' /><Value Type='Text'>9539</Value></Eq>
    </And>
    <Eq><FieldRef Name='Column3' LookupId='TRUE' /><Value Type='Text'>183</Value></Eq>
    </And>
    <Eq><FieldRef Name='Column4' LookupId='TRUE' /><Value Type='Text'>35</Value></Eq>
    </And>
    <IsNull><FieldRef Name='Column5' /></IsNull>
    </And>
    <Eq><FieldRef Name='Column6' LookupId='TRUE' /><Value Type='Text'>4387</Value></Eq>
    </And>
    <Eq><FieldRef Name='Column7' LookupId='TRUE' /><Value Type='Text'>4204</Value></Eq>
    </And>
    <Eq><FieldRef Name='Column8' LookupId='TRUE' /><Value Type='Text'>36</Value></Eq>
    </And>
    <Eq><FieldRef Name='Column9' LookupId='TRUE' /><Value Type='Text'>213</Value></Eq>
    </And>
    <IsNull><FieldRef Name='Column10' /></IsNull>
    </And>
    </Where>
    </Query>
    I have added this into my ItemAdding Event Receiver as it will basically do a check for duplicate items based on the 10 columns. 
    If anyone can help guide me in this, it would be much appreciated. I have been using a CAML Query Builder to help.

    http://webcache.googleusercontent.com/search?q=cache:xji7jOxa5_EJ:aasai-sharepoint.blogspot.com/2013/02/caml-query-with-multiple-conditions.html+&cd=3&hl=en&ct=clnk&gl=in
    http://stackoverflow.com/questions/6203821/caml-query-with-nested-ands-and-ors-for-multiple-fields
    Since you are not allowed to put more than two conditions in one condition group (And | Or) you have to create an extra nested group (MSDN). The expression
    A AND B AND C looks like this:
    <And>
    A
    <And>
    B
    C
    </And>
    </And>
    Your SQL like sample translated to CAML (hopefully with matching XML tags ;) ):
    <Where>
    <And>
    <Or>
    <Eq>
    <FieldRef Name='FirstName' />
    <Value Type='Text'>John</Value>
    </Eq>
    <Or>
    <Eq>
    <FieldRef Name='LastName' />
    <Value Type='Text'>John</Value>
    </Eq>
    <Eq>
    <FieldRef Name='Profile' />
    <Value Type='Text'>John</Value>
    </Eq>
    </Or>
    </Or>
    <And>
    <Or>
    <Eq>
    <FieldRef Name='FirstName' />
    <Value Type='Text'>Doe</Value>
    </Eq>
    <Or>
    <Eq>
    <FieldRef Name='LastName' />
    <Value Type='Text'>Doe</Value>
    </Eq>
    <Eq>
    <FieldRef Name='Profile' />
    <Value Type='Text'>Doe</Value>
    </Eq>
    </Or>
    If this helped you resolve your issue, please mark it Answered

  • How to re-write this big SELECT Query with INNER JOINs?

    Hi Experts
    I have a performance-killer SELECT query with an inner join of three tables - VBAP, VBAK and VBEP - which populates records into an internal table INT_COLL_ORD. Based on the records selected, another SELECT query fetches records from the VBUK table into the internal table INT_VBUK.
    SELECT A~VBELN A~POSNR A~MATNR A~KWMENG A~KBMENG A~ERDAT A~ERZET A~PSTYV D~AUART E~ETTYP E~EDATU
    INTO TABLE INT_TAB_RES
    FROM VBAP AS A INNER JOIN VBAK AS D
    ON D~VBELN EQ A~VBELN AND D~MANDT EQ A~MANDT
    INNER JOIN VBEP AS E
    ON E~VBELN EQ A~VBELN AND E~POSNR EQ A~POSNR AND E~MANDT EQ A~MANDT
    WHERE  A~VBELN IN s_VBELN AND
           D~auart in s_auart AND
           D~vkorg in s_vkorg AND
           D~vbtyp eq 'C'     AND
           ( ( matnr LIKE c_prefix_sp AND zz_msposnr NE 0 AND kbmeng EQ 0 )
           OR ( matnr LIKE c_prefix_fp AND kwmeng NE A~kbmeng ) ) AND
           A~ABGRU EQ SPACE AND
           A~MTVFP IN R_MTVFP AND
           A~PRCTR IN R_PRCT AND
           E~ETENR EQ '1'.
    SORT INT_COLL_ORD BY VBELN POSNR ETTYP.
    DELETE ADJACENT DUPLICATES FROM INT_TAB_RES COMPARING VBELN POSNR.
    CHECK NOT INT_TAB_RES [] IS INITIAL.
    SELECT VBELN UVALL CMGST INTO TABLE INT_VBUK
    FROM VBUK FOR ALL ENTRIES IN INT_TAB_RES
    WHERE VBELN = INT_TAB_RES-VBELN AND UVALL NE 'A'.
    Now, the requirement is:
    I want to split this query: first join VBAK and VBUK; then, with this selection, go to the inner join of VBAP and VBEP (on key VBELN) to get the results. How can I re-write this query?
    Please help.
    Thx n Rgds

    Hi Nagraj
    As per your suggestion, I have re-written the query as below:
    * Declarations
    TYPES: BEGIN OF TYP_COLL_ORD,
            VBELN  LIKE VBAK-VBELN,
            POSNR  LIKE VBUP-POSNR,
            MATNR  LIKE VBAP-MATNR,
            KWMENG LIKE VBAP-KWMENG,
            KBMENG LIKE VBAP-KBMENG,
            ERDAT  LIKE VBAK-ERDAT,
            ERZET  LIKE VBAK-ERZET,
            PSTYV  LIKE VBAP-PSTYV,
            AUART  LIKE VBAK-AUART, "already exists in type
            ETTYP  LIKE VBEP-ETTYP,
            EDATU  LIKE VBEP-EDATU.
    TYPES: END OF TYP_COLL_ORD.
    DATA: INT_COLL_ORD TYPE TABLE OF TYP_COLL_ORD WITH HEADER LINE.
    TYPES: BEGIN OF TYP_VBUK,
            AUART  LIKE VBAK-AUART, "have added this field
            VBELN  LIKE VBUK-VBELN,
            UVALL  LIKE VBUK-UVALL,
            CMGST  LIKE VBUK-CMGST.
    TYPES: END OF TYP_VBUK.
    DATA: INT_VBUK TYPE TABLE OF TYP_VBUK WITH HEADER LINE.
    * QUERY #1 - for VBAK & VBUK Join
    SELECT A~AUART B~VBELN B~UVALL B~CMGST
    INTO TABLE INT_VBUK
    FROM VBAK AS A INNER JOIN VBUK AS B
    ON A~VBELN EQ B~VBELN
    WHERE A~VBELN IN s_VBELN AND
    A~auart in s_auart AND
    A~vkorg in s_vkorg AND
    A~vbtyp eq 'C' AND
    B~UVALL NE 'A'.
    IF NOT INT_VBUK[] IS INITIAL.
    SORT INT_VBUK BY VBELN.
    DELETE ADJACENT DUPLICATES FROM INT_VBUK COMPARING VBELN.
    * QUERY #2 - for VBAP & VBEP Join
    SELECT A~VBELN A~POSNR A~MATNR A~KWMENG A~KBMENG A~ERDAT A~ERZET A~PSTYV B~ETTYP B~EDATU
    INTO TABLE INT_COLL_ORD
    FROM VBAP AS A INNER JOIN VBEP AS B
    ON B~VBELN EQ A~VBELN AND B~POSNR EQ A~POSNR AND B~MANDT EQ A~MANDT
    FOR ALL ENTRIES IN INT_VBUK
    WHERE A~VBELN = INT_VBUK-VBELN AND
    ( ( matnr LIKE c_prefix_sp AND zz_msposnr NE 0 AND kbmeng EQ 0 )
    OR ( matnr LIKE c_prefix_fp AND kwmeng NE A~kbmeng ) ) AND
    A~ABGRU EQ SPACE AND
    A~MTVFP IN R_MTVFP AND
    A~PRCTR IN R_PRCT AND
    B~ETENR EQ '1'.
    ENDIF.
      SORT INT_COLL_ORD BY  VBELN POSNR ETTYP.
      DELETE ADJACENT DUPLICATES FROM INT_COLL_ORD
        COMPARING VBELN POSNR.
      CHECK NOT INT_COLL_ORD[] IS INITIAL.
      LOOP AT INT_COLL_ORD.
        CLEAR: L_MTART,L_ATPPR,L_ETTYP.
        IF L_PREVIOUS_ETTYP NE INT_COLL_ORD-ETTYP OR
          L_PREVIOUS_AUART NE INT_COLL_ORD-AUART.
          READ TABLE INT_OVRCTL WITH KEY AUART = INT_COLL_ORD-AUART ETTYP = INT_COLL_ORD-ETTYP.
          CHECK SY-SUBRC NE 0.
    Now, the issue is:
    Please note that the declaration of INT_COLL_ORD has a field AUART, which is used in later parts of the program (see the statement just above).
    But since neither VBAP nor VBEP contains the AUART field, it cannot be fetched through QUERY #2, so this value is not populated into INT_COLL_ORD by the SELECT query.
    Since this field is used in a later part of the program and the internal table has no value for it, the program dumps!
    How can I include this value in INT_COLL_ORD?
    Please suggest.

  • Optimal read write performance for data with duplicate keys

    Hi,
    I am constructing a database that will store data with duplicate keys.
    For each key (a String) there will be multiple data objects, there is no upper limit to the number of data objects, but let's say there could be a million.
    Data objects have a time-stamp (Long) field and a message (String) field.
    At the moment I write these data objects into the database in chronological order, as I receive them, for any given key.
    When I retrieve data for a key and iterate across the duplicates for a given primary key using a cursor, they are fetched in ascending chronological order.
    What I would like to do is start fetching these records in reverse order, say just the last 10 records that were written to the database for a given key, and I was wondering if anyone had some suggestions on the optimal way to do this.
    I have considered writing data out in the order that I want to retrieve it, by supplying the database with a custom duplicate comparator. If I were to do this then the query above would return the latest data first, and I would be able to iterate over the most recent inserts quickly. But is there a performance penalty paid on writing to the database if I do this?
    I have also considered using the time-stamp field as the unique primary key for the primary database instead of the String, and creating a secondary database for the String, this would allow me to index into the data using a cursor join, but I'm not certain it would be any more performant, at least not on writing to the database, since it would result in a very flat b-tree.
    Is there a fundamental choice that I will have to make between write versus read performance? Any suggestions on tackling this much appreciated.
    Many Thanks,
    Joel

    Hi Joel,
    Using a duplicate comparator will slow down Btree access (writes and reads) to
    some degree because the comparator is called a lot during searching. But
    whether this is a problem depends on whether your app is CPU bound and how much
    CPU time your comparator uses. If you can avoid de-serializing the object in
    the comparator, that will help. For example, if you keep the timestamp at the
    beginning of the data and only read the one long timestamp field in your
    comparator, that should be pretty fast.
    Another approach is to store the negation of the timestamp so that records
    are sorted naturally in reverse timestamp order.
    Another approach is to read backwards using a cursor. This takes a couple
    steps:
    1) Find the last duplicate for the primary key you're interested in:
      cursor.getSearchKey(keyOfInterest, ...)
      status = cursor.getNextNoDup(...)
      if (status == SUCCESS) {
          // Found the next primary key, now back up one record.
          status = cursor.getPrev(...)
      } else {
          // This is the last primary key, find the last record.
          status = cursor.getLast(...)
      }
    2) Scan backwards over the duplicates:
      while (status == SUCCESS) {
          // Process one record
          // Move backwards
          status = cursor.getPrev(...)
      }
    Finally, another approach is to use a two-part primary key: {string, timestamp}.
    Duplicates are not configured because every key is unique. I mention this
    because using duplicates in JE has more overhead than using a unique primary
    key. You can combine this with either of the above approaches -- using a
    comparator, negating the timestamp, or scanning backwards.
    --mark

  • My system folder, including hidden folders, is littered with duplicate folders and files with the suffix "(from old Mac)". How do I remove them?

    I have a MacBook Pro running Leopard 10.5.8. I had a problem with my operating system (my fault, I moved a file I shouldn't have): it couldn't boot, but I was able to boot from a backup. I managed to repair my original system, except now all the system folders, including hidden folders, are littered with duplicate folders and files with the suffix "(from old Mac)". For the most part the dupes are an exact copy, but not always. I want to remove them to free up space, and I can't imagine duplicate folders in /System/Library are not hindering my computer. But I don't know where to start and am afraid of doing irreparable damage. Any ideas?

    pacull,
    Use iCal>View>Show Notifications to choose what to do with the notification.
