Select Query where multiple columns in multiple values (can't use IN clause)

I can use an IN clause with one column, like this:
Select code from table where code in (1,2,3)
------------------------------- My case -------------------------------
My table has a 4-column primary key, as shown below.
I need to:
select
where (code, month, year) in ((1,1,2013), (2,1,2014), (2,2,2015))
I can't write it this way:
select where code in (1,2) and month in (1,2) and year in (2013,2014,2015)
because I'd get my rows, but other rows would be included too, like (1,1,2015), (1,1,2014), (2,1,2013), etc.
I really want to solve this problem.
Please help me.
Code (pk) | Month (pk) | Year (pk) | emp_code (pk)
1         | 1          | 2013      | 101
1         | 1          | 2013      | 102
2         | 1          | 2013      | 101
2         | 1          | 2013      | 102
1         | 2          | 2013      | 101
1         | 2          | 2013      | 102
2         | 2          | 2013      | 101
2         | 2          | 2013      | 102
1         | 1          | 2014      | 101
1         | 1          | 2014      | 102
2         | 1          | 2014      | 101
2         | 1          | 2014      | 102
1         | 2          | 2014      | 101
1         | 2          | 2014      | 102
2         | 2          | 2014      | 101
2         | 2          | 2014      | 102
1         | 1          | 2015      | 101
1         | 1          | 2015      | 102
2         | 1          | 2015      | 101
2         | 1          | 2015      | 102
1         | 2          | 2015      | 101
1         | 2          | 2015      | 102
2         | 2          | 2015      | 101
2         | 2          | 2015      | 102
Thank you.

In T-SQL you have to use OR-ed predicates. In full ANSI Standard SQL you can write row comparisons such as (a, b, c) = (1, 2, 3), but not in the T-SQL dialect. Ignoring that problem, what you have is a design flaw called attribute splitting: you have put one unit of measurement in two columns.
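A minimal sketch of both forms in T-SQL; the table name emp_period is a hypothetical stand-in for the table above, and the VALUES table constructor needs SQL Server 2008 or later:

-- OR-ed predicates: one parenthesised AND group per wanted (code, month, year) tuple
SELECT code, month, year, emp_code
FROM emp_period
WHERE (code = 1 AND month = 1 AND year = 2013)
   OR (code = 2 AND month = 1 AND year = 2014)
   OR (code = 2 AND month = 2 AND year = 2015);

-- Same result: join to a derived table that holds the wanted tuples
SELECT t.code, t.month, t.year, t.emp_code
FROM emp_period AS t
JOIN (VALUES (1, 1, 2013),
             (2, 1, 2014),
             (2, 2, 2015)) AS wanted(code, month, year)
  ON  t.code  = wanted.code
  AND t.month = wanted.month
  AND t.year  = wanted.year;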
I like the MySQL convention of using double zeroes for months and years, that is, 'yyyy-mm-00' for a month within a year and 'yyyy-00-00' for the whole year. The advantages are that it sorts with the ISO-8601 date format required by Standard SQL and it is language independent. The patterns for validation are '[12][0-9][0-9][0-9]-00-00' and '[12][0-9][0-9][0-9]-[01][0-9]-00'.
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

Similar Messages

  • Performance of update query for single column vs multiple column

    Hi All,
    I could not find any answer for this: does it ever matter, in terms of performance, whether you update a single column versus multiple columns in a single UPDATE query?
    For example, for a table consisting of 15 columns, what would be the difference in performance between a query that updates one column and another query that updates all 15 columns?
    Please keep in mind that my table could actually have around 150+ columns.
    Thanks for any information provided.

    If the updated columns aren't in the WHERE clause, then the only impact of updating 15 columns will be an increase in redo generation and possibly row chaining.
    Since redo is one of the things that has a large impact, the answer is yes:
    the performance will be slower.
    Regards
    Helio Dias.
    http://heliodias.com
    OCE SQL, OCP 9i

  • Indexing multiple columns in multiple tables

    I have multiple tables in which I want to search. I need to do a text search that supports fuzzy logic, for which I've currently set up a CONTEXT index using the USER_DATASTORE. I also need to search columns such as numbers/dates/timestamps, which from what I understand is not supported by the CONTEXT search. I'm looking at setting up a second index of type CTXCAT for this purpose, but I will need to index multiple columns in multiple tables. Is this possible?
    Can someone advise on the best way to create indexes and search when a table schema such as the following exists. I've tried to keep it simple by just giving a few example columns and tables.
    Order Table
    - Has columns related to the order details - order name (varchar2), description (varchar2), date order placed (timestamp), date order completed (date), order amount (number), customer Id
    Customer Table
    - Has columns related to the customer information - customer name, address, city, state, telephone etc (all varchar2 fields)
    Items Table
    - Has details about the items being ordered - item name (varchar2), item description (varchar2), cost (number) etc
    Order-Item Table
    - Table that maps an order to the items in that order - orderId, itemId, quantity
    Comments Table
    - Logs any comments with the customer - comment description (varchar2), call type (varchar2), comment date (timestamp)
    Currently with the Context index, I have it set up so I can search all text columns in all tables for a search term. This works fine.
    I now need to be able to do more advanced searches, where I can search for specific text in all orders, as well as orders created after a certain date, orders above a certain amount, or orders with an item quantity of more than 10. The text has to be searched across all text columns in all tables. How can I achieve this with Oracle Text?

    There was a similar discussion with various ideas that may help you here:
    How can I make CONTAINS query work for a date range
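    For the mixed structured-plus-text part, a sketch of the CTXCAT route mentioned in the question; the table and column names (orders, order_name, order_placed, order_amount) only mirror the example schema, and the details should be checked against your Oracle Text version:

    exec ctx_ddl.create_index_set('order_iset');
    exec ctx_ddl.add_index('order_iset', 'order_placed');
    exec ctx_ddl.add_index('order_iset', 'order_amount');

    create index order_name_cat_idx on orders (order_name)
      indextype is ctxsys.ctxcat
      parameters ('index set order_iset');

    -- CATSEARCH then combines a text term with structured predicates on the index-set columns
    select *
      from orders
     where catsearch(order_name, 'widget', 'order_amount > 100 order by order_placed desc') > 0;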

  • Indexing multiple columns of multiple tables

    Hi,
    I'm trying to index multiple columns of multiple tables.
    As I have seen, the way to do this is using User_Datastore.
    Do the tables have to share a (foreign) key? My tables have only 2 or 3 similar columns (description, tagnr, ...).
    I want to get the different tagnr belonging to the same description etc.
    Can I do this?
    Does anyone have sample code for indexing multiple tables?
    Any suggestion would be helpful.
    Arsineh

    A USER_DATASTORE works like this:
    create table A
    ( id    number primary key,
      textA varchar2(100));
    create table B
    ( id    number primary key,
      textB varchar2(100));
    create or replace procedure foo (rid in rowid, v_document in out varchar2)
    is
      v_textA varchar2(2000);
      v_idA   number;
      v_textB varchar2(2000);
    begin
      -- row of the indexed (master) table A
      select id, textA
        into v_idA, v_textA
        from A
       where rowid = rid;
      -- related row from table B
      select textB
        into v_textB
        from B
       where id = v_idA;
      -- return the concatenated document to the indexer
      v_document := v_textA || ' ' || v_textB;
    end;
    /
    Then create a USER_DATASTORE preference that names foo and create the Text index, e.g. on A (textA). You can also build the index on table B instead; it depends on which table you want the trigger built on to keep the documents in sync.

  • Need query to convert Single Row Multiple Columns To Multiple rows

    Hi,
    I have a table with single row like below
    Column0 | Column1 | Column2 | Column3 | Column4|
    Value0    | Value1    | Value2    | Value3    |  Value4  |
    I am looking for a query to convert the table data above into multiple rows, each row holding the column name and its value, as shown below:
    Column0 | Value0
    Column1 | Value1
    Column2 | Value2
    Column3 | Value3
    Column4 | Value4
    Thanks in advance.
    Mohan

    Hi ykMohan,
    Dynamic UNPIVOT can be applied in this case as well.
    CREATE TABLE dbo.T(ID INT,Column0 VARCHAR(99),Column1 VARCHAR(99),Column2 VARCHAR(99),Column3 VARCHAR(99),Column4 VARCHAR(99))
    INSERT INTO T VALUES
    (1,'Value0','Value1','Value2','Value3','Value4'),
    (2,'Value0','Value1','Value2','Value3','Value4');
    DECLARE @columns VARCHAR(MAX)
    SELECT @columns =
    STUFF((
        SELECT ',' + COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'T' AND TABLE_SCHEMA = 'dbo' AND COLUMN_NAME NOT IN ('ID')
        FOR XML PATH('')
    ), 1, 1, '')
    DECLARE @Sql NVARCHAR(MAX)
    SET @Sql =
    'SELECT ID, UPT.col,UPT.val FROM T
    UNPIVOT
    (val FOR col IN('+@columns+')) AS UPT'
    EXEC sp_executeSQL @Sql
    DROP TABLE T
    If you have any feedback on our support, you can click
    here.
    Eric Zhang
    TechNet Community Support

  • Stored procedure in  package return multiple columns from multiple tables

    Hi ,
    Can a single stored procedure return multiple column values from different tables.
    example:
    tabA: col2, tabB:col3,tabC:col4 etc.
    One more question:
    If a stored procedure has to return 10 columns for a particular record from a single table, do I need to define a TYPE statement for each column, like:
    TYPE col1 IS TABLE OF varchar
    TYPE col2 IS TABLE OF varchar
    Here I want to return only one row, not many rows.
    thanks

    You can try one procedure with OUT or IN/OUT parameters that collect the values from one or more sql statements.
    CREATE OR REPLACE PROCEDURE P1
      (P_COD   IN  TABLE1.COD%TYPE,
       P_DESC1 OUT TABLE1.DESC1%TYPE,
       P_DESC2 OUT TABLE2.DESC2%TYPE)
    IS
    BEGIN
      SELECT table1.DESC1, table2.DESC2
        INTO P_DESC1, P_DESC2
        FROM table1, table2
       WHERE table1.COD = P_COD
         AND table1.cod = table2.cod;
    END P1;
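    A quick sketch of calling it from an anonymous block; the key value 42 is only an illustrative placeholder:
    DECLARE
      v_desc1 TABLE1.DESC1%TYPE;
      v_desc2 TABLE2.DESC2%TYPE;
    BEGIN
      P1(P_COD => 42, P_DESC1 => v_desc1, P_DESC2 => v_desc2);
      DBMS_OUTPUT.PUT_LINE(v_desc1 || ' / ' || v_desc2);
    END;
    /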
    JP

  • Index Multiple Column of Multiple Tables

    Hi All,
    I would like to know how to create an index that can search through all columns in my database tables. E.g. I have 30 tables and every table has around 10 columns. I want to create an index that can search through the columns in these tables.
    I know that USER_DATASTORE can help in creating a multiple-column search across multiple tables. But in my case the BLOB created would be very large. Any workaround? I mean, is there any solution like a concatenated datastore?
    Thank You.
    Regards,
    LG Tan

    Hi,
    I figured out how to do this today. The first thing is that the type of index you need is a USER_DATASTORE.
    The idea behind this type of index is pretty straight forward but the documentation does a very good job of not drawing attention to just how powerful it is.
    The idea behind a USER_DATASTORE is that you can write your own stored procedure to extract the data that you want to index and return it to the indexer. Take an example where you have a master table which contains enough information to allow you to find associated data in other tables, i.e. a shared key. When you set up a USER_DATASTORE index, you specify the name of a stored procedure that the indexer will call for each row in the master table. The stored procedure has one input and one output parameter: rowid (in) and clob (out).
    When the index is created, the stored procedure you specify is, as I said above, called for each row in the master table. Your stored procedure uses this ROWID to extract the shared key (this can be anything you want) from the master table and uses it to build the necessary SELECT statement to retrieve the related data from the other tables. The rest of the stored procedure simply appends the data returned by your SELECT statement to the return CLOB. The indexer then indexes the information in this CLOB and discards the data.
    The index can of course only return hits against the master table. It's up to your application to extract shared key from the returned row(s), bind to the other tables and present the results.
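    A minimal sketch of such a procedure and the preference that registers it; the names (orders as the master table, comments as the related table, customer_id as the shared key, my_ds_proc, my_ds) are illustrative assumptions, and depending on the Oracle version the procedure may have to live in, or be granted to, the CTXSYS schema:

    create or replace procedure my_ds_proc (rid in rowid, doc in out nocopy clob) is
      v_buf varchar2(32767);
    begin
      -- the master-table row the indexer is currently asking about
      select o.order_name || ' ' || o.description
        into v_buf
        from orders o
       where o.rowid = rid;
      -- append the related rows found via the shared key
      for c in (select c.comment_description
                  from comments c
                  join orders o on o.customer_id = c.customer_id
                 where o.rowid = rid) loop
        v_buf := v_buf || ' ' || c.comment_description;
      end loop;
      doc := v_buf;  -- hand the concatenated document back to the indexer
    end;
    /

    exec ctx_ddl.create_preference('my_ds', 'USER_DATASTORE');
    exec ctx_ddl.set_attribute('my_ds', 'PROCEDURE', 'my_ds_proc');

    create index orders_txt_idx on orders (description)
      indextype is ctxsys.context
      parameters ('datastore my_ds');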
    You will find a basic example of how to implement USER_DATASTORES in the Oracle Text Reference Guide (http://download.oracle.com/otndoc/oracle9i/901_doc/text.901/a90121.pdf). Feel free to email me if you want some example code.
    Dean

  • Writing several / multiple columns in a *.lvm file using write file option

    Hello All,
    I am doing several measurements, and until now I have been writing each measurement to an individual file, so I am forced to use an external program to merge the files into one file with several columns.
    Is there a possibility to write an *.lvm file (or some other format, say a *.txt file, but not Excel) with multiple columns, where each column stands for a particular measurement?
    I am attaching a simple example where I have 4 different measurements (simulated using a regulator; I don't know what this VI is called in English), which I am converting into an array and trying to write to a file with the *.lvm extension. But the output is still a single column, where every measurement takes a different row, which I don't want.
    Thanks in advance.
    Jan
    Attachments:
    Unbenannt 1.vi (97 KB)

    Instead of using the Build Array, just wire your scalars to the Merge Signals function. This will create 4 separate signals that will be written in 4 separate columns. With the existing 1D array, you could also use the Write to Spreadsheet File instead of Write to Measurement File.

  • Select query WHERE condition

    Hi, in my requirement I have to place a WHERE condition in a SELECT query using the asset value date. The asset value date has a from-date and an end-date. The from-date is present in one table and the end-date is present in another table, so how do I write the WHERE condition? Please give some suggestions.

    Hi hema,
    Use subquery as follows;
    Select.....
    .....Where asset_date BETWEEN (Select single asset_fromdate from fromdate_table where key_field = value) AND (Select single asset_todate from todate_table where key_field = value)
    Hope this helps you...
    Regards
    Karthik D

  • Slow Select Query - Where clause contains Seconday index field +other flds

    Hi friends,
    The query below takes about an hour to execute on the production server when there are about 6 million records in the PLAF table. I have verified the trace in ST05, and the correct secondary index (Material MATNR + Plant PLWRK) is being selected.
    SELECT plnum
           matnr
           plwrk
           pedtr
           dispo
           rsnum
      FROM plaf
      INTO TABLE it_orders
     WHERE ( ( matnr IN r_mat1 ) OR
             ( matnr IN r_mat2 AND dispo IN s_mrp1 ) ) AND
           pedtr IN s_date AND
           obart = '1'.
    Will it be a good idea to have only MATNR (secondary index field) in the where condition of the select query and delete the internal table entries for the other where conditions ?
    Edited by: Shruthi Seth on Feb 1, 2009 10:10 AM

    Hello.
    Creating a range r_mat = r_mat1 + r_mat2, I would do something like:
    READ TABLE s_mrp1 TRANSPORTING NO FIELDS INDEX 1.
    IF sy-subrc EQ 0.
      SELECT plnum matnr plwrk pedtr dispo rsnum
        FROM plaf
        INTO wa_orders
       WHERE matnr IN r_mat
         AND pedtr IN s_date
         AND obart = '1'.
        IF wa_orders-matnr IN r_mat2.
          CHECK wa_orders-dispo IN s_mrp1.
        ENDIF.
        APPEND wa_orders TO it_orders.
      ENDSELECT.
    ELSE.
      SELECT plnum matnr plwrk pedtr dispo rsnum
        FROM plaf
        INTO TABLE it_orders
       WHERE matnr IN r_mat1
         AND pedtr IN s_date
         AND obart = '1'.
    ENDIF.
    Regards,
    Valter Oliveira.

  • Multiple Column hiding in advance table using Switcher

    Hi All,
    I have a requirement to hide multiple columns in an advanced table using switchers.
    Let's say I am searching for parties on the party search page. If the party is of type Person, then two columns should be visible: firstName and lastName.
    If the party is of type Organization, then the firstName and lastName columns should be hidden and only the PartyName column should be visible.
    Is this possible through switchers? If yes, please explain.
    Br, 903096

    Hi ,
    This can be done through a switcher case. Along with the switcher, you also need to use SPEL binding for each of the attributes that you wish to hide.
    Go through the delete exercise to understand how to implement switcher cases.
    Let me know if you need any help.
    --Keerthi

  • Select Query where multiple col1+col2 in ('value1','value2','value3')

    Hello,
    I have a table:
    month year .. other columns
    1 2012
    2 2012
    1 2013
    2 2013
    1 2014
    2 2014
    I have multi-select filters for the years, then for the months of those years:
    1- Year checkcombobox - all available years in the table, e.g. (2012, 2013, ...)
    2- Month checkcombobox - all months for the years selected above, e.g. (1-2013, 2-2013, ...)
    I want to select from my table where month & year = a selected month & year.
    I currently do it like this, but I think this solution has a performance problem:
    I use a stored procedure with a @monthyear nvarchar(max) parameter holding the selected months and years as text, like '1-2012,2-2012,6-2013'.
    - I use the 'uf_ParseDelimitedString2' function to split the string above into a table of strings.
    The query:
    select from mytable where convert(nvarchar(10),mytable.month) + '-' + convert(nvarchar(10),mytable.year) in (select string from uf_ParseDelimitedString2(@monthyear))
    - The function used in the query above to parse the delimited string and return a table of strings:
    ALTER FUNCTION [dbo].[uf_ParseDelimitedString2] (@strToParse VARCHAR(MAX))
    RETURNS @tblStrToParse TABLE (string nvarchar(max))
    AS
    BEGIN
        DECLARE @pos int
        DECLARE @piece nvarchar(max)
        -- Need to tack a delimiter onto the end of the input string if one doesn't exist
        IF RIGHT(RTRIM(@strToParse), 1) <> ','
            SET @strToParse = @strToParse + ','
        SET @pos = PATINDEX('%,%', @strToParse)
        WHILE @pos <> 0
        BEGIN
            SET @piece = LEFT(@strToParse, @pos - 1)
            -- You have a piece of data, so insert it, print it, do whatever you want with it.
            INSERT INTO @tblStrToParse VALUES (@piece)
            SET @strToParse = STUFF(@strToParse, 1, @pos, '')
            SET @pos = PATINDEX('%,%', @strToParse)
        END
        RETURN
    END
    Thank you in advance.

    Hi,
    review this; I created a function:
    create FUNCTION [dbo].[uf_ParseDelimitedString2Col](@t VARCHAR(MAX))
    RETURNS @tblStrToParse TABLE
    ([1] int ,[2] int)
    AS
    BEGIN
    insert into @tblStrToParse
    SELECT [1], [2]
    FROM (
    SELECT
    t2.id
    , t2.name
    , rn2 = ROW_NUMBER() OVER (PARTITION BY t2.id ORDER BY 1/0)
    FROM (
    SELECT
    id = t.c.value('@n', 'INT')
    , name = t.c.value('@s', 'nvarchar(20)')
    FROM (
    SELECT x = CAST('<t s = "' +
    REPLACE(token + '-', '-', '" n = "' + CAST(rn AS VARCHAR(10))
    + '" /><t s = "') + '" />' AS XML)
    FROM (
    SELECT
    token = t.c.value('.', 'VARCHAR(100)')
    , rn = ROW_NUMBER() OVER (ORDER BY 1/0)
    FROM (
    SELECT x = CAST('<t>' + REPLACE(@t, ',', '</t><t>') + '</t>' AS XML)
    ) r
    CROSS APPLY x.nodes('/t') t(c)
    ) t
    ) d
    CROSS APPLY x.nodes('/t') t(c)
    ) t2
    WHERE t2.name != ''
    ) t3
    PIVOT (
    MAX(name) FOR rn2 IN ([1], [2])
    ) p
    Return
    END
    Then I used this code:
    select * from mytable st cross apply dbo.uf_ParseDelimitedString2Col('1-2014,3-2014') x where st.month = x.[1] and st.year = x.[2]
    How about that?
    Thank you.
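    As a follow-up sketch of a leaner variant, assuming SQL Server 2016 or later (where STRING_SPLIT is available) and the same hypothetical mytable(month, year):

    select st.*
    from mytable as st
    join (select parsename(replace(value, '-', '.'), 2) as [month],
                 parsename(replace(value, '-', '.'), 1) as [year]
          from string_split('1-2014,3-2014', ',')) as x
      on st.month = x.[month]
     and st.year = x.[year]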

  • Count(*) , group by with multiple columns from multiple tables involved

    Hi all,
    I am relatively new to SQL.
    Currently I have a requirement to display quite a number of fields from 3 tables in a report.
    In my query I need to:
    1.) count(*)
    2.) select quite a number of fields from table 1,2,3
    However, when count(*) is used, GROUP BY has to be used too.
    How do I actually use GROUP BY when so many columns are selected?
    I have actually used the query below, but the count(*) returns 1; the correct output should be 3 instead.
    select count(*), table1.col1, table1.col2, table1.col3, table2.col3, table2.col4, table2.col6, table3.col1, table3.col4, table3.col5
    from table1, table2, table3
    where
    <conditions>........................
    group by table1.col1, table1.col2, table1.col3, table2.col3, table2.col4, table2.col6, table3.col1, table3.col4, table3.col5
    I know this group by statement looks very unrefined. How can I select multiple fields from different tables, and yet get the count(*) correctly?
    Thank you so much for your time.

    Hmm yes it actually does return count as 1 for each row. But there are 3 rows returned. E.g.
    ctr table1.col1 table1.col2 ..........
    1 value1 value1
    1 value2 value3
    1 value3 value4
    If I put the count(*) outside, it returns 3 , the correct output
    ctr
    3
    select count(*) from
    (
    select table1.col1, table1.col2, table1.col3, table2.col3, table2.col4, table2.col6, table3.col1, table3.col4, table3.col5
    from table1, table2, table3
    where
    <conditions>
    group by table1.col1, table1.col2, table1.col3, table2.col3, table2.col4, table2.col6, table3.col1, table3.col4, table3.col5
    )
    Thus I was wondering if it was the GROUP BY on multiple columns that resulted in the count being stuck at 1.
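    As a hedged aside, if the goal is the number of grouped rows alongside the detail rows, an analytic count applied after the GROUP BY avoids the outer wrapper; the columns are abbreviated and <conditions> is the placeholder from the original post:

    select count(*) over () as total_rows,
           table1.col1, table1.col2, table2.col4
      from table1, table2, table3
     where <conditions>
     group by table1.col1, table1.col2, table2.col4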

  • Duplication in SELECT query with XML column

    Oracle 11gR1 RHEL 5
    Hi all,
    I am having a small problem. I am selecting some rows from an XML column with the following query and for each row I get a new set of columns that are displayed.
    select extractvalue(old_row,'/xml/WORK_ITEM_RID') WORK_ITEM_RID,
    extractvalue(old_row,'/xml/PARENT_RID') PARENT_RID,
    extractvalue(old_row,'/xml/ASSIGNED_TO') ASSIGNED_TO
    from audit_trail
    where audit_trail_rid = 177147;
    So instead of getting this:
    WORK_ITEM_RID PARENT_RID ASSIGNED_TO
    4045 4044 2930
    I get:
    WORK_ITEM_RID PARENT_RID ASSIGNED_TO
    4045
    WORK_ITEM_RID PARENT_RID ASSIGNED_TO
    4044
    WORK_ITEM_RID PARENT_RID ASSIGNED_TO
    2930
    How can I get rid of this?
    Thanks.
    Edited by: JrOraDBA on Feb 12, 2010 10:14 AM

    Here is what I got so far, but it keeps telling me that I am missing a right parenthesis (ORA-00907) at the assigned_date TO_CHAR(...
    insert into iswrnew.WORK_ITEM (WORK_ITEM_RID, PARENT_RID, ASSIGNED_TO, WORK_ITEM_TYPE_RID, FPRC_APPLICATION_RID, REQUEST_RID, ASSIGNED_DATE, EST_START_DATE, EST_COMPLETION_DATE, SHORT_DESCRIPTION, ASSIGNED_BY, COMPLETED, COMPLETION_DATE, LONG_DESCRIPTION)
    SELECT t1.work_item_rid, t1.parent_rid, t1.assigned_to, t1.work_item_type_rid, t1.fprc_application_rid, t1.request_rid, t1.assigned_date, t1.est_start_date, t1.est_completion_date, t1.short_description, t1.assigned_by, t1.completed, t1.completion_date, t1.long_description
      FROM fprchr.audit_trail,
           XMLTable('/xml'
                    PASSING fprchr.audit_trail.old_row
                    COLUMNS
                    work_item_rid                  NUMBER PATH 'WORK_ITEM_RID',
                    parent_rid                     NUMBER PATH 'PARENT_RID',
                    assigned_to                 NUMBER PATH 'ASSIGNED_TO',
                    work_item_type_rid            NUMBER PATH 'WORK_ITEM_TYPE_RID',
                    fprc_application_rid        NUMBER PATH 'FPRC_APPLICATION_RID',
                    request_rid                    NUMBER PATH    'REQUEST_RID',
                    *assigned_date                TO_CHAR(assigned_date,'YYYY-MM-DD HH24:MI:SS') PATH    'ASSIGNED_DATE',*
                    est_start_date                TO_CHAR(est_start_date,'YYYY-MM-DD HH24:MI:SS') PATH    'EST_START_DATE',
                    est_completion_date          TO_CHAR(est_completion_date,'YYYY-MM-DD HH24:MI:SS') PATH    'EST_COMPLETION_DATE',
                    short_description            VARCHAR2(50) PATH 'SHORT_DESCRIPTION',
                    assigned_by                    NUMBER PATH    'ASSIGNED_BY',
                    completed                    VARCHAR2(1) PATH 'COMPLETED',
                    completion_date                TO_CHAR(completion_date,'YYYY-MM-DD HH24:MI:SS') PATH 'COMPLETION_DATE',
                    long_description            VARCHAR2(4000) PATH 'LONG_DESCRIPTION') t1
    WHERE audit_trail_rid = 177147;
    I think I'm missing something obvious but I just cannot see it. I need a fresh pair of eyes to look at this :)
    Thanks for all your help
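    For what it's worth, a sketch of the usual fix: the XMLTable COLUMNS clause only accepts a datatype and a PATH, so the TO_CHAR/TO_DATE conversions belong in the outer SELECT instead (only a few of the columns are shown, and the date format assumes the XML stores 'YYYY-MM-DD HH24:MI:SS' text):

    SELECT t1.work_item_rid,
           t1.parent_rid,
           TO_DATE(t1.assigned_date, 'YYYY-MM-DD HH24:MI:SS') AS assigned_date
      FROM fprchr.audit_trail,
           XMLTable('/xml'
                    PASSING fprchr.audit_trail.old_row
                    COLUMNS
                      work_item_rid NUMBER       PATH 'WORK_ITEM_RID',
                      parent_rid    NUMBER       PATH 'PARENT_RID',
                      assigned_date VARCHAR2(30) PATH 'ASSIGNED_DATE') t1
     WHERE audit_trail_rid = 177147;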

  • Select first field from column in group value

    Hello,
    I need to return the first row of data grouped by the first field (VisitID) in a multi-column table:
    VisitID       AdmitDate            Unit       Room          OrderCode
    001041      2014-08-01         2E          202            SWCC
    001041      2014-08-01         2E          202            NULL
    006811      2014-08-01         2E          204            SWCC
    008815      2014-08-01         2E          206            NULL
    004895      2014-08-01         2E          207            SWFA
    004895      2014-08-01         2E          207            SWCC
    004895      2014-08-01         2E          207            NULL
    To return:
    001041      2014-08-01         2E          202            SWCC
    006811      2014-08-01         2E          204            SWCC
    008815      2014-08-01         2E          206            NULL
    004895      2014-08-01         2E          207            SWFA
    I currently have a GROUP BY clause with all field names, and have tried MAX on the OrderCode in the SELECT, which doesn't work, and FIRST isn't recognized in SQL Server. Do I need a subquery (if so, how, as I'm newer to writing SQL), or what is another solution? Thank you.

    create table t (VisitID varchar(50), AdmitDate date, Unit varchar(50),Room varchar(50),OrderCode varchar(50))
    insert into t values ('001041' , '2014-08-01' , '2E' , 202 , 'SWCC' ),
    ('001041', '2014-08-01' , '2E' , 202 , NULL),
    ('006811' , '2014-08-01' , '2E' , 204 , 'SWCC'),
    ('008815' , '2014-08-01' , '2E' , 206 , NULL),
    ('004895' , '2014-08-01' , '2E' , 207 , 'SWFA'),
    ('004895' , '2014-08-01' , '2E' , 207 , 'SWCC'),
    ('004895' , '2014-08-01' , '2E' , 207 , NULL)
    --To return:
    --001041 2014-08-01 2E 202 'SWCC'
    --006811 2014-08-01 2E 204 'SWCC'
    --008815 2014-08-01 2E 206 NULL
    --004895 2014-08-01 2E 207 'SWFA'
    select VisitID,AdmitDate,Unit,Room , max(OrderCode) as OrderCode from t
    group by VisitID,AdmitDate,Unit,Room
    Order by VisitID
    --Or
    select VisitID,AdmitDate,Unit,Room ,OrderCode from (
    select VisitID,AdmitDate,Unit,Room ,OrderCode, Row_number() Over(Partition By VisitID Order by OrderCode DESC) rn from t
    )t
    WHERE rn=1
    Order by VisitID
    drop table t
