What is the best way to write SQL?

Sample Case
drop table t;
drop table b;
create table t ( a varchar2(4), b number, c varchar2(1));
insert into t values ('A00', 10, 'R');
insert into t values ('A01', 11, 'R');
insert into t values ('A02', 12, 'R');
insert into t values ('A03', 13, 'R');
insert into t values ('A00', 10, 'P');
insert into t values ('A01', 11, 'P');
insert into t values ('A02', 12, 'P');
insert into t values ('A03', 13, 'P');
commit;
create table b ( j varchar(4), k varchar2(1), l varchar2(5), m number(3), n varchar2(5), o number(3));
insert into b values ('A00', 'P', 'FIXED', 100, 'FLOAT', 60);
insert into b values ('A01', 'P', 'FIXED', 101, 'FIXED', 30);
insert into b values ('A02', 'R', 'FLOAT', 45, 'FLOAT', 72);
insert into b values ('A03', 'R', 'FIXED', 55, 'FLOAT', 53);
commit;
10:19:13 SQL> select * from t;
A B C
A00 10 R
A01 11 R
A02 12 R
A03 13 R
A00 10 P
A01 11 P
A02 12 P
A03 13 P
8 rows selected.
10:19:19 SQL> select * from b;
J K L M N O
A00 P FIXED 100 FLOAT 60
A01 P FIXED 101 FIXED 30
A02 R FLOAT 45 FLOAT 72
A03 R FIXED 55 FLOAT 53
1/     In table t, each reference has 2 records: one with 'P' and another with 'R'
2/     In table b, each reference is merged into a single record, and there are many records which do not exist in table t
3/     Tables t and b can be joined using a = j
4/     If, for a reference in table t, the indicator is 'P' then I have to pick up the l and m columns; if it is 'R' then I have to pick up the n and o columns
5/     I want the output in the following format
A00     P     FIXED          100
A00     R     FLOAT          60
A01     P     FIXED          101
A01     R     FIXED          30
A02     P     FLOAT          72
A02     R     FLOAT          45
A03     P     FLOAT          53
A03     R     FIXED          55
6/     The above is a sample output. In this example I have picked up only the l, m, n, o columns, but in the real case there are many columns (around 40) to be selected, so using "case when" for each one may not be practical.
Kindly suggest the best way to write this SQL?
thanks & regards
pjp

Is this?
select b.j,t.c as k,decode(t.c,'P',l,n) as l,decode(t.c,'P',m,o) as m
from t,b
where t.a=b.j
order by j,k
J K L M
A00 P FIXED 100
A00 R FLOAT 60
A01 P FIXED 101
A01 R FIXED 30
A02 P FLOAT 45
A02 R FLOAT 72
A03 P FIXED 55
A03 R FLOAT 53
8 rows selected.
or is this?
select b.j,t.c as k,decode(t.c,b.k,l,n) as l,decode(t.c,b.k,m,o) as m
from t,b
where t.a=b.j
order by j,k
J K L M
A00 P FIXED 100
A00 R FLOAT 60
A01 P FIXED 101
A01 R FIXED 30
A02 P FLOAT 72
A02 R FLOAT 45
A03 P FLOAT 53
A03 R FIXED 55
8 rows selected.
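
Given point 6/ (around 40 columns, so wrapping every column in DECODE or "case when" is impractical), one alternative is to split the join on whether t.c matches b.k and list each column set only once per branch. This is only a sketch against the sample tables above, and it assumes the second interpretation (pick l and m when the indicator in t matches b.k, otherwise n and o), which is the one whose output matches the sample in 5/:
select b.j, t.c as k, b.l, b.m
from t, b
where t.a = b.j
and t.c = b.k
union all
select b.j, t.c as k, b.n as l, b.o as m
from t, b
where t.a = b.j
and t.c <> b.k
order by j, k
You still list each of the 40 columns twice (once per branch), but without a DECODE or CASE around every one of them.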

Similar Messages

  • What is the best way to practice SQL language?

    I’m new in database world and want to practice SQL language. I’ve been playing around with Oracle XE, but I realized it’s not very practical to play around with SQL using XE since its sql editor is not user friendly to debug the script. I’m trying to build schemas from scratch and play around with it using SQL. What is the best way to do this?
    Thanks in advance

    Valerie Debonair wrote:
    I’m new in database world and want to practice SQL language. I’ve been playing around with Oracle XE, but I realized it’s not very practical to play around with SQL using XE since its sql editor is not user friendly to debug the script.
    I do not think that is a valid criticism at all. The basic tools needed to learn SQL are SQL*Plus and a willingness to learn.
    There is no "debugging" for SQL either... except to break it into simpler steps, testing that... and perhaps using "explain plan" to get the execution plan.
    Granted that SQL*Plus is not the best tool for displaying data... but then learning SQL should be done using small data sets (not too many columns and few rows) - as even a small data set can represent all the data model complexities needed for learning SQL.
    The examples you use, the test tables and the practical exercises used in the learning process are far more important than how "pretty" the tool being used is.
    FWIW, I do 99% of all my SQL work and PL/SQL development using SQL*Plus - it is a very capable tool.
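    As a minimal illustration of the "explain plan" step mentioned above, run from SQL*Plus against any small test table, for example the table t created at the top of this page (the plan output will vary with version and data):
    explain plan for
    select * from t where a = 'A00';
    select * from table(dbms_xplan.display);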

  • What's the best way to write freehand with InDesign?

    I have a Wacom and want to place some handwriting on my document - what's the best way to do this?

    Try the pen or pencil tools or do it in Photoshop and place it.
    Bob

  • What is the best way to write management pack modules?

    I have written many modules using PowerShell scripts, but when I deploy the management pack on SCOM it throws many errors saying the PowerShell script was dropped due to timeout.
    My MP has a lot of PowerShell scripts that get data from a service; the scripts execute for each and every instance, and the MP has around 265 instances.
    How can I improve the scripts?
    Do I need to use some other scripting language, like JavaScript or VBScript, in the management pack to execute different modules?
    What is the best practice to write modules?
    I have even used cookdown for multi-instance data gathering.
    Thanks & Regards, Suresh Gaddam

  • What is the best way to write 10 channels of data each sampled at 4kHz to file?

    Hi everyone,
    I have developed a VI with about 8 AI channels and 2 AO channels... The VI uses a number of parallel while loops to acquire, process, and display continuous data... All data are read at 400 points per loop iteration and all synchronously sampled at 4kHz...
    My question is: which is the best way of writing the data to file, the "Write Measurement To File.vi" or the low-level "open/create file" and "close file" functions? From my understanding there are limitations with both approaches, which I have outlined below...
    The "Write Measurement To File.vi" is simple to use and closes the file after each iteration, so if the program crashes not all data would necessarily be lost; however, the fact that it closes and opens the file after each iteration consumes the processor and takes time... This may cause lags or data to be lost, which I absolutely do not want...
    The low-level "open/create file" and "close file" functions involve a bit more coding, but do not require the file to be closed/opened after each iteration, so processor consumption is reduced and the associated lag due to continuous open/close operations will not occur... However, if the program crashes while data is being acquired, ALL data in the buffer yet to be written will be lost... This is risky to me...
    Does anyone have any comments or suggestions about which way I should go?... At the end of the day, I want to be able to start/stop the write-to-file process within a running while loop... To do this, can the open/create file and close file functions even be used (as they will need to be inside a while loop)?
    I think I am OK with the coding... Just some help to clarify which direction I should go and the pros and cons of each...
    Regards,
    Jack
    Attachments:
    TMS [PXI] FINAL DONE.vi ‏338 KB

    One thing you have not mentioned is how you are consuming the data after you save it.  Your solution should be compatible with whatever software you are using at both ends.
    Your data rate (40kS/s) is relatively slow.  You can achieve it using just about any format from ASCII, to raw binary and TDMS, provided you keep your file open and close operations out of the write loop.  I would recommend a producer/consumer architecture to decouple the data collection from the data writing.  This may not be necessary at the low rates you are using, but it is good practice and would enable you to scale to hardware limited speeds.
    TDMS was designed for logging and is a safe format (<fullDisclosure> I am a National Instruments employee </fullDisclosure> ).  If you are worried about power failures, you should flush it after every write operation, since TDMS can buffer data and write it in larger chunks to give better performance and smaller file sizes.  This will make it slower, but should not be an issue at your write speeds.  Make sure you read up on the use of TDMS and how and when it buffers data so you can make sure your implementation does what you would like it to do.
    If you have further questions, let us know.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Best way to write sql

    hi,
    i have a table named tbl_a
    tbl_a
    fields       values
    a_name       a     b
    a_desc       s     s
    a_amt        1     -5
    a_amt_in
    now i want to merge these values into a table called tbl_b which has the same structure
    the logic required is:
    if the value of a_amt is negative then insert that value into a_amt_in
    is it possible to write this in a single insert..
    or what logic would be best
    regards
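    One possible single INSERT ... SELECT for this, assuming the rule is "negative amounts go to a_amt_in instead of a_amt" (only a sketch, since the full column lists of tbl_a and tbl_b are not shown in the post):
    insert into tbl_b (a_name, a_desc, a_amt, a_amt_in)
    select a_name,
           a_desc,
           case when a_amt >= 0 then a_amt end,
           case when a_amt <  0 then a_amt end
    from   tbl_a;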

    it's giving F/WG error
    ORA-00909: invalid number of arguments
    select decode(sign(GOA_EX_IN,-1,GOA_EX_OUT,GOA_EX_IN) ) from EA_DSR_T WHERE TATA_IN IS NULL
    update t_ea set goa_ex_in=(select decode(sign(GOA_EX_IN,-1,GOA_EX_OUT,GOA_EX_IN) ) from EA_DSR_T WHERE TATA_IN IS NULL )
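    The ORA-00909 comes from the misplaced parentheses: SIGN takes a single argument, so the DECODE arguments must sit outside the SIGN call. A corrected form of the expression (same columns as above) would be:
    select decode(sign(goa_ex_in), -1, goa_ex_out, goa_ex_in)
    from   ea_dsr_t
    where  tata_in is null;
    The UPDATE needs the same change, and the subquery would normally also need to be correlated so it returns one row per row being updated.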

  • What is the best way to write a math book?

    Should I use a graphics tablet such as a Wacom, a smartpen, etc., or should I type it using a program like MathType?
    thanks for replies.

    should i type it using a program like math type.
    Yes.
    http://m10lmac.blogspot.com/2008/12/typing-equations-and-formulas.html
    But for serious math publications I think LaTeX is often used.

  • Best way to write SELECT statement

    Hi,
    I am selecting fields from one table, and need to use two fields on that table to look up additional fields in two other tables.
    I do not want to use a VIEW to do this. 
    I need to keep all records in the original selection, yet I've been told that it's not good practice to use LEFT OUTER joins.  What I really need to do is multiple LEFT OUTER joins.
    What is the best way to write this?  Please reply with actual code.
    I could use 3 internal tables, where the second 2 use "FOR ALL ENTRIES" to obtain the additional data.  But then how do I append the 2 internal tables back to the first?  I've been told it's bad practice to use nested loops as well.
    Thanks.

    Hi,
    in your case you have 2 internal tables and need to update one internal table from the other.
    do the following steps:
    *get the records from tables
    sort: itab1 by keyfield,  "Sorting by the key is very important
          itab2 by keyfield.  "Same key that is used in the READ below
    loop at itab1 into wa_tab1.
      read table itab2 into wa_tab2     "This sets sy-tabix
           with key keyfield = wa_tab1-keyfield
           binary search.
      if sy-subrc = 0.              "Enter the inner loop only when a match exists
        v_kna1_index = sy-tabix.
        loop at itab2 into wa_tab2 from v_kna1_index. "Avoiding a WHERE clause
          if wa_tab2-keyfield <> wa_tab1-keyfield.  "Exit once the key changes
            exit.
          endif.
    ****** Your actual logic for the matching rows ******
        endloop. "itab2 loop
      endif.
    endloop.  "itab1 loop
    Refer the link also you can get idea about the Parallel Cursor - Loop Processing.
    http://wiki.sdn.sap.com/wiki/display/Snippets/CopyofABAPCodeforParallelCursor-Loop+Processing
    Regards,
    Dhina..

  • Best way to write an Wrapper class around a POJO

    Hi guys,
    What is the best way to write an Wrapper around a Hibernate POJO, given the latest 2.2 possibilities? The goal is, of course, to map 'regular' Java Bean properties to JavaFX 2 Properties, so that they can be used in GUI.
    Thanks!

    what about this:
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.property.StringProperty;

    public class PersonPropertyWrapper {

         private StringProperty firstName;
         private StringProperty lastName;
         private Person _person;

         public PersonPropertyWrapper(Person person) {
              super();
              this._person = person;
              // push changes made through the JavaFX properties back into the POJO
              firstName = new SimpleStringProperty(_person.getFirstName()) {
                   @Override
                   protected void invalidated() {
                        _person.setFirstName(getValue());
                   }
              };
              lastName = new SimpleStringProperty(_person.getLastName()) {
                   @Override
                   protected void invalidated() {
                        _person.setLastName(getValue());
                   }
              };
         }

         public StringProperty firstNameProperty() {
              return firstName;
         }

         public StringProperty lastNameProperty() {
              return lastName;
         }

         public static class Person {
              private String firstName;
              private String lastName;

              public String getFirstName() {
                   return firstName;
              }

              public void setFirstName(String firstName) {
                   this.firstName = firstName;
              }

              public String getLastName() {
                   return lastName;
              }

              public void setLastName(String lastName) {
                   this.lastName = lastName;
              }
         }

         public static void main(String[] args) {
              Person p = new Person();
              p.setFirstName("Jim");
              p.setLastName("Green");
              PersonPropertyWrapper wrapper = new PersonPropertyWrapper(p);
              wrapper.firstNameProperty().setValue("Jerry");
              System.out.println(p.getFirstName());  // prints "Jerry"
         }
    }
    Edited by: 906680 on 2012-7-27, 10:56 AM

  • What is the best way of returning group-by sql results in Toplink?

    I have many-to-many relationship between Employee and Project; so,
    a Employee can have many Projects, and a Project can be owned by many Employees.
    I have three tables in the database:
    Employee(id int, name varchar(32)),
    Project(id int, name varchar(32)), and
    Employee_Project(employee_id int, project_id int), which is the join-table between Employee and Project.
    Now, I want to find out for each employee, how many projects does the employee has.
    The sql query that achieves what I want would look like this:
    select e.id, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id
    Just for information, currently I am using a named ReadAllQuery and I write my own sql in
    the Workbench rather than using the ExpressionBuilder.
    Now, my two questions are :
    1. Since there is a "group by e.id" on the query, only e.id can appear in the select clause.
    This prevent me from returning the full Employee pojo using ReadAllQuery.
    I can change the query to a nested query like this
    select e.id, e.name, emp.cnt as numProjects
    from employee e,
         (select e_inner.id, count(*) as cnt
          from employee e_inner, employee_project ep_inner
          where e_inner.id = ep_inner.employee_id
          group by e_inner.id) emp
    where e.id = emp.id
    but, I don't like the complication of having extra join because of the nested query. Is there a
    better way of doing something like this?
    2. The second question is what is the best way of returning the count(*) or the numProjects.
    What I did right now is that I have a ReadAllQuery that returns a List<Employee>; then for
    each returned Employee pojo, I call a method getNumProjects() to get the count(*) information.
    I had an extra column "numProjects" in the Employee table and in the Employee descriptor, and
    I set this attribute to be "ReadOnly" on the Workbench; (the value for this dummy "numProjects"
    column in the database is always 0). So far this works ok. However, since the numProjects is
    transient, I need to set the query to refreshIdentityMapResult() or otherwise the Employee object
    in the cache could contain stale numProjects information. What I worry is that refreshIdentityMapResult()
    will cause the query to always hit the database and beat the purpose of having a cache. Also, if
    there are multiple concurrent queries to the database, I worry that there will be a race condition
    of updating this transient "numProjects" attribute. What are the better way of returning this kind
    of transient information such as count(*)? Can I have the query to return something like a tuple
    containing the Employee pojo and an int for the count(*), rather than just a Employee pojo with the
    transient int inside the pojo? Please advise.
    I greatly appreciate any help.
    Thanks,
    Frans

    No I don't want to modify the set of attributes after TopLink returns it to me. But I don't
    quite understand why this matters?
    I understand that I can use ReportQuery to return all the Employee's attributes plus the int count(*)
    and then I can iterate through the list of ReportQueryResult to construct the Employee pojo myself.
    I was hesitant of doing this because I think there will be a performance cost of not being able to
    use lazy fetching. For example, in the case of large result sets and the client only needs a few of them,
    if we use the above aproach, we need to iterate through all of them and wastefully create all the Employee
    pojos. On the other hand, if we let Toplink directly return a list of Employee pojo, then we can tell
    Toplink to use ScrollableCursor and to fetch only the first several rows. Please advise.
    Thanks.

  • Best way to write Pl/Sql

    Dear all,
    Can someone say the best way writing below stored proc:
    procedure missing_authorized_services is
       v_truncate_sql varchar2(200);
       v_sql          varchar2(2000);
    BEGIN
       v_truncate_sql := 'truncate table missing_authorized_services';
       execute immediate v_truncate_sql;
       commit;
       v_sql := 'INSERT into missing_authorized_services
                 select distinct trim(service_group_cd) as service_group_cd,
                                 trim(service_cd)       as service_cd
                 from   stage_1_mg_service_request
                 where  (service_group_cd, service_cd) not in
                        ( select distinct service_group_cd, service_cd
                          from   stage_3_servcd_servgrp_dim )';
       execute immediate v_sql;
       commit;
    END missing_authorized_services;
    /* I am doing select from table and then try to Insert into a different table the result set */
    Please guide,
    Thanks
    J

    Hi,
    The best way to write PL/SQL (or any code) is in very small increments.
    Start with a very simple procedure that does something (anything), just enough to test that it's working.
    Add lots of output statements so you can see what the procedure is doing. Remember to remove them after testing is finished.
    For example:
    CREATE OR REPLACE procedure missing_authorized_services IS
            v_truncate_sql  VARCHAR2 (200);
    BEGIN
         v_truncate_sql := 'truncate table missing_authorized_services';
         dbms_output.put_line (  v_truncate_sql
                        || ' = v_truncate_sql inside missing_authorized_services'
                        );
    END missing_authorized_services;
    If you get any errors (for example, ORA-00955, because you're trying to give the same name to a procedure that you're already using for a table), then fix the error and try again.
    When it works perfectly, then add another baby step. For example, you might add the one line
         EXECUTE IMMEDIATE v_truncate_sql;
    and test again.
    Don't use dynamic SQL (EXECUTE IMMEDIATE) unless you have to.
    Is there any reason to use dynamic SQL for the INSERT?
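    For what it's worth, here is a sketch of the procedure with only the TRUNCATE left dynamic (TRUNCATE is DDL, so it still needs EXECUTE IMMEDIATE; the INSERT does not). The procedure is renamed to a made-up name here because, as noted above, a procedure cannot share its name with the table it truncates (ORA-00955):
    CREATE OR REPLACE PROCEDURE load_missing_authorized_services IS
    BEGIN
         -- TRUNCATE is DDL, so dynamic SQL is still needed here
         EXECUTE IMMEDIATE 'truncate table missing_authorized_services';
         -- the INSERT can be plain static SQL
         INSERT INTO missing_authorized_services
         SELECT DISTINCT TRIM (service_group_cd), TRIM (service_cd)
         FROM   stage_1_mg_service_request
         WHERE  (service_group_cd, service_cd) NOT IN
                ( SELECT service_group_cd, service_cd
                  FROM   stage_3_servcd_servgrp_dim );
         COMMIT;
    END load_missing_authorized_services;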

  • What is the best way to deal with memory leak issue in sql server 2008 R2

    What is the best way to deal with memory leak issue in sql server 2008 R2.

    What is the best way to deal with memory leak issue in sql server 2008 R2.
    I have heard of memory leaks in the OS, and those too are because of some external application or rogue drivers; SQL Server 2008 R2, if patched to the latest SP and CU (if required), does not leak memory.
    Are you of the opinion that, because SQL Server takes a lot of memory and then does not release it, this is a memory leak? If so, this is not a memory leak but default behavior. You need to set a proper value for max server memory in sp_configure to limit buffer pool usage. However, SQL Server can take more memory from outside the buffer pool if linked servers, CLR, extended stored procedures or XML are heavily utilized.
    Is there any specific issue you are facing?
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
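    As a rough illustration of the max server memory suggestion above (the 8192 MB figure is only a placeholder; size it for your own server and workload):
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;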

  • What is the best way to Optimize a SQL query : call a function or do a join?

    Hi, I want to know what is the best way to optimize a SQL query, call a function inside the SELECT statement or do a simple join?

    Hi,
    If you're even considering a join, then it will probably be faster.  As Justin said, it depends on lots of factors.
    A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
    You might choose to have a user-defined function even though you could get the same result with a join.  That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case.
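    To make the trade-off concrete, here is a sketch with made-up tables (emp, dept) and a hypothetical scalar function get_dept_name. Both statements return the same data, but the join lets the optimizer work on one set-based statement instead of calling the function once per row:
    -- function called for every row returned
    select e.empno, get_dept_name(e.deptno) as dname
    from   emp e;
    -- equivalent join
    select e.empno, d.dname
    from   emp e, dept d
    where  e.deptno = d.deptno;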

  • Best way to write this sql ?

    Please let me know best way to write this SQL.
    select col1, count(*)
    from TableA
    group by col1
    having count(*) =
    (select max(vals)
     from
     (select col1, count(*) as vals
      from TableA
      group by col1
      having count(*) > 1
     )
    )

    post EXPLAIN PLAN
    SELECT col1,
           COUNT(*)
    FROM   tablea
    GROUP  BY col1
    HAVING COUNT(*) = (SELECT MAX(vals)
                       FROM   (SELECT col1,
                                      COUNT(*) AS vals
                               FROM   tablea
                               GROUP  BY col1
                               HAVING COUNT(*) > 1))
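    Depending on the plan, an analytic rewrite can avoid scanning tablea twice; this is only a sketch and should be verified against your data and its explain plan:
    SELECT col1,
           cnt
    FROM   (SELECT col1,
                   COUNT(*) AS cnt,
                   RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
            FROM   tablea
            GROUP  BY col1)
    WHERE  rnk = 1
    AND    cnt > 1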

  • What is the best way to match back 3rd party vendor data to our SQL Server Database?

    So we have this 3rd party data that we need to match back to our database. We have determined that the "ID" column that the 3rd party is sending us back data is a concatenated key of our member's SSN, Gender, and CCYYMMDD Birthdate. In 90% of the
    cases, we can match back on this. However, the other 10% we have to try a couple of different ways...using our Member #, using what is called a HFCA #.
    We are talking about 10s and 20s of data here...NOT thousands.
    What is the best way to handle this via SSIS? A SQL Server Stored Procedure to cursor through the 3rd party data or multiple INSERT-SELECT statements trying to marry back the data? My thought process was to cursor through each record, try and match on our
    90% match, and then determine if we have a match or not, and then if we do not, then try our other means. Should I SELECT 1 to see which matching criteria to go with? So in other words, for the first match...
    IF EXISTS(SELECT 1 FROM TableName WHERE ColumnName1 = .....) BEGIN....ELSE...Blah Blah Blah
    or simply continue doing INSERT-SELECTS...
    I guess I am asking about the efficiency of using a cursor within a SQL Server Stored Procedure here.
    Thanks for your review and am hopeful for a reply.

    You are asking a SSIS question but posted in tsql - which is it?  But before you go further, which matching logic should have priority?  Member # or the SSN/gender/birthdate? Note that the priority does not depend on matching success percentage. 
    In other words, you may prefer to match on member # first (even though it has a lower success ratio but a higher confidence ratio), followed by ssn..., followed by whatever. 
    In any case, this sounds much more like a SSIS logic issue.  Your questions regarding cursors and stored procedures seem premature at this point. OTOH it may depend on what you are actually trying to accomplish.  
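    If it helps, at these volumes a set-based fallback can replace the cursor entirely. A sketch with hypothetical table and column names (VendorData, Members, SsnGenderDobKey, MemberNumber):
    SELECT v.ID,
           COALESCE(m1.MemberKey, m2.MemberKey) AS MatchedMemberKey
    FROM   VendorData v
           LEFT JOIN Members m1 ON m1.SsnGenderDobKey = v.ID            -- primary match
           LEFT JOIN Members m2 ON m2.MemberNumber    = v.MemberNumber; -- fallback match
    COALESCE applies whichever match is listed first, so the priority question raised above still has to be decided before wiring this into the package.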
