Query suggestions needed
Please help with this complex query; I have been working on a solution for hours now. Here is a simplified version:
I have 3 fields in tableA: field1, field2, field3
I want to return all those records that have both field2 and field3 the same but field1 different.
For example:
Field1 Field2 Field3
======================
Cars Blue 6 liter
Cars Blue 6 liter
Van Blue 6 liter
Cars Green 5 liter
Cars Green 5 liter
I need the first 3 records returned because field2 and field3 are the same but field1 is different.
Anyone have any ideas?
I think the second SELECT is unnecessary. In other words, the following should be equivalent:
SELECT
TABLEA.*
FROM
TABLEA
WHERE
(FIELD2, FIELD3)
IN
(
SELECT
FIELD2,
FIELD3
FROM
(
SELECT DISTINCT
FIELD1,
FIELD2,
FIELD3
FROM
TABLEA
)
GROUP BY
FIELD2,
FIELD3
HAVING
COUNT(*) > 1
)
Assuming FIELD1 is NOT NULL, it is also possible to eliminate the SELECT DISTINCT by using COUNT(DISTINCT ...), as in something like:
SELECT
TABLEA.*
FROM
TABLEA
WHERE
(FIELD2, FIELD3)
IN
(
SELECT
FIELD2,
FIELD3
FROM
TABLEA
GROUP BY
FIELD2,
FIELD3
HAVING
COUNT(DISTINCT FIELD1) > 1
)
No doubt there are yet other solutions using analytics.
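For what it's worth, the COUNT(DISTINCT FIELD1) approach is easy to sanity-check against the sample data above. Here is a small sketch using SQLite through Python (the row-value `(FIELD2, FIELD3) IN (...)` syntax is Oracle-flavored, so the subquery is joined back instead; table and column names follow the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (field1 TEXT, field2 TEXT, field3 TEXT)")
conn.executemany(
    "INSERT INTO tableA VALUES (?, ?, ?)",
    [("Cars", "Blue", "6 liter"),
     ("Cars", "Blue", "6 liter"),
     ("Van", "Blue", "6 liter"),
     ("Cars", "Green", "5 liter"),
     ("Cars", "Green", "5 liter")],
)

# (field2, field3) pairs that occur with more than one distinct field1
rows = conn.execute("""
    SELECT a.*
    FROM tableA a
    JOIN (SELECT field2, field3
          FROM tableA
          GROUP BY field2, field3
          HAVING COUNT(DISTINCT field1) > 1) d
      ON a.field2 = d.field2 AND a.field3 = d.field3
""").fetchall()
print(rows)  # the three 'Blue / 6 liter' rows
```

The 'Green / 5 liter' pair is excluded because it only ever appears with field1 = 'Cars'.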
Similar Messages
-
Complex Query which needs tuning
Hello :
I have a complex query that needs to be tuned. I have little experience in tuning SQL, so I am asking for your help.
The Query is as given below:
Database version 11g
SELECT DISTINCT P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR,
P.PRODUCT_SERIES, P.PRODUCT_CATEGORY AS Category1, SO.REGION_CODE,
SO.STORE_CODE, S.Store_Name, SOL.PRODUCT_CODE, PRI.REPLENISHMENT_TYPE,
PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE,
PRI.INVOICE_COST, SOL.FIFO_COST,
SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '' AS FNAME, '' AS LNAME,
SOL.PRICE_EXCEPTION_CODE, SOL.AS_IS,
SOL.STATUS_DATE,
Sum(SOL.QUANTITY) AS SumOfQUANTITY,
Sum(SOL.EXTENDED_PRICE) AS SumOfEXTENDED_PRICE
--Format([SALES_ORDER].[STATUS_DATE],"mmm-yy") AS [Month]
FROM PRODUCT P,
PRODUCT_MAJORS PM,
SALES_ORDER_LINE SOL,
STORE S,
SALES_ORDER SO,
SALES_ORDER_SPLITS SOS,
PRODUCT_REGIONAL_INFO PRI,
REGION_MAP R
WHERE P.product_major = PM.PRODUCT_MAJOR
and SOL.PRODUCT_CODE = P.PRODUCT_CODE
and SO.STORE_CODE = S.STORE_CODE
AND SO.REGION_CODE = S.REGION_CODE
AND SOL.REGION_CODE = SO.REGION_CODE
AND SOL.DOCUMENT_NUM = SO.DOCUMENT_NUM
AND SOL.DELIVERY_SEQUENCE_NUM = SO.DELIVERY_SEQUENCE_NUM
AND SOL.STATUS_CODE = SO.STATUS_CODE
AND SOL.STATUS_DATE = SO.STATUS_DATE
AND SO.REGION_CODE = SOS.REGION_CODE
AND SO.DOCUMENT_NUM = SOS.DOCUMENT_NUM
AND SOL.PRODUCT_CODE = PRI.PRODUCT_CODE
AND PRI.REGION_CODE = R.CORP_REGION_CODE
AND SO.REGION_CODE = R.DS_REGION_CODE
AND P.PRODUCT_MAJOR In ('STEREO','TELEVISION','VIDEO')
AND SOL.STATUS_CODE = 'D'
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
AND SO.STORE_CODE NOT IN
('10','20','30','40','70','91','95','93','94','96','97','98','99',
'9V','9W','9X','9Y','9Z','8Z',
'8Y','92','CZ','FR','FS','FT','FZ','FY','FX','FW','FV','GZ','GY','GU','GW','GV','GX')
GROUP BY
P.RESPONSIBILITY, P.PRODUCT_MAJOR, P.PRODUCT_MINOR, P.PRODUCT_SERIES, P.PRODUCT_CATEGORY,
SO.REGION_CODE, SO.STORE_CODE, /*S.Short Name, */
S.Store_Name, SOL.PRODUCT_CODE,
PRI.REPLENISHMENT_TYPE, PRI.SUPPLIER_CODE,
SOL.SOLD_WITH_NIC, SOL.SUGGESTED_PRICE, PRI.INVOICE_COST,
SOL.FIFO_COST, SO.ORDER_TYPE_CODE, SOL.DOCUMENT_NUM,
SOS.SLSP_CD, '', '', SOL.PRICE_EXCEPTION_CODE,
SOL.AS_IS, SOL.STATUS_DATE
Explain Plan:
SELECT STATEMENT, GOAL = ALL_ROWS Cost=583 Cardinality=1 Bytes=253
HASH GROUP BY Cost=583 Cardinality=1 Bytes=253
FILTER
NESTED LOOPS Cost=583 Cardinality=1 Bytes=253
HASH JOIN OUTER Cost=582 Cardinality=1 Bytes=234
NESTED LOOPS
NESTED LOOPS Cost=571 Cardinality=1 Bytes=229
NESTED LOOPS Cost=571 Cardinality=1 Bytes=207
NESTED LOOPS Cost=569 Cardinality=2 Bytes=368
NESTED LOOPS Cost=568 Cardinality=2 Bytes=360
NESTED LOOPS Cost=556 Cardinality=3 Bytes=435
NESTED LOOPS Cost=178 Cardinality=4 Bytes=336
NESTED LOOPS Cost=7 Cardinality=1 Bytes=49
HASH JOIN Cost=7 Cardinality=1 Bytes=39
VIEW Object owner=CORP Object name=index$_join$_015 Cost=2 Cardinality=3 Bytes=57
HASH JOIN
INLIST ITERATOR
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=3 Bytes=57
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMJR_PR_FK_I Cost=1 Cardinality=3 Bytes=57
VIEW Object owner=CORP Object name=index$_join$_016 Cost=4 Cardinality=37 Bytes=740
HASH JOIN
INLIST ITERATOR
INDEX RANGE SCAN Object owner=CORP Object name=PRDMNR1 Cost=3 Cardinality=37 Bytes=740
INDEX FAST FULL SCAN Object owner=CORP Object name=PRDMNR_PK Cost=4 Cardinality=37 Bytes=740
INDEX UNIQUE SCAN Object owner=CORP Object name=PRODMJR_PK Cost=0 Cardinality=1 Bytes=10
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCTS Cost=171 Cardinality=480 Bytes=16800
INDEX RANGE SCAN Object owner=CORP Object name=PRD2 Cost=3 Cardinality=681
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER_LINE Cost=556 Cardinality=1 Bytes=145
BITMAP CONVERSION TO ROWIDS
BITMAP INDEX SINGLE VALUE Object owner=DS Object name=SOL2
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=SALES_ORDER Cost=4 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=DS Object name=SO1 Cost=3 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=DS Object name=REGION_MAP Cost=1 Cardinality=1 Bytes=4
INDEX RANGE SCAN Object owner=DS Object name=REGCD Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=PRODUCT_REGIONAL_INFO Cost=2 Cardinality=1 Bytes=23
INDEX UNIQUE SCAN Object owner=CORP Object name=PRDRI_PK Cost=1 Cardinality=1
INDEX UNIQUE SCAN Object owner=CORP Object name=BI_STORE_INFO_PK Cost=0 Cardinality=1
MAT_VIEW ACCESS BY INDEX ROWID Object owner=CORP Object name=BI_STORE_INFO Cost=1 Cardinality=1 Bytes=22
VIEW Object owner=DS cost=11 Cardinality=342 Bytes=1710
HASH JOIN Cost=11 Cardinality=342 Bytes=7866
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_CORP Cost=5 Cardinality=429 Bytes=3003
NESTED LOOPS Cost=5 Cardinality=478 Bytes=7648
MAT_VIEW ACCESS FULL Object owner=CORP Object name=STORE_GROUP Cost=5 Cardinality=478 Bytes=5258
INDEX UNIQUE SCAN Object owner=CORP Object name=STORE_REGIONAL_INFO_PK Cost=0 Cardinality=1 Bytes=5
INDEX RANGE SCAN Object owner=DS Object name=SOS_PK Cost=2 Cardinality=1 Bytes=19
Regards,
BMP
First thing that I notice in this query is that you are using DISTINCT as well as GROUP BY.
Your GROUP BY will always give you distinct results, so why do you need the DISTINCT?
For example
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT distinct col1,col2,sum(value) from t
group by col1,col2
is always the same as
WITH t AS
(SELECT 'clm1' col1, 'contract1' col2,10 value
FROM DUAL
UNION ALL
SELECT 'clm1' , 'contract1' ,10 value
FROM DUAL
UNION ALL
SELECT 'clm1', 'contract2',10
FROM DUAL
UNION ALL
SELECT 'clm2', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm3', 'contract1',10
FROM DUAL
UNION ALL
SELECT 'clm4', 'contract2',10
FROM DUAL)
SELECT col1,col2,sum(value) from t
group by col1,col2
And also:
AND SOL.STATUS_DATE BETWEEN '01-JUN-09' AND '30-JUN-09'
It would be best to use TO_DATE when hard-coding your dates.
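The claim that DISTINCT on top of GROUP BY is redundant is easy to verify mechanically. A quick sketch (using SQLite via Python rather than Oracle; the data matches the WITH block above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
data_cte = """
    WITH t(col1, col2, value) AS (
        VALUES ('clm1','contract1',10), ('clm1','contract1',10),
               ('clm1','contract2',10), ('clm2','contract1',10),
               ('clm3','contract1',10), ('clm4','contract2',10)
    )
"""
with_distinct = conn.execute(
    data_cte + "SELECT DISTINCT col1, col2, SUM(value) FROM t GROUP BY col1, col2"
).fetchall()
without_distinct = conn.execute(
    data_cte + "SELECT col1, col2, SUM(value) FROM t GROUP BY col1, col2"
).fetchall()

# GROUP BY keys are already unique per output row, so DISTINCT changes nothing
print(sorted(with_distinct) == sorted(without_distinct))  # True
```

The DISTINCT only costs the optimizer an extra (useless) duplicate-elimination step.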
Edited by: user5495111 on Aug 6, 2009 1:32 PM -
Hello everybody.
I am currently working with Sharepoint Online search and am trying to find information about managing Query suggestions (CRUD operations).
Would anybody have any information as to how I could do this in CSOM or PowerShell? I truly am having a hard time with this.
Hi,
Please check below. This should help you. You can use the REST APIs:
Retrieving query suggestions using the Search REST service
https://msdn.microsoft.com/en-us/library/office/dn194079.aspx
Please remember to click 'Mark as Answer' on the answer if it helps you -
How to enable query suggestions
How can the query suggestions be enabled in the Firefox address bar?
Go to the Privacy tab (i.e. Firefox > Options > Privacy) and for the Location Bar, choose "History and Bookmarks" (using the drop-down arrow).
Next, go to the Advanced tab and under the General sub-tab, check the box that reads "Search for text when I start typing". Save and restart your browser.
Hope that works. -
Google's preference "Query Suggestions"
I've got two machines, one running 10.4.11 and the current version of Safari
one running 10.5.5 and the current version of Safari
With the 10.4 machine and Safari, I can go to the Google home page, then "Preferences" when logged in, and I have a category "Query Suggestions"...
On the 10.5 machine and Safari, that category is completely missing. It is logged in to exactly the same Google account, and Safari is exactly the same version...
Any ideas?
JonK.. wrote:
that means there is something wrong with my system if you can see it??? or something wrong with a setting...
That's why caching came to mind as the most likely culprit.
The only other thing I can think to suggest is deleting any Google cookies and seeing if that makes a difference. I don't think that's likely to be the answer, but it can't hurt to try. -
Improve Query Suggestions Load time
How can I improve the load time for pre-query suggestions on the search home page when users start typing?
I noticed it was slow to load when I hit it the first time in the morning, so I tried to warm it up by adding
"http://SiteName/_api/search/suggest?querytext='sharepoint'"
to my warm-up script, but even during the daytime, after a few hits, it is sometimes still slow. Any reason?
Do you think moving the Query Component to a WFE would help here?
Please let me know. Thanks.
Hi,
At a high level, Query Suggestions work like this:
• You issue a query within a Search Center site and get results..
• When you hover over or click a result.. this gets stored as a “RecordPageClick” method and will be stored in the Web Applications W3WP process…
• Every five minutes (I believe) this W3WP will flush these Recorded Page Clicks and pass them over to the Search Proxy…
• This will then store them in a table in the SSA ( Search Service Application ) Admin DB
• By default, once a day, there is a timer job, Prepare Query Suggestions, that will run
• It does a check to see if the same term has been queried for and clicked at least 6 times and then will move them to another table in the same DB..
• If there are successful queries\clicks > 6 times for a term, and they move to the appropriate table, then when you start to type a word, like “share”
• This will fire a “method” over to the Search proxy servers called “GetQuerySuggestions” and will check that Admin DB to see if they have any matches..
• If so, then it will show up in your Search Box as like “SharePoint” for a suggestion…
Other components involved with the Query Suggestions:
Timer Jobs:
Prepare query suggestions: runs daily
Query Classification Dictionary Update for Search Application "Search Service Application": runs every few minutes
Query Logging: runs every few minutes
Databases (in the Search Admin DB):
MSSQLogQuerySuggestion: gets cleaned up when we run the timer job; looks like this may be where the hits for suggestions are stored
MSSQLogQueryString: info on the query string
MSSQLogSearchCounts: info on the click counts
So the issue might be related to the timer job, the database, or the connection between the SharePoint server and the SQL server. There is a similar case caused by DistributedCache.
If you move the query component onto another server, this may improve the Search-service-related processing; however, it may affect performance due to networking.
Please collect verbose ULS log per steps below:
Enable Verbose logging via Central Admin > Monitoring > Reporting > Configure diagnostic logging (you can simply set all the categories to the Verbose level and I can filter myself).
Reproduce the issue (try to remove SSRS).
Record the time and get a capture of the error message (including the correlation ID if there is one). Collect the log file, which is by default located in the folder: <C:\Program files\common files\Microsoft Shared\web server extensions\15\LOGS>.
Stop verbose logging.
Regards,
Rebecca Tu
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected]. -
Hi everyone,
I need some help from you because before I update my View Object I need to know one value, using a query like this: SELECT cp FROM cpostais WHERE cp4=1234 AND cp3=123. I need to save the result of the query in a variable (let's call it CP) so I can use it to update my View Object. Something like this: currentRow.setAttribute("CPostal", CP);
I'm using JDeveloper 10.1.2, struts and UIX pages.
Can anyone tell me how can I do this? I need an example or a tutorial.
This is really important, can anyone help me?
Thanks,
Atena
Message was edited by:
Atena
I tried to use something like this:
ViewObject thirdView = daContext.getBindingContext().getDefaultDataControl().getApplicationModule().findViewObject("S2CodigosPostaisView1");
thirdView.setWhereClauseParam(0, currentRow1.getAttribute("Localidade"));
thirdView.setWhereClauseParam(1, currentRow1.getAttribute("Cp"));
thirdView.setWhereClauseParam(2, currentRow1.getAttribute("Ext"));
thirdView.executeQuery();
Row currentRow3 = thirdView.getCurrentRow();
currentRow3.getAttribute("Id");
But this is not working. :(
Can anyone help me with this????
Thanks,
Atena
Message was edited by:
Atena -
Suggestions needed please!
I just bought an iPad 3 and when I connected it to my computer it said "Not Charging". Also, I am not able to sync with iTunes. Suggestions needed please!
Most USB ports do not provide enough power to charge your iPad; plug it into a wall outlet to charge. If you leave it plugged into the computer and the computer goes to sleep, it can actually drain the battery.
http://help.apple.com/itunes/devices/ipad/en/index.html
IOS: Syncing with iTunes
http://support.apple.com/kb/HT1386
Apple - Support - iPad - Syncing
http://www.apple.com/support/ipad/syncing/ -
Query help needed for querybuilder to use with lcm cli
Hi,
I had set up several queries to run with the lcm cli in order to back up personal folders, inboxes, etc. to lcmbiar files to use as backups. I have seen a few posts that are similar, but I have a specific question/concern.
I just recently had to reference one of these backups only to find it was incomplete. Does the query used by the lcm cli also only pull the first 1000 rows? Is there a way to change this limit somewhere?
Also, since when importing this lcmbiar file for something 'generic' like 'all personal folders', pulls in WAY too much stuff, is there a better way to limit this? I am open to suggestions, but it would almost be better if I could create individual lcmbiar output files on a per user basis. This way, when/if I need to restore someone's personal folder contents, for example, I could find them by username and import just that lcmbiar file, as opposed to all 3000 of our users. I am not quite sure how to accomplish this...
Currently, with my limited windows scripting knowledge, I have set up a bat script to run each morning, that creates a 'runtime' properties file from a template, such that the lcmbiar file gets named uniquely for that day and its content. Then I call the lcm_cli using the proper command. The query within the properties file is currently very straightforward - select * from CI_INFOOBJECTS WHERE SI_ANCESTOR = 18.
To do what I want to do...
1) I'd first need a current list of usernames in a text file, that could be read (?) in and parsed to single out each user (remember we are talking about 3000) - not sure the best way to get this.
2) Then instead of just updating the lcmbiar file name with a unique name as I do currently, I would also update the query (which would be different altogether): SELECT * from CI_INFOOBJECTS where SI_OWNER = '<username>' AND SI_ANCESTOR = 18.
In theory, that would grab everything owned by that user in their personal folder - right? and write it to its own lcmbiar file to a location I specify.
I just think chunking something like this is more effective, and BO has no built-in backup capability that already does this. We are on BO 4.0 SP7 right now and will move to 4.1 SP4 over the summer.
Any thoughts on this would be much appreciated.
thanks,
Missy
Just wanted to pass along that SAP Support pointed me to KBA 1969259 which had some good example queries in it (they were helping me with a concern I had over the lcmbiar file output, not with query design). I was able to tweak one of the sample queries in this KBA to give me more of what I was after...
SELECT TOP 10000 static, relationships, SI_PARENT_FOLDER_CUID, SI_OWNER, SI_PATH FROM CI_INFOOBJECTS,CI_APPOBJECTS,CI_SYSTEMOBJECTS WHERE (DESCENDENTS ("si_name='Folder Hierarchy'","si_name='<username>'"))
This exports inboxes, personal folders, categories, and roles, which is more than I was after, but still necessary to back up... so in a way, it is actually better because I have one lcmbiar file per user that contains all their 'personal' objects.
So between narrowing down my set of users to only those who actually have saved things to their personal folder and now having a query that actually returns what I expect it to return, along with the help below for a job to clean up these excessive amounts of promotion jobs I am now creating... I am all set!
Hopefully this can help someone else too!
Thanks,
missy -
Information Broadcasting & Archiving: Query reports need to be Archived
Hi All,
The requirement is that we need to broadcast query reports in BEx 7.0 and archive them as PDFs.
Please note that this is not data archiving. We need to archive only the query reports.
We need to make sure that the PDF gets generated in the background and that the report selection variant can be used in the background. How can we achieve this?
Alternatively, please suggest useful articles that will help with this.
Thanks in advance.
Archiving data is plausible -
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/asug-biti-03/sap%20bw%20archiving%20and%20data%20aging%20strategies
Not sure why you want to archive the reports as PDF, and at what point? Is it on the application server?
Hope it Helps
Chetan
@CP.. -
Hi
I have a table called Sale
It contains a column called quantity_Available_For_Sale numeric(18,0).
Another table called Withdrawal_Operations contains a column called quantity_withdrawan numeric(18,0).
I need JDBC code that adds the quantity_withdrawan value, when it is inserted into Withdrawal_Operations, to the sum of the values of the quantity_Available_for_Sale field.
This is my trial code:
String sql= "update Sale set Item_code ='"+jTextField7.getText()+"',quantity_Available_for_Sale='"+jTextField10.getText()+"SELECT SUM"+""+quantity_Available_for_Sale+""+" FROM "+'Sale'+"+"'WHERE withdrawal_id='"+jTextField12.getText()+"'";
It generates two errors:
unclosed character literal.
not a statement.
thanks
Edited by: VANPERSIE on May 7, 2011 10:50 PM
Hi,
This is what you currently have, I think. All I have done is some formatting in order to be able to read it.
String sql = "update Sale set Item_code ='" +
jTextField7.getText() +
"',quantity_Available_for_Sale='" +
jTextField10.getText() +
"'SELECT SUM'" +
"" +
"(" +
quantity_Available_for_Sale +
")" +
"" +
" FROM " +
Sale +
"" +
"' WHERE withdrawal_id='" +
jTextField12.getText() +
Statement stmt = con.createStatement(sql);
That approach is wrong for several reasons, some being:
1. Each time you concatenate a string together, it will be a brand new SQL statement for the server to hard parse.
Not only is this inefficient, it pollutes your SQL cache and causes latching, which is a definite resource drain.
Always use bind variables when dealing with databases.
2. You are subject to SQL injection. If your text fields contain SQL or parts of SQL, malicious users could take advantage of that.
3. Your statement is not type safe, and you need to clutter it up with single quotes to turn strings into literals.
With this kind of code you are at a sub-zero level of writing database applications.
A better approach would be to use PreparedStatement as suggested.
In your case something like:
String sql = "update Sale set Item_code =?" +
",quantity_Available_for_Sale=" +
"SELECT SUM" +
"" +
"(" +
quantity_Available_for_Sale +
")" +
"" +
" FROM " +
Sale +
"" +
" WHERE withdrawal_id=?";
PreparedStatement pstmt = con.prepareStatement(sql);
pstmt.setString(1, jTextField10.getText());
pstmt.setString(2, jTextField12.getText());
This should take care of the three points listed above.
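The same bind-variable idea exists in every database API, not just JDBC. A minimal sketch in Python with sqlite3, showing that a parameterized statement treats hostile input as plain data (the table name follows the thread; the hostile string is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (item_code TEXT, withdrawal_id INTEGER)")
conn.execute("INSERT INTO sale VALUES ('OLD', 42)")

# The ? placeholders keep user input out of the SQL text itself,
# exactly like ? in a JDBC PreparedStatement
hostile = "x'; DROP TABLE sale; --"  # attempted injection, stored as a literal value
conn.execute("UPDATE sale SET item_code = ? WHERE withdrawal_id = ?", (hostile, 42))

rows = conn.execute("SELECT item_code FROM sale").fetchall()
print(rows)  # the table survives; the 'attack' is just a string in the column
```

With string concatenation, the same input would have been parsed as SQL.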
In my opinion you are now at level 1 of writing database applications.
You still have some problems. The sql above is not SQL; it is nothing but a Java String.
You will struggle to make it into valid SQL, probably by using a cumbersome trial and error process.
And this is even quite simple SQL.
In your case you seem to be pulling data from the front end, and use this directly in the data access layer.
So, you have no POJO layer (I am not saying that you should have).
This makes the SQL a perfect candidate for what you already mentioned yourself, stored procedures.
Using a stored procedure, you have something like this:
String sql = "{call sales_pck.set_quant_avail(?,?)}";
CallableStatement cstmt = con.prepareCall(sql);
cstmt.setString(1, jTextField10.getText());
cstmt.setString(2, jTextField12.getText());
Looks much more appealing, doesn't it?
- And not much room for SQL syntax errors.
You will of course have to implement that stored procedure. Something many Java developers do not like to do.
Don't know why - to me it smells like technology anxiety.
There is nothing wrong about stored procedures, and nothing old school about them. In your case it even seems like a sensible thing to do.
The first advantage about it, is that you will get to work in a layer (The database) that actually knows SQL.
Here's an attempt to implement your stored procedure, using a PL/SQL package:
create or replace package sales_pck as
procedure set_quant_aval (
p_item_code in sale.item_code%type,
p_with_drawal_id in sale.withdrawal_id%type
);
end sales_pck;
/
create or replace package body sales_pck as
procedure set_quant_aval (
p_item_code in sale.item_code%type,
p_with_drawal_id in sale.withdrawal_id%type
) is
begin
update sale
set item_code = p_item_code,
quantity_available =
select sum (quantity_available_for_sale)
from sale
where withdrawal_id = p_with_drawal_id;
end set_quant_aval;
end sales_pck;
/
Doing this, you will soon realize that you are not quite there.
Your SQL is syntactically wrong (at least in Oracle).
But you are now in the right environment to fix it.
It is probably also semantically wrong (it gives little meaning), and you are probably facing a mutating-table error
(at least in Oracle and other databases supporting read consistency).
Note on syntax errors: you probably mean
update sale
set item_code = p_item_code,
quantity_available =
(select sum (quantity_available_for_sale)
from sale
where withdrawal_id = p_with_drawal_id);
Or:
update sale
set item_code = p_item_code,
quantity_available =
(select sum (quantity_available_for_sale)
from sale)
where withdrawal_id = p_with_drawal_id;
I can't say which; they seem equally wrong.
This is what you should do:
1. Write a correct update statment in a query tool of your choice.
2. Put that statment inside a stored procedure
3. Test that procedure from a query tool
4. Implement the procedure call in your application
5. Test it.
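Step 1 above (write and test the statement in a query tool first) can even be rehearsed outside Oracle. Here is a sketch of the second variant using SQLite via Python; the column names follow the thread, the data is invented, and this only checks the statement's shape, not Oracle semantics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sale (
    item_code TEXT,
    withdrawal_id INTEGER,
    quantity_available_for_sale INTEGER,
    quantity_available INTEGER)""")
conn.executemany(
    "INSERT INTO sale (item_code, withdrawal_id, quantity_available_for_sale) VALUES (?, ?, ?)",
    [("A", 1, 10), ("B", 1, 5), ("C", 2, 7)],
)

# Second variant: the subquery sums the whole table (10 + 5 + 7 = 22),
# and the outer WHERE limits which rows actually change
conn.execute("""
    UPDATE sale
    SET quantity_available = (SELECT SUM(quantity_available_for_sale) FROM sale)
    WHERE withdrawal_id = ?
""", (1,))

rows = conn.execute(
    "SELECT item_code, quantity_available FROM sale ORDER BY item_code"
).fetchall()
print(rows)  # [('A', 22), ('B', 22), ('C', None)]
```

Seeing which rows end up touched makes it obvious whether the WHERE belongs on the subquery, the update, or both.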
Conclusion:
If you intend to do any (handwritten) SQL in your application, the goal is:
- write as little as you can in the Java layer
- write as much as possible in the database layer
One way of achieving this, is by utilizing stored procedures.
Regards
Peter -
Hi
I need help in writing a sql query for the data given below
Market Description Revenue
LARGE CORPORATE 0.0
LARGE CORPORATE 0.0
OTHERS 0.0
OTHERS 1.98
LARGE CORPORATE 5.1299999999999999
LARGE CORPORATE 6.8500000000000005
LARGE CORPORATE 10.98
LARGE CORPORATE 16.490000000000002
LARGE CORPORATE 21.129999999999999
LARGE CORPORATE 28.66
LARGE CORPORATE 38.579999999999998
OTHERS 68.420000000000002
OTHERS 87.590000000000003
LARGE CORPORATE 90.040000000000006
LARGE CORPORATE 511.94
LARGE CORPORATE 625.01999999999998
LARGE CORPORATE 662.75999999999999
LARGE CORPORATE 700.68000000000006
LARGE CORPORATE 2898.6799999999998
LARGE CORPORATE 3273.96
OTHERS 3285.4400000000001
LARGE CORPORATE 3580.0799999999999
LARGE CORPORATE 4089.1900000000001
LARGE CORPORATE 4373.5200000000004
LARGE CORPORATE 16207.550000000001
LARGE CORPORATE 19862.740000000002
LARGE CORPORATE 33186.150000000001
LARGE CORPORATE 107642.79000000001
The output of query should be in the following format
Market Description 1st 10%(revenue) 2nd 10%(revenue) 3rd 10%(revenue) 4th 10%(revenue) 5th 10%(revenue) rest 50%(revenue)
Would appreciate any help on this query.
Hi,
What does 1st 10%, 2nd 10%, etc. mean?
Is it 0-10%, 11-20%, 21-30%, ...?
If so, a combination of floor, decode, and division should help to
achieve the result.
For example, for the following table,
NAME SALE
SALES_GROUP1 .2
SALES_GROUP1 1
SALES_GROUP1 9
SALES_GROUP2 2.1
SALES_GROUP2 12.2
SALES_GROUP2 19.9
SALES_GROUP3 22.2
write:
select name, decode(floor(sale/10),0,sale,null) "1st10%",
decode(floor(sale/10),1,sale,null) "11to20%",
decode(floor(sale/10),2,sale,null) "21to30%"
from revenue;
I mean, using floor and division, you round all values between 0 and 10 to 0.
Similarly, 11 to 20 is rounded to 1, etc. Then apply the decode function
to achieve the result.
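decode is Oracle-specific, but the floor-and-compare idea ports anywhere. A sketch with SQLite via Python, using CASE in place of decode (CAST truncation equals floor here because the sale values are non-negative; data is the sample table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (name TEXT, sale REAL)")
conn.executemany(
    "INSERT INTO revenue VALUES (?, ?)",
    [("SALES_GROUP1", 0.2), ("SALES_GROUP1", 1), ("SALES_GROUP1", 9),
     ("SALES_GROUP2", 2.1), ("SALES_GROUP2", 12.2), ("SALES_GROUP2", 19.9),
     ("SALES_GROUP3", 22.2)],
)

# CASE plays the role of Oracle's decode: the truncated quotient picks the bucket
rows = conn.execute("""
    SELECT name,
           CASE CAST(sale / 10 AS INTEGER) WHEN 0 THEN sale END AS "1st10%",
           CASE CAST(sale / 10 AS INTEGER) WHEN 1 THEN sale END AS "11to20%",
           CASE CAST(sale / 10 AS INTEGER) WHEN 2 THEN sale END AS "21to30%"
    FROM revenue
    ORDER BY rowid
""").fetchall()
print(rows)
```

Each row lands in exactly one bucket column; the others stay NULL, which is the pivoted layout the question asks for.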
I hope this helps.
Regards,
Suresh
8i OCP. -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
(
SELECT RID, rownum rnum
FROM
(
SELECT rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
)
WHERE rownum <= 100
)
WHERE rnum >= 1
and RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
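The shape of this query (inner query returns only rowids for the wanted page, outer query joins back for the full rows) can be sketched in SQLite via Python. LIMIT/OFFSET stands in for the rownum windowing, and none of this reproduces the Oracle optimizer behavior under discussion, only the structure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (last_name TEXT, joindate INTEGER)")
conn.executemany(
    "INSERT INTO members VALUES (?, ?)",
    [("Smith", d) for d in range(10)] + [("Jones", d) for d in range(10)],
)

# Inner query fetches only rowids for the page; the outer query joins
# back on rowid to pick up the full rows for just that page
rows = conn.execute("""
    SELECT m.last_name, m.joindate
    FROM members m
    JOIN (SELECT rowid AS rid
          FROM members
          WHERE last_name = 'Smith'
          ORDER BY joindate
          LIMIT 3 OFFSET 0) p
      ON m.rowid = p.rid
    ORDER BY m.joindate
""").fetchall()
print(rows)  # [('Smith', 0), ('Smith', 1), ('Smith', 2)]
```

The payoff is that the wide-row lookups happen only for the page actually returned, not for every row the predicate matches.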
The problem I have is this:
SELECT rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
or:
SELECT /*+ first_rows(100) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index on the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.* -- Select all data from members table
FROM members, -- members table added to FROM clause
(
SELECT RID, rownum rnum
FROM
(
SELECT /*+ index(members, joindate_idx) */ rowid as RID -- Hint is ignored now that I am joining in the outer query
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
)
WHERE rownum <= 100
)
WHERE rnum >= 1
and RID = members.rowid -- Merge the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query, where the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off

As you will see, the join query here must have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the example above shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY adds a step to the plan for the join query, but does not affect the other query. -
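The point above can be sketched outside SQL*Plus as well. Here is a minimal Python/SQLite stand-in for the Oracle test case (table name and data are invented): the inner query numbers rows in a fixed order, mirroring the ROWNUM pattern, and the outer ORDER BY guarantees the page comes back sorted regardless of the join or scan strategy the engine picks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer, last_ddl_time text)")
# 25 rows with ascending dates, so descending date order reverses the ids
conn.executemany("insert into t values (?, ?)",
                 [(i, f"2010-02-{i:02d}") for i in range(1, 26)])

# Fetch "page 2" (rows 11-20 of the descending ordering), with an
# explicit outer ORDER BY so the result order never depends on the plan.
page = conn.execute("""
    select id, last_ddl_time
    from (
        select t.*, row_number() over (order by last_ddl_time desc) rn
        from t
    )
    where rn between 11 and 20
    order by last_ddl_time desc
""").fetchall()

print(page[0], page[-1])
```

With 25 rows dated 2010-02-01 through 2010-02-25, rows 11-20 of the descending ordering are ids 15 down to 6.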
IPS event query ** Help needed badly**
Greetings all. Apologies for the dramatic headline, but I'm in a bit of a time crunch.
I have a 4215 running 6.0(3)E1. The device is inline. Below is an event which triggered:
========================
evIdsAlert: eventId=1184881408377311643 severity=low vendor=Cisco
originator:
hostId: xyz
appName: sensorApp
appInstanceId: 380
time: 2007/09/24 15:11:25 2007/09/24 15:11:25 UTC
signature: description=Recognized content type id=12673 version=S149
subsigId: 0
sigDetails: Recognized content type
marsCategory: Info/Misc
interfaceGroup: vs0
vlan: 0
participants:
attacker:
addr: locality=any a.a.a.a
port: 80
target:
addr: locality=any b.b.b.b
port: 51095
os: idSource=unknown relevance=relevant type=unknown
actions:
deniedFlow: true
context:
fromAttacker: <stuff>
riskRatingValue: attackRelevanceRating=relevant targetValueRating=medium 50
threatRatingValue: 15
interface: fe2_1
protocol: tcp
========================
I have an external application which pulls this same event from the sensor using a query *like* the following:
wget --user foo --password hoo http://a.b.c.d/cgi-bin/event-server?events=evAlert
I'm able to pull most of the event information, but not all. What I can't seem to get from the query is the "deniedFlow: true" value. I'm seeing something like:
></attack></participants><actions></actions></evAlert>
Notice the "deniedFlow: true" information missing between the <actions> tags.
Is my wget-ish query missing some arguments, preventing me from pulling the same information I can see from the CLI?
Thanks in advance.

The problem is that you are using the 5.x-style event-server, so you do not see all of the event fields. You need to change the app to pull from the "sdee-server"; then you will see all of the event fields:
http://a.b.c.d/cgi-bin/sdee-server?events=evAlert -
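For reference, the request wget issues is just an HTTP GET with a Basic auth header. A hedged sketch of building that same request in Python (host, user, and password are the placeholders from the thread; the request is constructed but not sent):

```python
import base64
import urllib.request

host, user, password = "a.b.c.d", "foo", "hoo"
url = f"https://{host}/cgi-bin/sdee-server?events=evAlert"

req = urllib.request.Request(url)
# wget --user/--password sends this same header on a basic-auth challenge
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

# urllib.request.urlopen(req) would actually send it; here we just
# show the header that ends up on the wire.
print(req.get_header("Authorization"))
```

Calling `urllib.request.urlopen(req)` against a real sensor would return the SDEE event stream, including the actions fields missing from the old event-server output.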
I must not be in the right mode today, because this is giving me a hard time...

I have a SQL Server database with 250,000 records. Each record represents a medical visit. Each record has 25 fields that contain patient diagnoses. A visit may show one diagnosis, or it may show as many as 25. Any specific diagnosis (250.00, for example) may appear in a field called Code1, or in a field called Code2 ... Code25. There are a total of about 500 patients that make up the 250,000 visits.

What I need is a report that shows each DISTINCT diagnosis, and a listing (and count) of the people who have been assigned that diagnosis.

My thought is to first query the table to arrive at the unique list of diagnosis codes, then, in a nested loop, query the database to see who has been assigned each code, and how many times. This strikes me as an incredibly database-intensive query, however.

What is the correct way to do this?

The correct way to do this is to normalize your database.
Rather than having 25 separate columns that essentially contain the same data (different codes), break that out into a separate table.
Patient
  patientid

Visit
  patientid (foreign key referencing Patient)
  visitid

Diagnosis
  visitid (foreign key referencing visitid in Visit)
  diagnosiscode
  order (if the 1-25 ordering of diagnosis codes is important)
Once you correct your data model, what you're trying to do becomes simple.

-- get all diagnosis codes
select distinct diagnosiscode from diagnosis

-- get a breakdown of diagnosis codes, patients, and counts for each diagnosis
select diagnosiscode, patient.name, count(diagnosiscode)
from patient
left join visit on (patient.patientid = visit.patientid)
left join diagnosis on (visit.visitid = diagnosis.visitid)
group by diagnosiscode, patient.name
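A runnable sketch of the normalized model suggested above, using SQLite through Python. The table and column names follow the post; the patient names and sample data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table patient (patientid integer primary key, name text);
    create table visit (visitid integer primary key,
                        patientid integer references patient(patientid));
    create table diagnosis (visitid integer references visit(visitid),
                            diagnosiscode text);
""")
conn.executemany("insert into patient values (?, ?)", [(1, "Ann"), (2, "Bob")])
conn.executemany("insert into visit values (?, ?)", [(10, 1), (11, 1), (12, 2)])
conn.executemany("insert into diagnosis values (?, ?)",
                 [(10, "250.00"), (11, "250.00"),
                  (12, "250.00"), (12, "401.1")])

# Breakdown of diagnosis codes, patients, and counts, as in the post
rows = conn.execute("""
    select diagnosiscode, name, count(diagnosiscode)
    from patient
    left join visit on visit.patientid = patient.patientid
    left join diagnosis on diagnosis.visitid = visit.visitid
    group by diagnosiscode, name
    order by diagnosiscode, name
""").fetchall()
print(rows)
```

With this data, the query reports that Ann was assigned 250.00 on two visits, while Bob was assigned 250.00 once and 401.1 once, with no per-code looping on the client side.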