SQL Query efficiency
Can anyone tell me, is there any difference between COUNT(*) and COUNT(1)?
I believe COUNT(1) is faster than COUNT(*), as it takes the count on the 1st column.
Is there any other difference, or is my belief wrong?
Can anyone please explain it with an example?
1) There is no difference in performance for any vaguely recent version of Oracle.
2) If there ever is a difference, COUNT(*) would be the form that would be optimized. So if there ever is a difference COUNT(*) will be faster.
3) COUNT(1) does not count the first column. It counts the literal number 1 for every row. You'd get the same behavior if you did COUNT('BadgerBadgerBadger') or COUNT( date '2012-01-15' ).
4) If I see code that has a bunch of COUNT(1)'s, I generally assume that whoever wrote it is prone to believing random myths they've found on the internet rather than testing things for themselves, so I assume that the surrounding code is more likely to have bugs. Particularly if I'm doing a code review.
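A quick way to convince yourself (a sketch using Python's sqlite3 purely for illustration; the table and rows are invented, and the same semantics hold in Oracle): COUNT(1) counts every row, including rows that are entirely NULL, exactly like COUNT(*). Only COUNT(column) skips NULLs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
# Three rows, one of them entirely NULL
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "x"), (2, None), (None, None)])

star = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
one = conn.execute("SELECT COUNT(1) FROM t").fetchone()[0]
col = conn.execute("SELECT COUNT(a) FROM t").fetchone()[0]

print(star, one, col)  # COUNT(a) skips NULLs; COUNT(1) does not
```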
Justin
Similar Messages
-
Sql query efficiency problem.
Hello,
I have a table - Users: | id | name | manager_id |
The manager_id references the User.id,
I need to find an employee by an id or name and this employee has to be a manager to someone else.
Creating a sub-select that checks if the employee's id is present within the manager_id column takes a bit of time. Is there a way to, for example, inner join the table to itself, leaving only the rows that are managers?

Thank you both for your quick answers.
@ Rene Argento
The self-join is something I am interested in, but I don't exactly know how to write it so that it returns the same result set as the sub-select query you wrote.
@ Frank Kulash
Thanks for the query, but is there a possibility to re-write Rene's query as a JOIN query which excludes all the employees who are not managers to someone?
Or maybe there is another way to create the needed query which would be faster than using sub-select for each user ?
Thanks in advance.
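For reference, both forms can be prototyped side by side: the self-join (with DISTINCT, since a manager may have several reports) and the equivalent EXISTS form, which optimizers can often execute as a semi-join. A sketch using sqlite3; the Users table is from the post, the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER)")
conn.executemany("INSERT INTO Users VALUES (?, ?, ?)",
                 [(1, "Alice", None),   # top manager
                  (2, "Bob", 1),        # manages Carol and Dave
                  (3, "Carol", 2),
                  (4, "Dave", 2)])

# Self-join: u is a manager if some row's manager_id points at u.id.
self_join = conn.execute("""
    SELECT DISTINCT u.id, u.name
    FROM Users u
    JOIN Users r ON r.manager_id = u.id
    ORDER BY u.id""").fetchall()

# Equivalent EXISTS form, often executed as a semi-join.
exists_q = conn.execute("""
    SELECT u.id, u.name
    FROM Users u
    WHERE EXISTS (SELECT 1 FROM Users r WHERE r.manager_id = u.id)
    ORDER BY u.id""").fetchall()

print(self_join)  # [(1, 'Alice'), (2, 'Bob')]
assert self_join == exists_q
```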
-
Hi,
I need to write an efficient query because the query involves 4 checks to be performed on rows before returning a single row. Here is a sample script for creating the table:
CREATE TABLE "myschema"."Complaint" (
"Compalint_ID" NUMBER(20,0),
"ReplyTime" NUMBER, -- this would be in minutes
"CREATION_TIME" TIMESTAMP (6),
"STATUS" NUMBER,
"TYPE" NUMBER,
CONSTRAINT "CH_PKRTBL_Complaint_ID" PRIMARY KEY ("Compalint_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "myspace" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "myspace" ;
CREATE UNIQUE INDEX "myschema"."CH_PKRTBL_Complaint_ID" ON "myschema"."Complaint" ("Compalint_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "myspace" ;
ALTER TABLE "myschema"."Complaint" ADD CONSTRAINT "CH_PKRTBL_TKTID" PRIMARY KEY ("Compalint_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "myspace" ENABLE;
I want to write an efficient SQL/PL-SQL query that returns a row in the least time, based on the following four checks/priorities in the order specified below:
1. Return any row which has TYPE = 11
2. If no row found for the 1st check then
calculate a time difference for all rows as: time_difference (CREATION_TIME - ReplyTime)
and return the row which has the largest difference in -ve (that is, the row which expired first)
3. If no row found for check 2 then
return the row where STATUS = 22
4. If no row found for check 3 then
calculate a time difference for all rows as: time_difference (CREATION_TIME - ReplyTime)
and return the row which has the smallest difference in +ve (that is, the row which is going to expire next)
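To pin down the intended semantics before writing the SQL, the four checks can be prototyped in plain code (a Python sketch; the sample rows and the "delta" field, meaning minutes until expiry with negative values already expired, are invented for illustration):

```python
def pick(complaints):
    """Apply the four prioritized checks and return one row (a dict)."""
    # Check 1: any row with TYPE = 11
    typed = [c for c in complaints if c["type"] == 11]
    if typed:
        return typed[0]
    # Check 2: the row that expired first (most negative delta)
    expired = [c for c in complaints if c["delta"] < 0]
    if expired:
        return min(expired, key=lambda c: c["delta"])
    # Check 3: any row with STATUS = 22
    flagged = [c for c in complaints if c["status"] == 22]
    if flagged:
        return flagged[0]
    # Check 4: the row that will expire next (smallest positive delta)
    return min(complaints, key=lambda c: c["delta"])

rows = [
    {"id": 1, "type": 5, "status": 0, "delta": 30},
    {"id": 2, "type": 5, "status": 22, "delta": 45},
    {"id": 3, "type": 5, "status": 0, "delta": 10},
]
print(pick(rows)["id"])  # no TYPE=11, nothing expired -> the STATUS=22 row: 2
```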
Kindly help me in achieving this task.
Thanks.

Just an idea (not sure about the definitions of "first expired" and "next to expire"):
select complaint_id,reply_time,creation_time,complaint_status,complaint_type
from (select complaint_id,reply_time,creation_time,complaint_status,complaint_type,
case complaint_type when 11 then 11 end check1,
systimestamp - (creation_time + reply_time / 60/24) check2,
case complaint_status when 22 then 22 end check3,
systimestamp - (creation_time + reply_time / 60/24) check4,
max(case complaint_type when 11 then 11 end) over
(order by null rows between unbounded preceding and unbounded following) check1_max,
min(case when systimestamp - (creation_time + reply_time / 60/24) < 0
then systimestamp - (creation_time + reply_time / 60/24)
end
) over
(order by null rows between unbounded preceding and unbounded following) check2_max,
max(case complaint_status when 22 then 22 end) over
(order by null rows between unbounded preceding and unbounded following) check3_max,
min(case when systimestamp - (creation_time + reply_time / 60/24) > 0
then systimestamp - (creation_time + reply_time / 60/24)
end
) over
(order by null rows between unbounded preceding and unbounded following) check4_min
from complaint
where coalesce(check1,check2,check3,check4) = coalesce(check1_max,check2_max,check3_max,check4_min)

Regards
Etbin -
XML Generation using a sql query in an efficient way -Help needed urgently
Hi
I am facing the following issue while generating XML using a SQL query. I get the table below from a query:
#   CODE    ID   MARK
==================================
1      4  2331    809
2      4  1772    802
3      4  2331    845
4      5  2331    804
5      5  2331    800
6      5  2210    801
I need to generate the XML below using a query:
<data>
<CODE>4</CODE>
<IDS>
<ID>2331</ID>
<ID>1772</ID>
</IDS>
<MARKS>
<MARK>809</MARK>
<MARK>802</MARK>
<MARK>845</MARK>
</MARKS>
</data>
<data>
<CODE>5</CODE>
<IDS>
<ID>2331</ID>
<ID>2210</ID>
</IDS>
<MARKS>
<MARK>804</MARK>
<MARK>800</MARK>
<MARK>801</MARK>
</MARKS>
</data>
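For orientation, the grouping this output needs can be spelled out procedurally first (a Python/sqlite3 sketch, purely illustrative; note the desired output lists each ID only once per CODE, so IDs are de-duplicated while every MARK is kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (code INTEGER, id INTEGER, mark INTEGER)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?)",
                 [(4, 2331, 809), (4, 1772, 802), (4, 2331, 845),
                  (5, 2331, 804), (5, 2331, 800), (5, 2210, 801)])

chunks = []
for (code,) in conn.execute("SELECT DISTINCT code FROM data ORDER BY code"):
    rows = conn.execute(
        "SELECT id, mark FROM data WHERE code = ? ORDER BY rowid", (code,)).fetchall()
    ids = list(dict.fromkeys(i for i, _ in rows))   # dedupe, keep first-seen order
    marks = [m for _, m in rows]
    chunks.append("<data><CODE>%d</CODE><IDS>%s</IDS><MARKS>%s</MARKS></data>"
                  % (code,
                     "".join("<ID>%d</ID>" % i for i in ids),
                     "".join("<MARK>%d</MARK>" % m for m in marks)))
xml = "".join(chunks)
print(xml)
```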
Can anyone help me with some idea to generate the above given CLOB message?

Not sure if this is the right way to do it, but:
/* Formatted on 10/12/2011 12:52:28 PM (QP5 v5.149.1003.31008) */
WITH data AS (SELECT 4 code, 2331 id, 809 mark FROM DUAL
UNION
SELECT 4, 1772, 802 FROM DUAL
UNION
SELECT 4, 2331, 845 FROM DUAL
UNION
SELECT 5, 2331, 804 FROM DUAL
UNION
SELECT 5, 2331, 800 FROM DUAL
UNION
SELECT 5, 2210, 801 FROM DUAL)
SELECT TO_CLOB (
'<DATA>'
|| listagg (xml, '</DATA><DATA>') WITHIN GROUP (ORDER BY xml)
|| '</DATA>')
xml
FROM ( SELECT '<CODE>'
|| code
|| '</CODE><IDS><ID>'
|| LISTAGG (id, '</ID><ID>') WITHIN GROUP (ORDER BY id)
|| '</ID></IDS><MARKS><MARK>'
|| LISTAGG (mark, '</MARK><MARK>') WITHIN GROUP (ORDER BY id)
|| '</MARK></MARKS>'
xml
FROM data
GROUP BY code) -
Improve Efficiency of SQL Query (reducing Hash Match cost)
I have the following SQL query that only takes 6 seconds to run, but I am trying to get it down to around 3 seconds if possible. I've noticed that there are 3 places in the Execution Plan that have pretty high costs. 1: Hash Match (partial aggregate) - 12%.
2: Hash Match (inner join) - 36%. 3: Index Scan - 15%.
I've been researching Hash Match for a couple days now, but I just don't seem to be getting it. I can't seem to figure out how to decrease the cost. I've read that OUTER APPLY is really inefficient and I have two of those in my query, but I haven't been
able to figure out a way to get the results I need without them.
I am fairly new to SQL so I am hoping I can get some help with this.
SELECT wi.WorkItemID,
wi.WorkQueueID as WorkQueueID,
wi.WorkItemTypeID,
wi.WorkItemIdentifier,
wi.DisplayIdentifier,
wi.WorkItemStatusID,
wi.SiteID,
wi.AdditionalIdentifier,
wi.WorkQueueDescription,
wi.WorkItemTypeDescription,
wi.WorkQueueCategoryDescription,
wi.Active,
wi.CheckedOutOn,
wi.CheckedOutBy_UserID,
wi.CheckedOutBy_UserName,
wi.CheckedOutBy_FirstName,
wi.CheckedOutBy_LastName,
wi.CheckedOutBy_FullName,
wi.CheckedOutBy_Alias,
b.[Description] as BatchDescription,
bt.[Description] as BatchType,
bs.[Description] as PaymentBatchStatus,
b.PostingDate AS PostingDate,
b.DepositDate,
b.BatchDate,
b.Amount as BatchTotal,
PostedAmount = ISNULL(PostedPayments.PostedAmount, 0),
TotalPayments = ISNULL(PostedPayments.PostedAmount, 0), --Supporting legacy views
TotalVariance = b.Amount - ISNULL(PostedPayments.PostedAmount, 0), -- ISNULL(Payments.TotalPayments, 0),
PaymentsCount = ISNULL(Payments.PaymentsCount, 0),
ISNULL(b.ReferenceNumber, '') AS PaymentBatchReferenceNumber,
b.CreatedOn,
b.CreatedBy_UserID,
cbu.FirstName AS CreatedBy_FirstName,
cbu.LastName AS CreatedBy_LastName,
cbu.DisplayName AS CreatedBy_DisplayName,
cbu.Alias AS CreatedBy_Alias,
cbu.UserName AS CreatedBy_UserName,
b.LastModifiedOn,
b.LastModifiedBy_UserID,
lmbu.FirstName AS LastModifiedBy_FirstName,
lmbu.LastName AS LastModifiedBy_LastName,
lmbu.DisplayName AS LastModifiedBy_DisplayName,
lmbu.Alias AS LastModifiedBy_Alias,
lmbu.UserName AS LastModifiedBy_UserName,
0 AS VisitID, --Payment work items are not associated with VisitID, but it is a PK field on all the Work Queue view models...for now...
0 AS RCMPatientID, --Payment work items are not associated with RCMPatientID, but it is a PK field on all the Work Queue view models...for now...
0 AS PatientID
FROM Account.PaymentBatch AS b (NOLOCK)
INNER JOIN ViewManager.WorkItems AS wi (NOLOCK)
ON wi.WorkitemIdentifier = b.PaymentBatchID
AND wi.WorkItemTypeID = 3
INNER JOIN Account.PaymentBatchStatus AS bs (NOLOCK)
ON b.PaymentBatchStatusID = bs.PaymentBatchStatusID
LEFT JOIN Account.PaymentBatchType bt (NOLOCK)
ON b.PaymentBatchTypeID = bt.PaymentBatchTypeID
INNER JOIN ViewManager.[User] AS cbu (NOLOCK)
ON b.CreatedBy_UserID = cbu.UserID
INNER JOIN ViewManager.[User] AS lmbu (NOLOCK)
ON b.LastModifiedBy_UserID = lmbu.UserID
LEFT JOIN (
SELECT p.PaymentBatchID
, SUM(p.Amount) AS TotalPayments
, COUNT(0) AS PaymentsCount
FROM Account.Payment AS p (NOLOCK)
WHERE p.PaymentTypeID = 1
AND ISNULL(p.Voided, 0) = 0
GROUP BY p.PaymentBatchID
) AS Payments ON b.PaymentBatchID = Payments.PaymentBatchID
LEFT JOIN (
SELECT p.PaymentBatchID
, SUM(pa.Amount) AS PostedAmount
FROM Account.Payment AS p (NOLOCK)
INNER JOIN Account.PaymentAllocation AS PA (NOLOCK)
ON p.PaymentID = pa.PaymentID
AND (pa.AllocationTypeID = 101 OR pa.AllocationTypeID = 111)
WHERE p.PaymentTypeID = 1
AND ISNULL(p.Voided, 0) = 0
GROUP BY p.PaymentBatchID
) AS PostedPayments ON b.PaymentBatchID = PostedPayments.PaymentBatchID
OUTER APPLY (
SELECT
P.PaymentBatchID,
SUM(CASE WHEN P.PaymentTypeID = 1 THEN 1 ELSE 0 END) as PaymentsCount --only count regular payments not adjustments
FROM
Account.Payment p (NOLOCK)
WHERE
p.PaymentBatchID = b.PaymentBatchID
AND p.PaymentTypeID IN (1,2) AND ISNULL(p.Voided, 0)= 0
GROUP BY
P.PaymentBatchID
) payments
OUTER APPLY (
SELECT
P.PaymentBatchID,
SUM(pa.Amount) as PostedAmount
FROM
Account.PaymentAllocation pa (NOLOCK)
INNER JOIN
Account.Payment p (NOLOCK) ON pa.PaymentID = p.PaymentID
INNER JOIN
Account.AllocationType t (NOLOCK) ON pa.AllocationTypeID = t.AllocationTypeID
WHERE
p.PaymentBatchID = b.PaymentBatchID
AND p.PaymentTypeID IN (1,2)
AND ISNULL(p.Voided, 0)= 0
--AND (t.Credit = 0
--OR (t.Credit <> 0 And Offset_PaymentAllocationID IS NULL AND (SELECT COUNT(1) FROM Account.paymentAllocation pa2 (NOLOCK)
-- WHERE pa2.PaymentID = pa.PaymentID AND pa2.AllocationTypeID IN (101, 111)
-- AND pa2.Offset_PaymentAllocationID IS NULL) > 0))
GROUP BY
P.PaymentBatchID
) PostedPayments

The percentages you see are only estimates and may not necessarily reflect where the real bottleneck is, particularly if it is due to a misestimate somewhere.
To be able to help you improve the performance, we need to see the CREATE TABLE and CREATE INDEX statements for the tables. We also need to see the actual query plan. (In XML format, not a screen shot.) Posting all this here is not practical, but you could
upload it somewhere. (Dropbox, Google Drive etc.)
Be very careful with the NOLOCK hint. Using the NOLOCK hint casually leads to transient erratic behaviour which is very difficult to understand. Using it consistently throughout a query like you do is definitely bad practice.
Erland Sommarskog, SQL Server MVP, [email protected] -
How to measure the performance of sql query?
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How can I tell whether I am writing an optimal query?
I am using Oracle 9i...
It will be useful for me to write efficient queries....
Thanks & Regards

psram wrote:
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How can I tell whether I am writing an optimal query?
I am using Oracle 9i...

You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, number of sorts, etc.
This gives you an indication of the effectiveness of your statement, so that can check how many logical I/Os (and physical reads) had to be performed.
Note however that there are more things to consider, as you've already mentioned: The CPU bit is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is also credited only very limited (number of sorts), but e.g. it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk etc.
You can use the following approach to get a deeper understanding of the operations performed by each row source:
alter session set statistics_level=all;
alter session set timed_statistics = true;
select /* findme */ ... <your query here>
SELECT
SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
OBJECT_NAME,
CARDINALITY,
LAST_OUTPUT_ROWS,
LAST_CR_BUFFER_GETS,
LAST_DISK_READS,
LAST_DISK_WRITES
FROM V$SQL_PLAN_STATISTICS_ALL P,
(SELECT *
FROM (SELECT *
FROM V$SQL
WHERE SQL_TEXT LIKE '%findme%'
AND SQL_TEXT NOT LIKE '%V$SQL%'
AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
ORDER BY LAST_LOAD_TIME DESC)
WHERE ROWNUM < 2) S
WHERE S.HASH_VALUE = P.HASH_VALUE
AND S.CHILD_NUMBER = P.CHILD_NUMBER
ORDER BY ID
/

Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function DBMS_XPLAN.DISPLAY_CURSOR which can show this information with a single call, but in 9i you need to do it yourself.
Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
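As an aside, the general idea here, asking the engine what it plans to do rather than guessing from the SQL text, carries over to any database. A tiny illustration of inspecting a plan programmatically (Python's sqlite3 used purely as a stand-in; the Oracle views above are the real tool, and the table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept INTEGER, name TEXT)")
conn.execute("CREATE INDEX emp_dept_ix ON emp (dept)")

# The plan tells you whether the index is used before you ever time the query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM emp WHERE dept = ?", (10,)).fetchall()
for row in plan:
    print(row[-1])   # the human-readable plan step
```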
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How do you obtain SEQ_ID of a collection based on a SQL QUERY
I don't know if my brain is not working or what, but how can I access the seq_id of a collection that I display as a SQL Query? I have this SQL Query:
select
htmldb_item.checkbox(1,b.asset_id) " ",
htmldb_item.select_list_from_lov(2,
'SHARE-UNRESTRICTED','LOV_RESERVATION_MODE',null,'YES') "mode",
a.seq_id,
b.asset_no,
b.device_name,
b.device_name_attached_to
from
htmldb_collections a,
asset_report_vw b
where
a.collection_name = 'CART'
and a.c001 = b.asset_id
The user can select the reservation mode from the select list for the Mode field and hit a button to update the cart. I have reviewed the documentation and the Collections Showcase demo, which showed how to do this with form items, but not from a report. I can't figure out how to get the seq_id to pass to the HTMLDB_COLLECTION.UPDATE_MEMBER_ATTRIBUTE procedure. I tried to create a hidden region item called P9_SEQID, but I can't figure out how to set it to the seq_id returned from a query. Using #SEQ_ID# syntax doesn't work.

I am including the seq_id in my SQL Query report. The sticky point was how to reference the seq_id in an AFTER SUBMIT process to update the collection returned by the SQL Query when the user hits the Apply Changes button (they can edit one of the fields).
So, I assume then this is how I need to do it. I thought there might be a more efficient way to do it by somehow referencing the seq_id column in the SQL Query report instead of having to requery to get the seq id for each row in the SQL Query:
-- 'update_cart' AFTER SUBMIT process
DECLARE
v_seqid number := null;
BEGIN
FOR I in 1..HTMLDB_APPLICATION.G_F02.COUNT LOOP
select x.seq_id into v_seqid from htmldb_collections x
where x.collection_name = 'CART'
and c001 = htmldb_application.g_f01(i);
HTMLDB_COLLECTION.UPDATE_MEMBER_ATTRIBUTE(
p_collection_name => 'CART',
p_seq => v_seqid,
p_attr_number => 2,
p_attr_value => htmldb_application.g_f02(i));
END LOOP;
END; -
Creating an Hour field in an SQL Query CUIC v10.0
I am trying to create a report that shows the calls offered etc by Hour, instead of by half hour or 15 minutes which comes with the usual DateTime (interval) query.
I am using DATEPART(Hour, DateTime) AS Hour, and whilst this is giving me the hour number in the new column, I am still seeing 2 rows per hour as per the half hour interval.
I have included it in the GROUP BY and this makes no difference. Whilst I could group them by hour and that would in effect solve the issue, I then lose the Date field!
Ideally what I would like to see in 1 column is
Call Type 1 4/2/15 10:00 AM
4/2/15 11:00 AM
4/2/25 12:00 AM
etc.....
Instead what I have is
Call Type 1 4/2/15 10:00 AM 10
4/2/15 10:30 AM 10
4/2/25 11:00 AM 11
4/2/25 11:30AM 11
etc...
If there is anyone out there that can help I would be most grateful
Thanks Sarah

Sarah,
You will need to make sure you remove "DateTime" from your GROUP BY clause, and from the SELECT statement.
Instead of using DATEPART, I would use something like:
DateTimeHour = DATEADD(h,DATEDIFF(h,0,DateTime),0)
The above will give you a full DateTime value, rounded down to the nearest hour. Don't forget to include it in the GROUP BY (and probably SORT BY).
You can add "DateTime" back as Filter Field in CUIC after you do the initial Create Fields step (look on the Fields page). Filtering on DateTime instead of DATEADD(h,DATEDIFF(h,0,DateTime),0) should make for a more efficient SQL query.
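The DATEADD(h, DATEDIFF(h, 0, DateTime), 0) expression truncates a timestamp down to the hour, which is what collapses the two half-hour rows into one bucket. The same rounding is easy to verify outside SQL (a Python sketch with sample times modelled on the post):

```python
from datetime import datetime

def floor_to_hour(dt):
    # Plays the role of DATEADD(h, DATEDIFF(h, 0, DateTime), 0):
    # drop minutes and smaller, keeping the date and hour.
    return dt.replace(minute=0, second=0, microsecond=0)

half_hours = [datetime(2015, 4, 2, 10, 0), datetime(2015, 4, 2, 10, 30),
              datetime(2015, 4, 2, 11, 0), datetime(2015, 4, 2, 11, 30)]
hours = sorted({floor_to_hour(dt) for dt in half_hours})
print(hours)  # two buckets: 10:00 and 11:00
```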
-Jameson -
SQL Query (challenge)
Hello,
I have 2 tables of events E1 and E2
E1: (Time, Event), E2: (Time, Event)
Where the columns Time in both tables are ordered.
Ex.
E1: ((1, a) (2, b) (4, d) (6, c))
E2: ((2, x) (3, y) (6, z))
To find the events of both tables at the same time, the obvious approach is a join between E1 and E2:
Q1 -> select e1.Time, e1.Event, e2.Event from E1 e1, E2 e2 where e1.Time=e2.Time;
The result of the query is:
((2, b, x) (6, c, z))
Given that there are no indexes on these tables, an efficient execution plan can be a hash join (under the conditions mentioned in the Oracle Database Performance Tuning Guide, Ch 14).
Now, the hash join suffers from a locality problem if the hash table is large and does not fit in memory; it may happen that one block of data is read into memory and swapped out frequently.
Given that the Time columns are sorted in ascending order, I find the following algorithm, a known idea in the literature, appropriate to this problem. The algorithm is in pseudocode close to PL/SQL, for simplicity (I hope it is still clear):
-- start algorithm
open cursors for e1 and e2
loop
if e1.Time = e2.Time then
pipe row (e1.Time, e1.Event, e2.Event);
fetch next e1 record
exit when notfound
fetch next e2 record
exit when notfound
else
if e1.Time < e2.Time then
fetch next e1 record
exit when notfound
else
fetch next e2 record
exit when notfound
end if;
end if;
end loop
-- end algorithm
As you can see the algorithm does not suffer from locality issue since it iterates sequentially over the arrays.
Now the problem: the algorithm shown above suggests using a pipelined function to implement it in PL/SQL, but that is slow compared to the hash join in the implicit cursor of the query shown above (Q1).
Is there a plain SQL query that implements this algorithm? The objective is to beat the hash join of the query (Q1), so queries that use sorting are not accepted.
A difficulty I found is that explicit cursors are much slower than implicit ones (SQL queries).
Example: for a large table (2.5 million records)
create table mytable (x number);
declare
  type t_num_tab is table of number;
  l_data t_num_tab;
  c      sys_refcursor;
begin
  open c for 'select 1 from mytable';
  fetch c bulk collect into l_data;
  close c;
  dbms_output.put_line('count = '||l_data.count);
end;
is 5 times slower then
select count(*) from mytable;
I do not understand why this should be the case. I read that it may be explained by PL/SQL being interpreted, but I think this does not explain the whole issue. Maybe it is because the fetch copies data from one memory space to yours, and this takes a long time.

Hi
A correction in the algorithm:
-- start algorithm
open cursors for e1 and e2
fetch next e1 record
fetch next e2 record
loop
exit when e1%notfound
exit when e2%notfound
if e1.Time = e2.Time then
pipe row (e1.Time, e1.Event, e2.Event);
fetch next e1 record
fetch next e2 record
else
if e1.Time < e2.Time then
fetch next e1 record
else
fetch next e2 record
end if;
end if;
end loop
-- end algorithm
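The corrected algorithm is the classic sort-merge join. A direct transliteration (Python, using the sample data from the post; like the pseudocode, it assumes Time values are unique within each table):

```python
def merge_join(e1, e2):
    """Join two (time, event) lists that are already sorted by time."""
    out = []
    i = j = 0
    while i < len(e1) and j < len(e2):
        t1, t2 = e1[i][0], e2[j][0]
        if t1 == t2:
            out.append((t1, e1[i][1], e2[j][1]))
            i += 1
            j += 1
        elif t1 < t2:
            i += 1          # advance the stream that is behind
        else:
            j += 1
    return out

E1 = [(1, 'a'), (2, 'b'), (4, 'd'), (6, 'c')]
E2 = [(2, 'x'), (3, 'y'), (6, 'z')]
print(merge_join(E1, E2))  # [(2, 'b', 'x'), (6, 'c', 'z')]
```

Each input is scanned exactly once, in order, which is the locality property the post is after.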
Best regards
Taoufik -
SQL query to count number of VMs
In OVM 2.2 I could run a SQL query against the OVM database to count the number of VMs that I had created, for automated reporting purposes.
I've looked at the 3.1 OVM database and it looks like most of the data is kept in BLOB columns. So does anyone know how I might query the 3.1 database to count the number of VMs?
Thanks.

Hi,
Combine the relevant data from the three tables (using UNION ALL) in a sub-query, without the GROUP BY.
Your main query can select from the sub-query's result set, as if it were a table. Do the GROUP BY in the main query.
That is:
WITH u AS
(
SELECT TO_CHAR (emp_start, 'YYYY') AS yr FROM Adminstaff
UNION ALL
SELECT TO_CHAR (emp_start, 'YYYY') AS yr FROM Clerk
UNION ALL
SELECT TO_CHAR (emp_start, 'YYYY') AS yr FROM ITstaff
)
SELECT COUNT (*) AS cnt
, yr
FROM u
GROUP BY yr
ORDER BY cnt DESC
;

Since empno is the primary key, it's easier not to mention empno at all in the sub-query, and use "COUNT (*)" instead of "COUNT (empno)" in the main query. The results are the same.
It might be a little more efficient to GROUP BY in all three branches of the sub-query, and then to SUM the results in another GROUP BY in the main query. That means more code, of course, which I'll leave as an exercise for you.
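The UNION ALL plus outer GROUP BY pattern can be checked end to end (a sqlite3 sketch; the three staff tables and sample dates are invented, and strftime stands in for Oracle's TO_CHAR):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("Adminstaff", "Clerk", "ITstaff"):
    conn.execute("CREATE TABLE %s (empno INTEGER PRIMARY KEY, emp_start TEXT)" % t)
conn.executemany("INSERT INTO Adminstaff VALUES (?, ?)",
                 [(1, "2020-03-01"), (2, "2021-07-15")])
conn.executemany("INSERT INTO Clerk VALUES (?, ?)", [(1, "2020-11-20")])
conn.executemany("INSERT INTO ITstaff VALUES (?, ?)",
                 [(1, "2021-01-05"), (2, "2021-09-30")])

rows = conn.execute("""
    WITH u AS (
        SELECT strftime('%Y', emp_start) AS yr FROM Adminstaff
        UNION ALL
        SELECT strftime('%Y', emp_start) AS yr FROM Clerk
        UNION ALL
        SELECT strftime('%Y', emp_start) AS yr FROM ITstaff
    )
    SELECT COUNT(*) AS cnt, yr FROM u GROUP BY yr ORDER BY cnt DESC""").fetchall()
print(rows)  # [(3, '2021'), (2, '2020')]
```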
Why do you have three separate tables? I can understand that nobody else wants to be near the IT Staff, but it seems like lots of things (including this job) would be a lot simpler if all employees were all in one table, with a column to designate to which group (Adminstaff, Clerk or ITStaff) each one belongs. -
Hello All,
I have a problem regarding a SQL query.
I have one internal table which contains equnr and bis as fields. There are two database tables egerr and eastl. The structure for tables are as follows:
Fields for egerr:
equnr, bis, logiknr, in which the first two fields form the key.
Fields for eastl:
anlage, bis, logiknr, in which all fields form the primary key.
I want to select records from the internal table which do not have a record in eastl.
For reference, we can extract logiknr from egerr by using the internal table, and then use this logiknr to check for an entry in table eastl. But I want those equnr which are in the internal table but not mapped in eastl.
I want the most efficient solution for this as there are many records.
Thanks..... and if you have any queries then let me know.
Jignesh.

Hi,
As per your statement, you want the field equnr which exists in the internal table but not in eastl. Now for comparing with eastl you will need to check all three fields, as they form the key...
get data from egerr for matching equnr and bis in your internal table
i.e. assuming your table is itab and itab_logiknr contains a single field logiknr
select logiknr from egerr
into table itab_logiknr
for all entries in itab
where equnr eq itab-equnr
and bis eq itab-bis.
now from this data (itab_logiknr), compare the data with that in eastl for matching (or non-matching) values of logiknr
assuming data from eastl lies in itab_eastl
select anlage bis logiknr from eastl into itab_eastl
for all entries in itab_logiknr
where logiknr eq itab_logiknr-logiknr.
for non-matching entries you can select data from eastl which is not present in itab_eastl now....
(but mind you, since all fields of eastl form the key, you might not be getting the correct data; so if possible, study your scenario again and see if you can search the eastl table comparing all fields in the primary key)
try this....get back in case of any clarifications
hope it gives you some pointers...
regards,
PJ -
Need Help with Creating the SQl query
Hi,
SQL query gurus...
INFORMATION:
I have two tables, CURRENT and PREVIOUS (table defs below).
CURRENT:
Column1 - CURR_PARENT
Column2 - CURR_CHILD
Column3 - CURR_CHILD_ATTRIBUTE 1
Column4 - CURR_CHILD_ATTRIBUTE 2
Column5 - CURR_CHILD_ATTRIBUTE 3
PREVIOUS:
Column1 - PREV_PARENT
Column2 - PREV_CHILD
Column3 - PREV_CHILD_ATTRIBUTE 1
Column4 - PREV_CHILD_ATTRIBUTE 2
Column5 - PREV_CHILD_ATTRIBUTE 3
PROBLEM STATEMENT
Here the columns 3 to 5 are the attributes of the child. Let's assume that I have two loads: one today, which goes to the CURRENT table, and one yesterday, which went to the PREVIOUS table. Between these two loads there is a change in the value of column 3, 4 or 5, or all of them (it doesn't matter if one or all).
I want to determine which properties of the child have changed, with the help of the MOST efficient SQL query possible (PARENT+CHILD is the unique key). The database is of course Oracle.
Please help.
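To make the requirement concrete, the comparison can be stated compactly in plain code first (a Python sketch; rows are keyed by (parent, child) and the attribute values are made up). It produces one output row per changed attribute, which is the unpivoted shape the SQL below also aims for:

```python
# Rows keyed by (parent, child); values are the three attributes.
current = {("P1", "C1"): ("a", "b", "c"),
           ("P1", "C2"): ("a", "b", "c")}
previous = {("P1", "C1"): ("a", "X", "Y"),
            ("P1", "C2"): ("a", "b", "c")}

# One output row per changed attribute: (parent, child, attr_number, new_value).
changes = [(parent, child, n + 1, cur[n])
           for (parent, child), cur in current.items()
           if (parent, child) in previous
           for n in range(3)
           if cur[n] != previous[(parent, child)][n]]
print(changes)  # [('P1', 'C1', 2, 'b'), ('P1', 'C1', 3, 'c')]
```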
Regards,
Parag

Hi,
The last message was not posted by the same user_name that started the thread.
Please don't do that: it's confusing.
Earlier replies give you the information you want, with one row of output (maximum) per row in current_tbl. There may be 1, 2 or 3 changes on a row.
You just have to unpivot that data to get one row for every change, like this:
WITH single_row AS
(
SELECT c.curr_parent
, c.curr_child
, c.curr_child_attribute1
, c.curr_child_attribute2
, c.curr_child_attribute3
, DECODE (c.curr_child_attribute1, p.prev_child_attribute1, 0, 1) AS diff1
, DECODE (c.curr_child_attribute2, p.prev_child_attribute2, 0, 2) AS diff2
, DECODE (c.curr_child_attribute3, p.prev_child_attribute3, 0, 3) AS diff3
FROM current_tbl c
JOIN previous_tbl p ON c.curr_parent = p.prev_parent
AND c.curr_child = p.prev_child
WHERE c.curr_child_attribute1 != p.prev_child_attribute1
OR c.curr_child_attribute2 != p.prev_child_attribute2
OR c.curr_child_attribute3 != p.prev_child_attribute3
)
, cntr AS
(
SELECT LEVEL AS n
FROM dual
CONNECT BY LEVEL <= 3
)
SELECT s.curr_parent AS parent
, s.curr_child AS child
, CASE c.n
WHEN 1 THEN s.curr_child_attribute1
WHEN 2 THEN s.curr_child_attribute2
WHEN 3 THEN s.curr_child_attribute3
END AS attribute
, c.n AS attribute_value
FROM single_row s
JOIN cntr c ON c.n IN ( s.diff1
, s.diff2
, s.diff3
)
ORDER BY attribute_value
, parent
, child
; -
Having troubles passing values of Shuttle control to SQL Query of Report and Chart Region
Hello,
I am very new to APEX and need help for one of the Pa. I have a shuttle control on my page which populates Categories. Once the user selects Categories from this control, I wish to pass the values to the following SQL query:
select * from emp_class where category IN ( LIST OF VALUES FROM RIGHT SIDE SHUTTLE )
I tried various ways of doing this, including writing JavaScript which reads the shuttle value and converts it into the string below:
'Category1','Category2','Category3'. Then I set this value on a text box, and I was expecting that the trick below would work:
select * from emp_class where category IN (:TXT_VALUES)
I am sure this is not right way and hence its not working. Can you please guide me here with options?
Many Thanks,
Tush

b96402b4-56f7-44ba-8952-fb82a61eeb2c wrote:
Please update your forum profile with a real handle instead of "b96402b4-56f7-44ba-8952-fb82a61eeb2c".
I am very new to APEX and need help for one of the Pa.
Don't understand what this means. What is "Pa"?
select * from emp_class where category IN (:TXT_VALUES)
I am sure this is not right way and hence its not working. Can you please guide me here with options?
This is a common fallacy. In
select * from table where columnvalue in (7788, 7839, 7876)
(7788, 7839, 7876) is an expression list and the predicate is evaluated as a membership condition.
In
select * from table where columnvalue in :P1_X
:P1_X is a scalar string, incapable of containing multiple values.
In an APEX standard report, a PL/SQL function body returning an SQL query report source with lexical substitution can be used to produce a "varying IN-list":
return 'select * from table where columnvalue in (' || :P1_X || ')';
where P1_X contains fewer than 1000 values, has been sanitized for SQL injection, and string values are properly quoted.
Some people suggest the following approach, which will also work in APEX interactive reports:
select * from table where instr(':' || :P1_X || ':', ':' || columnvalue || ':') > 0
However this is non-performant as it eliminates the possibility of the optimizer using indexes or partition pruning in the execution plan.
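Outside of APEX's lexical substitution, the textbook-safe pattern in most client APIs is one bind placeholder per value, so the list stays parameterized and injection-proof (a sqlite3 sketch; emp_class and category are from the post, the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_class (empno INTEGER, category TEXT)")
conn.executemany("INSERT INTO emp_class VALUES (?, ?)",
                 [(1, "Category1"), (2, "Category2"), (3, "Category9")])

selected = ["Category1", "Category2"]        # e.g. the shuttle's right-hand values
placeholders = ",".join("?" * len(selected)) # "?,?" - one bind per value
sql = "SELECT empno FROM emp_class WHERE category IN (%s) ORDER BY empno" % placeholders
rows = [r[0] for r in conn.execute(sql, selected)]
print(rows)  # [1, 2]
```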
See varying elements in IN list on Ask Tom, and emulating string-to-table functionality using sql for efficient solutions. -
How to compare same SQL query performance in different DB servers.
We have Production and Validation Environment of Oracle11g DB on two Solaris OSs.
The H/W, DB, etc. configurations of the two Oracle DBs are almost the same in PROD and VAL.
But we detected a large SQL query performance difference between the PROD DB and the VAL DB for the same SQL query.
I would like to find and solve the cause of this situation.
How could I do that?
I plan to compare the SQL execution plans in the PROD and VAL DBs, and index fragmentation.
Before that, I thought I needed to keep the DB statistics in the same condition in the PROD and VAL DBs.
So, I plan to execute: alter system FLUSH BUFFER_CACHE;
But I am worried about bad effects of alter system FLUSH BUFFER_CACHE; on end users.
If we did alter system FLUSH BUFFER_CACHE; and got the execution plan of that SQL query at a time when end users are not using the system, would there be any large bad effect on end users after those operations?
Could you please let me know your recommendation for comparing SQL query performance?

Thank you.
I got AWR report for only VAL DB server but it looks strange.
Is there anything wrong in the DB, or in how I got the AWR report?
Host Name
Platform
CPUs
Cores
Sockets
Memory (GB)
xxxx
Solaris[tm] OE (64-bit)
.00
Snap Id
Snap Time
Sessions
Cursors/Session
Begin Snap:
xxxx
13-Apr-15 04:00:04
End Snap:
xxxx
14-Apr-15 04:00:22
Elapsed:
1,440.30 (mins)
DB Time:
0.00 (mins)
Report Summary
Cache Sizes
Begin
End
Buffer Cache:
M
M
Std Block Size:
K
Shared Pool Size:
0M
0M
Log Buffer:
K
Load Profile
Per Second
Per Transaction
Per Exec
Per Call
DB Time(s):
0.0
0.0
0.00
0.00
DB CPU(s):
0.0
0.0
0.00
0.00
Redo size:
Logical reads:
0.0
1.0
Block changes:
0.0
1.0
Physical reads:
0.0
1.0
Physical writes:
0.0
1.0
User calls:
0.0
1.0
Parses:
0.0
1.0
Hard parses:
W/A MB processed:
16.7
1,442,472.0
Logons:
Executes:
0.0
1.0
Rollbacks:
Transactions:
0.0
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:
Redo NoWait %:
Buffer Hit %:
In-memory Sort %:
Library Hit %:
96.69
Soft Parse %:
Execute to Parse %:
0.00
Latch Hit %:
Parse CPU to Parse Elapsd %:
% Non-Parse CPU:
Shared Pool Statistics | Begin | End
Memory Usage %: | |
% SQL with executions>1: | 34.82 | 48.31
% Memory for SQL w/exec>1: | 63.66 | 73.05
Top 5 Timed Foreground Events
Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
DB CPU | | 0 | | 100.00 |
Host CPU (CPUs: Cores: Sockets: )
Load Average Begin | Load Average End | %User | %System | %WIO | %Idle
(no values captured in the report)

Instance CPU
%Total CPU | %Busy CPU | %DB time waiting for CPU (Resource Manager)
(no values captured in the report)
Memory Statistics | Begin | End
Host Mem (MB): | |
SGA use (MB): | 46,336.0 | 46,336.0
PGA use (MB): | 713.6 | 662.6
% Host Mem used for SGA+PGA: | |
Time Model Statistics: No data exists for this section of the report.
Operating System Statistics: No data exists for this section of the report.
Operating System Statistics - Detail: No data exists for this section of the report.
Foreground Wait Class
s - second, ms - millisecond - 1000th of a second
ordered by wait time desc, waits desc
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Captured Time accounts for % of Total DB time .00 (s)
Total FG Wait Time: (s) DB CPU time: .00 (s)

Wait Class | Waits | %Time -outs | Total Wait Time (s) | Avg wait (ms) | %DB time
DB CPU | | | 0 | | 100.00
Foreground Wait Events: No data exists for this section of the report.
Background Wait Events
ordered by wait time desc, waits desc (idle events last)
Only events with Total Wait Time (s) >= .001 are shown
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0

Event | Waits | %Time -outs | Total Wait Time (s) | Avg wait (ms) | Waits /txn | % bg time
log file parallel write | 527,034 | 0 | 2,209 | 4 | 527,034.00 |
db file parallel write | 381,966 | 0 | 249 | 1 | 381,966.00 |
os thread startup | 2,650 | 0 | 151 | 57 | 2,650.00 |
latch: messages | 125,526 | 0 | 89 | 1 | 125,526.00 |
control file sequential read | 148,662 | 0 | 54 | 0 | 148,662.00 |
control file parallel write | 41,935 | 0 | 28 | 1 | 41,935.00 |
Log archive I/O | 5,070 | 0 | 14 | 3 | 5,070.00 |
Disk file operations I/O | 8,091 | 0 | 10 | 1 | 8,091.00 |
log file sequential read | 3,024 | 0 | 6 | 2 | 3,024.00 |
db file sequential read | 1,299 | 0 | 2 | 2 | 1,299.00 |
latch: shared pool | 722 | 0 | 1 | 1 | 722.00 |
enq: CF - contention | 4 | 0 | 1 | 208 | 4.00 |
reliable message | 1,316 | 0 | 1 | 1 | 1,316.00 |
log file sync | 71 | 0 | 1 | 9 | 71.00 |
enq: CR - block range reuse ckpt | 36 | 0 | 0 | 13 | 36.00 |
enq: JS - queue lock | 459 | 0 | 0 | 1 | 459.00 |
log file single write | 414 | 0 | 0 | 1 | 414.00 |
enq: PR - contention | 5 | 0 | 0 | 57 | 5.00 |
asynch descriptor resize | 67,076 | 100 | 0 | 0 | 67,076.00 |
LGWR wait for redo copy | 5,184 | 0 | 0 | 0 | 5,184.00 |
rdbms ipc reply | 1,234 | 0 | 0 | 0 | 1,234.00 |
ADR block file read | 384 | 0 | 0 | 0 | 384.00 |
SQL*Net message to client | 189,490 | 0 | 0 | 0 | 189,490.00 |
latch free | 559 | 0 | 0 | 0 | 559.00 |
db file scattered read | 17 | 0 | 0 | 6 | 17.00 |
resmgr:internal state change | 1 | 100 | 0 | 100 | 1.00 |
direct path read | 301 | 0 | 0 | 0 | 301.00 |
enq: RO - fast object reuse | 35 | 0 | 0 | 2 | 35.00 |
direct path write | 122 | 0 | 0 | 1 | 122.00 |
latch: cache buffers chains | 260 | 0 | 0 | 0 | 260.00 |
db file parallel read | 1 | 0 | 0 | 41 | 1.00 |
ADR file lock | 144 | 0 | 0 | 0 | 144.00 |
latch: redo writing | 55 | 0 | 0 | 1 | 55.00 |
ADR block file write | 120 | 0 | 0 | 0 | 120.00 |
wait list latch free | 2 | 0 | 0 | 10 | 2.00 |
latch: cache buffers lru chain | 44 | 0 | 0 | 0 | 44.00 |
buffer busy waits | 3 | 0 | 0 | 2 | 3.00 |
latch: call allocation | 57 | 0 | 0 | 0 | 57.00 |
SQL*Net more data to client | 55 | 0 | 0 | 0 | 55.00 |
ARCH wait for archivelog lock | 78 | 0 | 0 | 0 | 78.00 |
rdbms ipc message | 3,157,653 | 40 | 4,058,370 | 1285 | 3,157,653.00 |
Streams AQ: qmn slave idle wait | 11,826 | 0 | 172,828 | 14614 | 11,826.00 |
DIAG idle wait | 170,978 | 100 | 172,681 | 1010 | 170,978.00 |
dispatcher timer | 1,440 | 100 | 86,417 | 60012 | 1,440.00 |
Streams AQ: qmn coordinator idle wait | 6,479 | 48 | 86,413 | 13337 | 6,479.00 |
shared server idle wait | 2,879 | 100 | 86,401 | 30011 | 2,879.00 |
Space Manager: slave idle wait | 17,258 | 100 | 86,324 | 5002 | 17,258.00 |
pmon timer | 46,489 | 62 | 86,252 | 1855 | 46,489.00 |
smon timer | 361 | 66 | 86,145 | 238628 | 361.00 |
VKRM Idle | 1 | 0 | 14,401 | 14400820 | 1.00 |
SQL*Net message from client | 253,909 | 0 | 419 | 2 | 253,909.00 |
class slave wait | 379 | 0 | 0 | 0 | 379.00 |
Wait Event Histogram: No data exists for this section of the report.
Wait Event Histogram Detail (64 msec to 2 sec): No data exists for this section of the report.
Wait Event Histogram Detail (4 sec to 2 min): No data exists for this section of the report.
Wait Event Histogram Detail (4 min to 1 hr): No data exists for this section of the report.
Service Statistics: No data exists for this section of the report.
Service Wait Class Stats: No data exists for this section of the report.
SQL Statistics: No data exists for any of the SQL sections of this report (SQL ordered by Elapsed Time, CPU Time, User I/O Wait Time, Gets, Reads, Physical Reads (UnOptimized), Executions, Parse Calls, Sharable Memory, Version Count; Complete List of SQL Text).
Instance Activity Statistics
Instance Activity Stats: No data exists for this section of the report.
Instance Activity Stats - Absolute Values: No data exists for this section of the report.

Instance Activity Stats - Thread Activity
Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic | Total | per Hour
log switches (derived) | 69 | 2.87
IO Stats

IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024; other columns suffixed with K,M,G,T,P are in multiples of 1000
ordered by (Data Read + Write) desc

Function Name | Reads: Data | Reqs per sec | Data per sec | Writes: Data | Reqs per sec | Data per sec | Waits: Count | Avg Tm(ms)
Others | 28.8G | 20.55 | .340727 | 16.7G | 2.65 | .198442 | 1803K | 0.01
Direct Reads | 43.6G | 57.09 | .517021 | 411M | 0.59 | .004755 | 0 |
LGWR | 19M | 0.02 | .000219 | 41.9G | 21.87 | .496493 | 2760 | 0.08
Direct Writes | 16M | 0.00 | .000185 | 8.9G | 1.77 | .105927 | 0 |
DBWR | 0M | 0.00 | 0M | 6.7G | 4.42 | .079670 | 0 |
Buffer Cache Reads | 3.1G | 3.67 | .037318 | 0M | 0.00 | 0M | 260.1K | 3.96
TOTAL: | 75.6G | 81.33 | .895473 | 74.7G | 31.31 | .885290 | 2065.8K | 0.51
IOStat by Filetype summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024; other columns suffixed with K,M,G,T,P are in multiples of 1000
Small Read and Large Read are average service times, in milliseconds
Ordered by (Data Read + Write) desc

Filetype Name | Reads: Data | Reqs per sec | Data per sec | Writes: Data | Reqs per sec | Data per sec | Small Read | Large Read
Data File | 53.2G | 78.33 | .630701 | 8.9G | 7.04 | .105197 | 0.37 | 21.51
Log File | 13.9G | 0.18 | .164213 | 41.9G | 21.85 | .496123 | 0.02 | 2.93
Archive Log | 0M | 0.00 | 0M | 13.9G | 0.16 | .164213 | |
Temp File | 5.6G | 0.67 | .066213 | 8.1G | 0.80 | .096496 | 5.33 | 3713.27
Control File | 2.9G | 2.16 | .034333 | 2G | 1.46 | .023247 | 0.05 | 19.98 -
Concatenate results SQL query and CASE use Report Builder Reporting Services
I need to concatenate the results of a SQL query that uses CASE. The query is listed below. I do not need PERMITSUBTYPE, but I do need to concatenate the results for PERMITTYPE.
When I tried deleting the PERMITSUBTYPE part of the query, it no longer ran correctly. Please see the query and diagram below. Any help is appreciated.
select PERMIT_NO
,(case when
ISNULL(PERMITTYPE,'') = ''
then 'Unassigned'
else (select LTRIM(RTRIM(PERMITTYPE)))
END) AS PERMITTYPE
,(case when
ISNULL(PERMITSUBTYPE,'') = ''
then 'Unassigned'
else (select LTRIM(RTRIM(PERMITSUBTYPE)))
END) AS PERMITSUBTYPE
,ISSUED
,APPLIED
,STATUS
,SITE_ADDR
,SITE_APN
,SITE_SUBDIVISION
,OWNER_NAME
,CONTRACTOR_NAME
,ISNULL(JOBVALUE,0) AS JOBVALUE
,FEES_CHARGED
,FEES_PAID
,BLDG_SF
from Permit_Main
where ISSUED between @FromDate and @ToDate

Hi KittyCat101,
As I understand it, you used a CASE WHEN statement in the query and do not need to display PERMITSUBTYPE in the report, but when you tried to delete PERMITSUBTYPE from the query, it no longer ran correctly. To make troubleshooting more efficient, I need to ask several questions:
You said "I tried deleting the permitsubtype query and it would not run correctly." As far as we can see, deleting PERMITSUBTYPE from the query you posted should have no effect, so could you please provide the complete SQL query for the report?
Could you please provide detailed information about the report? I would appreciate it if you could provide sample data and a screenshot of the report.
Please provide some more detailed information about your requirements.
This may be a lot of information to ask for at one time. However, collecting it now will help us move more quickly toward a solution.
Thanks,
Wendy Fu
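As a side note, the nested "(select LTRIM(RTRIM(...)))" subqueries in the posted statement are unnecessary; the functions can be called directly inside the CASE. A sketch of the PERMITTYPE expression, using the Permit_Main columns from the original post:

```sql
-- Same NULL/blank handling as the original, without the nested SELECTs.
SELECT PERMIT_NO,
       CASE WHEN ISNULL(PERMITTYPE, '') = '' THEN 'Unassigned'
            ELSE LTRIM(RTRIM(PERMITTYPE))
       END AS PERMITTYPE
FROM Permit_Main
WHERE ISSUED BETWEEN @FromDate AND @ToDate;
```

The PERMITSUBTYPE branch can be dropped from the select list the same way, since nothing else in the statement references it.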