Comparing 2 queries
SQL> desc hr_pay_slip_mst
Name Null? Type
SEQ_NO NOT NULL NUMBER
EMPNO VARCHAR2(5)
DATED DATE
ENT_BY VARCHAR2(3)
ENT_DT DATE
CHQ_NO VARCHAR2(15)
SQL> desc hr_pay_slip_dlt
ERROR:
ORA-04043: object hr_pay_slip_dlt does not exist
SQL> desc hr_pay_slip_dtl
Name Null? Type
SEQ_NO NUMBER
PAY_CODE VARCHAR2(3)
AMOUNT NUMBER(12,3)
ENT_BY VARCHAR2(3)
ENT_DT DATE
SQL> desc hr_pay_heads_setup
Name Null? Type
PAY_CODE NOT NULL VARCHAR2(3)
PAY_HEAD_NAME VARCHAR2(52)
PAY_HEAD_ABRV VARCHAR2(10)
ACCODE VARCHAR2(7)
PAY_TYPE VARCHAR2(1)
STATUS VARCHAR2(1)
PERCENT_OF_BASIC NUMBER(4,2)
ENT_BY VARCHAR2(3)
ENT_DT DATE
SQL> desc emp
Name Null? Type
EMPNO NOT NULL VARCHAR2(5)
ENAME NOT NULL VARCHAR2(100)
ENT_DT DATE
ENT_BY VARCHAR2(3)
STATUS VARCHAR2(1)

My query:
SELECT ALL a.ENAME, a.bank_cd, c.DATED, BNK_ACCT_NUMBER, sum(decode(pay_type,'E',amount)) - NVL(sum(decode(pay_type,'D',amount)),0) AS Amount, grp_cd, desig_code
FROM EMP a, HR_PAY_SLIP_MST c, HR_PAY_SLIP_DTL d, HR_PAY_HEADS_SETUP b
WHERE ((c.EMPNO = a.EMPNO)
AND (d.SEQ_NO = c.SEQ_NO)
AND (d.PAY_CODE = b.PAY_CODE))
and dated='30-jun-2010'
group by a.ENAME, a.BANK_CD, c.DATED,BNK_ACCT_NUMBER,grp_cd,desig_code
order by grp_cd, desig_code

With this query I get each employee's current salary for the month, i.e. June 2010.
Now I want to compare each employee's salary with his last month's salary, and show a row only if there is a difference; otherwise do not show it.
I want this result:
Ename Amount(current) Amount(Last Month)
xxxx 10000 12000
abc 7908 12345
and so on
Hi,
you can also do something like this:
WITH salmonth AS
     (SELECT a.empno,
             a.ename,
             a.bank_cd,
             c.dated,
             bnk_acct_number,
             SUM(DECODE(pay_type, 'E', amount))
               - NVL(SUM(DECODE(pay_type, 'D', amount)), 0) AS amount,
             grp_cd,
             desig_code
        FROM emp a,
             hr_pay_slip_mst c,
             hr_pay_slip_dtl d,
             hr_pay_heads_setup b
       WHERE c.empno = a.empno
         AND d.seq_no = c.seq_no
         AND d.pay_code = b.pay_code
       GROUP BY a.empno,
                a.ename,
                a.bank_cd,
                c.dated,
                bnk_acct_number,
                grp_cd,
                desig_code)
SELECT sm1.ename,
       sm1.bank_cd,
       sm1.bnk_acct_number,
       sm1.grp_cd,
       sm1.desig_code,
       sm1.amount AS amount_current,
       sm2.amount AS amount_last_month
  FROM salmonth sm1,
       salmonth sm2
 WHERE sm1.dated = TO_DATE('30-jun-2010', 'DD-Mon-YYYY')
   AND sm2.dated = ADD_MONTHS(sm1.dated, -1)
   AND sm2.empno = sm1.empno
   AND sm2.amount != sm1.amount
 ORDER BY sm1.grp_cd,
          sm1.desig_code
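An untested alternative sketch: with the same salmonth view as above, and assuming each employee has at most one pay-slip row per month, the self-join can be replaced by the LAG analytic function, which reads the previous month's amount from the prior row for the same employee:

```sql
SELECT ename,
       amount      AS amount_current,
       prev_amount AS amount_last_month
  FROM (SELECT empno,
               ename,
               dated,
               amount,
               -- previous row for the same employee, ordered by month
               LAG(amount) OVER (PARTITION BY empno ORDER BY dated) AS prev_amount
          FROM salmonth)
 WHERE dated = TO_DATE('30-jun-2010', 'DD-Mon-YYYY')
   AND prev_amount IS NOT NULL
   AND prev_amount != amount
```

One caveat with this sketch: LAG reads the immediately preceding row, so if a month is missing for an employee the "previous" amount may come from an older month; the ADD_MONTHS self-join is stricter about comparing exactly adjacent months.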
Similar Messages
-
Problem of full table scan on a partitioned table
hi all
There is a table called "si_sync_operation" that has 171040 rows.
I partitioned that table into a new table called "si_sync_operation_par" with 7 partitions.
I issued the following statements:
SELECT * FROM si_sync_operation_par;
SELECT * FROM si_sync_operation;
The explain plan shows that the cost of the first statement is 1626 and that of the second is 1810.
The "cost" of a full table scan on the partitioned table is lower than that of the non-partitioned table. That's fine.
But the "Bytes" of the full table scan on the partitioned table is 5761288680, while that of the non-partitioned table is 263743680.
Why does a full table scan on the partitioned table access more bytes than on the non-partitioned table?
And how can a statement that accesses more bytes end up with a lower cost?
Thank you very much.

As Hemant mentioned, the Bytes reported are an approximate number of bytes. As far as Cost is concerned, according to Tom Kyte it is just an internal number and we should not compare queries by their cost (search asktom.oracle.com for more information).
SQL> drop table non_part purge;
Table dropped.
SQL> drop table part purge;
Table dropped.
SQL>
SQL> CREATE TABLE non_part
2 (id NUMBER(5),
3 dt DATE);
Table created.
SQL>
SQL> CREATE TABLE part
2 (id NUMBER(5),
3 dt DATE)
4 PARTITION BY RANGE(dt)
5 (
6 PARTITION part1_jan2008 VALUES LESS THAN(TO_DATE('01/02/2008','DD/MM/YYYY')),
7 PARTITION part2_feb2008 VALUES LESS THAN(TO_DATE('01/03/2008','DD/MM/YYYY')),
8 PARTITION part3_mar2008 VALUES LESS THAN(TO_DATE('01/04/2008','DD/MM/YYYY')),
9 PARTITION part4_apr2008 VALUES LESS THAN(TO_DATE('01/05/2008','DD/MM/YYYY')),
10 PARTITION part5_may2008 VALUES LESS THAN(TO_DATE('01/06/2008','DD/MM/YYYY'))
11 );
Table created.
SQL>
SQL>
SQL> insert into non_part select rownum, trunc(sysdate) - rownum from dual connect by level <= 140;
140 rows created.
Execution Plan
Plan hash value: 1731520519
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | COUNT | | | | |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(LEVEL<=140)
SQL>
SQL> insert into part select * from non_part;
140 rows created.
Execution Plan
Plan hash value: 1654070669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 140 | 3080 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| NON_PART | 140 | 3080 | 3 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement
SQL>
SQL> commit;
Commit complete.
SQL>
SQL> set line 10000
SQL> set autotrace traceonly exp
SQL> select * from non_part;
Execution Plan
Plan hash value: 1654070669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 140 | 3080 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| NON_PART | 140 | 3080 | 3 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement
SQL> select * from part;
Execution Plan
Plan hash value: 3392317243
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 140 | 3080 | 9 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE ALL| | 140 | 3080 | 9 (0)| 00:00:01 | 1 | 5 |
| 2 | TABLE ACCESS FULL | PART | 140 | 3080 | 9 (0)| 00:00:01 | 1 | 5 |
Note
- dynamic sampling used for this statement
SQL>
SQL> exec dbms_stats.gather_table_stats(user, 'non_part');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats(user, 'part');
PL/SQL procedure successfully completed.
SQL>
SQL>
SQL> select * from non_part;
Execution Plan
Plan hash value: 1654070669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 140 | 1540 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| NON_PART | 140 | 1540 | 3 (0)| 00:00:01 |
SQL> select * from part;
Execution Plan
Plan hash value: 3392317243
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 140 | 1540 | 9 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE ALL| | 140 | 1540 | 9 (0)| 00:00:01 | 1 | 5 |
| 2 | TABLE ACCESS FULL | PART | 140 | 1540 | 9 (0)| 00:00:01 | 1 | 5 |
SQL>

After analyzing the tables, notice that the Bytes column has changed value. -
Multiple Query / Fact Table question
I'm a bit new to obiee, and am trying to accomplish something that should be pretty easy.
Here is my basic table structure:
Month table (dimension)
Month Key (primary Key)
aTable (fact table)
MonthKey
NameA
totalA
bTable (fact table)
MonthKey
totalB
The goal of the report is (4 columns):
MonthKey, NameA, and totalA.
The final Column should be TotalA / TotalB
I have 2 subject areas, but since they are related I can use them together.
The problem I'm running into is that TotalB basically becomes TotalA (which it shouldn't) because of the inclusion of "NameA" from aTable.
Any ideas? Is there a way to compare queries like this, when they are not necessarily related?

Ok, the behavior is different, but the EVENT_NO column always stores the value 1, so the final result in the report is the same.
I'm interested in knowing the total number of events by month, department and event type.
Each record in the fact table represents only one event, so I can query with SUM or COUNT because the result is the same, but ... what about the performance?
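Since EVENT_NO is always 1, the two aggregates are equivalent here. A sketch with hypothetical table and column names (fact_events, month_key, etc.):

```sql
-- Equivalent results when event_no is always 1 in every fact row:
SELECT month_key, department, event_type, SUM(event_no) AS total_events
  FROM fact_events
 GROUP BY month_key, department, event_type;

SELECT month_key, department, event_type, COUNT(*) AS total_events
  FROM fact_events
 GROUP BY month_key, department, event_type;
```

Performance should also be essentially identical: both aggregates are computed in the same pass over the grouped rows, so the scan dominates either way.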
Thanks -
Comparing Top-n queries for performance
I would like to know which version of the query is performance efficient.
Query1: Using analytic functions:
select * from (
select pv.vendor_name, aia.invoice_num
, aia.invoice_date, aia.invoice_amount
,pvs.vendor_site_code,att.name terms
,row_number() OVER (ORDER BY aia.invoice_id) AS row_num
from ap_invoices_all aia, po_vendors pv
,po_vendor_sites_all pvs,ap_terms_tl att
where aia.vendor_id = pv.vendor_id
and pv.vendor_id= pvs.vendor_id
and pvs.vendor_site_id = aia.vendor_site_id
and pvs.terms_id=att.term_id
and att.language=userenv('LANG')
and aia.invoice_date between to_date('01-Jan-96','DD-Mon-RR')
and to_date('31-Dec-09','DD-Mon-RR')
)
where row_num<1000
Query2: Using Sort and Filter:
select * from (
select pv.vendor_name, aia.invoice_num
, aia.invoice_date, aia.invoice_amount
,pvs.vendor_site_code,att.name terms
from ap_invoices_all aia, po_vendors pv
,po_vendor_sites_all pvs,ap_terms_tl att
where aia.vendor_id = pv.vendor_id
and pv.vendor_id= pvs.vendor_id
and pvs.vendor_site_id = aia.vendor_site_id
and pvs.terms_id=att.term_id
and att.language=userenv('LANG')
and aia.invoice_date between to_date('01-Jan-96','DD-Mon-RR')
and to_date('31-Dec-09','DD-Mon-RR')
order by aia.invoice_id
)
where rownum<1000
I have looked at the explain plans and I don't see any cost differences, but there is a difference in the way the operation is performed.
I would like to know which one is more efficient: analytic functions or the other way.
Thanks.

Read Hoek's response and the link he posted carefully.
If you want to know which of two queries is more efficient you will have to do some work: get execution plans and run metrics to compare the two. The fact that the operations are different suggests one may be more efficient than the other, but you need the run metrics to tell.
Don't rely on cost. The cost, bytes, and other execution plan metrics are estimates and may not reflect actual performance.
The metrics I use to measure queries are run time, disk reads, memory reads, and CPU. Run time alone is not enough; consistently lower run time is good, but the other metrics help explain why. I get the disk reads and memory reads from AUTOTRACE (easier) or tkprof (more work), and CPU usage can come from V$SQL (if you can find the query you're running; sometimes it's hard) or again from tkprof reports. -
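A sketch of gathering such run metrics in SQL*Plus with AUTOTRACE (the query text here is only a placeholder):

```sql
-- Suppress row output but report execution statistics
SET AUTOTRACE TRACEONLY STATISTICS

-- Run each candidate query and compare the reported figures:
--   "consistent gets"  -> memory reads
--   "physical reads"   -> disk reads
SELECT ...;   -- candidate query 1
SELECT ...;   -- candidate query 2

SET AUTOTRACE OFF
```

Run each candidate several times so the buffer cache is warm before comparing the numbers.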
How to compare the query definiation of two or more queries?
HI Folks.
I am attempting to determine if there is a method by which I can compare the query definitions of two or more queries, or see the differences between queries.
Transaction RSRTQ only gives the query details of a single query at a time and does not do a comparison.
Please let me know a method to do this.
Thanks
Uday

There is no straight way of doing this:
I normally open the definitions in 2 sessions and compare them manually.
Hope this helps -
Comparing 2 SQL queries in TOAD
Hi,
Is there any way to compare 2 SQL queries using TOAD?
I.e. if there are 2 SQL queries and I want to know where exactly the difference between them is, is it possible to find out using TOAD?
Thanks in advance
--kumar

Oracle has more than enough on its plate dealing with its existing product line and integrating the product suites.
There is no real need to use third-party tools.
Re: Oracle Third Party Tools and Oracle Database
Adith -
Compare two queries of two different system.
Hello all,
I need to certify some queries; the process we follow is as below.
1. The query will be created in the Production system, and we will transport it back to the Development server.
2. Compare the query (Characteristics, Key Figures and Definition). In other words, I need to compare and validate each and every aspect of the query between Development and Production.
3. I need to change the query if I find any difference between the Prod and Dev systems.
4. Transport to Production.
My problem is that I'm comparing manually by opening RRMX in each system and checking each variable. This is time consuming, and as it is a manual process there is a chance of missing something important. So I would like to know if there is any easier way to compare the queries between the Prod and Dev servers.
I hope requirement is clear to you or else just let me know your concerns and thoughts on this.
Thanks in advance,
Umashankar

Umashankar,
I don't think there is an easy way to do exactly what you are asking for. However, you can do a basic-level comparison of the number of query elements, the variables that exist, etc. This can be done using the table RSZCOMPDIR, which holds all the query elements of a query. A direct one-to-one comparison may not be possible, though, since the query elements are identified by GUIDs, which will usually be different across systems. -
Comparing two queries with VBA macro
Hi,
I have a workbook with 2 sheets. Each sheet has a BEx query (so 2 queries).
For every column in the first query, I need to check whether it is present in the second query, and generate a list of the NOT PRESENT values (or simply mark the values that are not present).
I tried with Excel search functions by column, but each query has more than 30,000 rows, so it takes a long time.
I think I have to use an Excel macro. I have been looking in the forum, but my doubt is that I have to execute it once the two queries have been refreshed.
How can I know when the two queries have been refreshed?
Is there any other way to do it?
Any idea of how to reference one query from the other?
I think the code should work this way:
for every row in the first column of the 1st query
look for it in every row of the first column of the 2nd query
mark it in the 1st query if not present.
If I put the macro in the 1st query, how can I reference the 2nd one?

Hi Oscar, there are several ways to do this.
The easiest way would be:
1. in the SAPBEXonRefresh subroutine (which you will find in the SAPBEX Module attached to your query-containing workbook), add code that looks something like this:
Set ws = resultArea.parent
if ws is nothing then exit sub
resultArea.name = ws.codename & "_results"
ws.Range("D2") = Date
if queryID = "SAPBEXq0002" then Call CompareQueries(queryID)
'make the query ID that of the second query
This will create named ranges in Excel so that you can easily locate the result tables later, and record the date that each query was refreshed. Then, after the second query is refreshed, it will call your query comparison routine.
2. In the comparison routine, you will need something like this:
set ws1 = Query1 'assuming that you have set the code name of the first query sheet as Query1
set ws2 = Query2 'assuming that you have set the code name of the second query sheet as Query2
'set ranges for the two result tables (the named ranges created in SAPBEXonRefresh)
set Range1 = Range(ws1.CodeName & "_results")
set Range2 = Range(ws2.CodeName & "_results")
'locate first and last Row and Column for result tables
firstRow1 = Range1.cells(1).row
firstCol1 = Range1.cells(1).column
numCells = Range1.cells.count
lastRow1 = Range1.cells(numCells).row
lastCol1 = Range1.cells(numCells).column
'do same for Range2
'locate the columns you want to compare
searchCol = 0
for j = firstCol1 to lastCol1
if ws1.cells(firstRow1, j) like "some text here" then
searchCol = j
exit for
end if
next j
if searchCol = 0 then
msgbox "Did not find ""some text here"".", vbCritical
exit sub
Else:
set SearchRange = ws1.cells(1, searchCol).entireColumn
end if
for j = firstCol2 to lastCol2
if ws2.cells(firstRow2, j) like "some text here" then
searchCol = j
exit for
end if
next j
'do the search
for i = firstRow2 to lastRow2
searchFor = ws2.cells(i, searchCol)
matchRow = 0
on error resume next
matchRow = Application.WorksheetFunction.Match(searchFor, SearchRange, 0)
on error goto 0
if matchRow > 0 then
ws2.cells(i, lastCol2 + 1) = "match found on row " & matchRow
Else:
ws2.cells(i, lastCol2 + 1) = "NOT PRESENT"
End if
next i
- Pete -
Comparing two queries yields one result too many
I have a problem that I'm pretty sure is resident in the
structure of a loop, but I'm not quite sure how to fix it.
All of this is being done within a cfc. The cfc calls the
first method for Query1, then calls the second method for Query2.
Query1 has 173 records, Query2 has 117 records. Technically the
difference should be 56 records.
However, the result of myquery (below) is giving me 57
records. And every one of them is a real record. Code as follows:
<CFSET myquery = QueryNew("var1, var2, var3, var4, var5")>
<CFLOOP INDEX="i" FROM="1" TO="#Query1.recordcount#">
<CFQUERY NAME="checkJob" DBTYPE="query">
SELECT var1
FROM Query2
WHERE var1 = <cfqueryparam cfsqltype="cf_sql_varchar" value="#Query1.var1#">
</CFQUERY>
<CFIF checkJob.recordcount lte 0>
<cfset newRow = QueryAddRow(myQuery, 1)>
<cfset temp = QuerySetCell(myQuery, "var1", Query1.var1)>
<cfset temp = QuerySetCell(myQuery, "var2", Query1.var2)>
<cfset temp = QuerySetCell(myQuery, "var3", Query1.var3)>
<cfset temp = QuerySetCell(myQuery, "var4", Query1.var4)>
<cfset temp = QuerySetCell(myQuery, "var5", Query1.var5)>
</CFIF>
</CFLOOP>
<CFRETURN myQuery>
So if all is done correctly, I should be getting the results
from Query1 that are NOT in Query2.
It's *almost* right.
Since var1 exists in all of the records returned by myquery,
one presumes that there's an extra record being returned that *IS*
in Query2.
I'm not sure why, though.
The resultant screen needs to print out all of the variables
from query1 that do not exist in query2. Hence, myquery.
Anyone have a better recommendation on how to fix this
problem? I feel like a goober for even asking it, but it's been
annoying me all afternoon.
Rizados

Simon, I would appreciate not being called lazy and yet overly complicated in the same sentence. Ignorant, perhaps; overly complicated, very likely; but lazy isn't the word I'd use.
When it comes to MSSQL, I was actually unaware I could use a
query of queries approximation without writing the whole thing out.
Thank you so much for correcting my ignorance. My apologies if this
isn't the type of database I've been using for the last few years,
and if my familiarity with the database is not as great as yours.
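For reference, the query-of-queries mentioned above would replace the whole loop with a single statement. A sketch, reusing the column and query names from the earlier code, to be placed inside a <CFQUERY DBTYPE="query"> tag:

```sql
SELECT var1, var2, var3, var4, var5
FROM Query1
WHERE var1 NOT IN (SELECT var1 FROM Query2)
```

This returns the Query1 rows whose var1 has no match in Query2 in one pass, instead of issuing one lookup query per row.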
I have been trying to componentize this application where
applicable, and the queries I speak of were already componentized
prior to me coming into this. Both the primary and secondary
queries are being used by multiple applications on the same server,
and multiple times in the same page with different parameters (the
same page is being used several times). This third query is
obviously the difference between the two. If the second query has
to change at any time, then changing it only once within a
component makes more sense than having to find it multiple times
within the code to change it again, since the business rules on
said query have had to change a few times before. Better to have it
in one place than to change it in multiple places and risk having
one be changed while the others are not.
Lazy has nothing to do with it so much as accuracy. Multiple
instances of the same code means having to find it multiple times
when the code needs to change, and risking that you may miss one.
If they allowed me to use a view to do it as recommended by
cf_dev2, I would do so, and it would make my life a lot less
complicated. However, that option is not available to me, therefore
I have to go the "more complicated" route. -
Benefits of SAP Business Warehouse Reporting compared with the Query-tools?
Hello experts,
I've been creating reports with the SAP Query tools, but have been faced with a few problems. First of all, the possibilities for joining tables are restricted, and thus I cannot include all the data required for the report.
One solution for this would have been to create separate queries and join the data together in Excel. The problem with this is that there exists no common field that would make sense as a selection criterion. If the selection criterion differs between queries, there is the danger of combining unrelated data, which would eventually result in distorted data.
So I was wondering, could anyone briefly tell me whether SAP Business Warehouse reporting would solve these problems? And what other benefits would it provide compared with SAP Query? I urgently need to know whether it would be a beneficial investment for the company, since I haven't found solutions for the problems encountered in the creation of reports.
Thank you in advance for you help!
Maria

The answers are yes, and thousands of companies have gone down this route.
Putting in BW is a strategic aim of the company, not something to be decided purely through discussion in a BI forum such as this.
The costs of implementation and hardware will no doubt make your eyes water.
To be quite honest, SAP BI is a "no brainer", as most of the new e-SOA and new R3 modules rely on BW for their reporting needs. -
How to ignore blank/null key figure value in BI Queries
Reports on Multiprovider - we see some cells of a Key figure as blanks. These blanks are interpreted as zeros by the system and calculated accordingly resulting in incorrect values. As per our requirement, we need a count of all hard/real zeros only, not the blanks. For example, if there are 10 rows of which 6 are real zeros and 4 are blanks - our count should be 6 and not 10.
How to ignore the blanks in BEx queries please?
Thanks for your help.
Upender

Rakesh,
It is not possible to find a pattern because the report is on a MultiProvider with 2 InfoProviders: a Purchasing Documents DSO and a Material Movements InfoCube.
Every Purchasing Document has several materials associated with it. These materials are compared with the materials in Material Movements. Not all materials in a Purchasing Document are found in Material Movements. For those materials that are found, the Quantity is obtained and the correct value shows up: if the quantity is zero, it shows in reports as zero. If the material is not found in Material Movements, then Quantity shows up as a blank value.
My requirement is to ignore such blank quantities and not count them. Only quantities with 0 values should be counted. Currently both blanks and zero values are counted, showing an inflated count.
Thanks,
Upender -
Two similar queries and different result.
Hi! I have a problem. With this query:
with
sc as (select * from nc_objects where object_type_id = 9122942307013185081 and project_id=9062345122013900768),
cid as (select sccid.value AS CIRCUIT_ID,sc.description AS DESCRIPTION
from sc, nc_params sccid
where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590),
caloc as ( select
(select value from nc_params sccid where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590) as CIRCUIT_ID,
(select sl.name from nc_objects sl join nc_references scr on sl.object_id = scr.reference
where scr.attr_id = 3090562190013347600 and scr.object_id = sc.object_id ) as ALOCATION
from sc),
cbloc as ( select
(select value from nc_params sccid where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590) as CIRCUIT_ID,
(select sl.name from nc_objects sl join nc_references scr on sl.object_id = scr.reference
where scr.attr_id = 3090562190013347601 and scr.object_id = sc.object_id ) as BLOCATION
from sc)
select cid.CIRCUIT_ID,cid.DESCRIPTION,ALOCATION,BLOCATION from (
cid
join caloc on cid.CIRCUIT_ID = caloc.CIRCUIT_ID and ALOCATION is not null
join cbloc on cid.CIRCUIT_ID = cbloc.CIRCUIT_ID and BLOCATION is not null
)
it returns, and all's OK!
ID desc aloc bloc
101 TEST1 AHAS AGUS
102 TEST2 AKRE AMJY
103 TEST3 AMJS ASSE
109 TEST9 BAIA AKIB
5 (null) WELA AGUS
We have "sc as (select * from nc_objects where object_type_id = 9122942307013185081 and project_id=9062345122013900768)"
and identical subquery on caloc and cbloc
"select value from nc_params sccid where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590"
If i change query on
with
sc as (select * from nc_objects where object_type_id = 9122942307013185081 and project_id=9062345122013900768),
cid as (select sccid.value AS CIRCUIT_ID,sc.description AS DESCRIPTION
from sc, nc_params sccid
where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590),
caloc as ( select
(select CIRCUIT_ID from cid) as CIRCUIT_ID,
(select sl.name from nc_objects sl join nc_references scr on sl.object_id = scr.reference
where scr.attr_id = 3090562190013347600 and scr.object_id = sc.object_id ) as ALOCATION
from sc),
cbloc as ( select
(select value from nc_params sccid where sccid.object_id = sc.object_id and sccid.attr_id = 9122948792013185590) as CIRCUIT_ID,
(select sl.name from nc_objects sl join nc_references scr on sl.object_id = scr.reference
where scr.attr_id = 3090562190013347601 and scr.object_id = sc.object_id ) as BLOCATION
from sc)
select cid.CIRCUIT_ID,cid.DESCRIPTION,ALOCATION,BLOCATION from (
cid
join caloc on cid.CIRCUIT_ID = caloc.CIRCUIT_ID and ALOCATION is not null
join cbloc on cid.CIRCUIT_ID = cbloc.CIRCUIT_ID and BLOCATION is not null
)
query result will be:
ORA-01427: single-row subquery returns more than one row
01427. 00000 - "single-row subquery returns more than one row"
Can you explain why this is so?
Edited by: user12031606 on 07.05.2010 2:31

Hi,
Welcome to the forum!
Whenever you post code, format it to show the extent of sub-queries, and the clauses in each one.
Type these 6 characters:
{code}
(all small letters, inside curly brackets) before and after each section of formatted text; if you don't, this site will compress the spaces.
It also helps if you reduce your query as much as possible. For example, I think you're only asking about the sub-query called caloc, so just post caloc as if it were the entire query:

select ( select CIRCUIT_ID
from cid
) as CIRCUIT_ID,
( select sl.name
from nc_objects sl
join nc_references scr on sl.object_id = scr.reference
where scr.attr_id = 3090562190013347600
and scr.object_id = sc.object_id
) as ALOCATION
from sc
This makes it much clearer that the query will produce 2 columns, called circuit_id and alocation.
Compare the query above with the query below:

SELECT object_id,
'Okay'
FROM sc
The basic structure is the same: both queries produce two columns, and both queries produce one row of output for every row that is in the sc table.
The only difference is the two items in the SELECT clause.
The second query has a column from the table as its first column, and a literal for its second column; those are just two of the kinds of things you can have in a SELECT clause. Another thing you can have there is a scalar sub-query: a complete query enclosed in parentheses that produces exactly one column and at most one row. If a scalar sub-query produces more than one row, you get the run-time error "ORA-01427: single-row subquery returns more than one row", as you did.
A scalar sub-query always takes the place of a single value: "scalar" means "having only one value". In the first example above, the main query is supposed to produce one row of output for every row in sc. How can it do that if some of the columns themselves contain multiple rows?
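To illustrate with hypothetical tables t1(id) and t2(id, val), where t2 has several rows per id:

```sql
-- Works: the correlated sub-query is aggregated, so it returns
-- at most one row (one value) per outer row.
SELECT t1.id,
       (SELECT MAX(t2.val) FROM t2 WHERE t2.id = t1.id) AS max_val
  FROM t1;

-- Raises ORA-01427: the sub-query can return several rows,
-- but it stands where a single value is required.
SELECT t1.id,
       (SELECT t2.val FROM t2 WHERE t2.id = t1.id) AS val
  FROM t1;
```

This is the same situation as the caloc sub-query: `(select CIRCUIT_ID from cid)` is uncorrelated, so it returns one row per row of cid, not one value.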
I don't know what your tables are like, or what output you want to get from those tables.
If you'd like help getting certain results from your tables, then post CREATE TABLE and INSERT statements for a little sample data, along with the results you want to get from that sample data. A scalar sub-query may help in getting those results, or it may not. -
Why is it only possible to run queries on a Distributed cache?
I found by experiementation that if you put a NearCache (only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it will throw a runtime exception saying that the query operations are not supported on the ReplicatedCache.
I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations (such as in my application) where it is useful to be able to run a local query on the cache to take advantage of the index APIs, etc., for your searches.

Kris,
I believe the only APIs that are currently not supported for ReplicatedCache(s) are "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
The reason the index functionality was "pushed" out of 2.x timeframe was an assumption that ReplicatedCache would hold a not-too-big number of entries and since all the data is "local" to the querying JVM the performance of non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in our future releases.
Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by CacheFactory.getReplicatedCache method) using the NearCache construct.
Gene -
Database Performance Is Very Poor On IBM AIX Compared To Windows NT
Hi,
Recently we migrated our Oracle 10g database from a Windows NT box to an IBM AIX box. Unfortunately, database performance has gone down compared to the Windows NT environment. For a week now we have been working to pinpoint the problem. We have altered the init.ora parameters to observe the database behaviour, but no improvement has been observed.
Below are the Init.Ora Parameters ,
Name Value Description
tracefile_identifier null trace file custom identifier
lock_name_space null lock name space used for generating lock names for standby/clone database
processes 395 user processes
sessions 439 user and system sessions
timed_statistics TRUE maintain internal timing statistics
timed_os_statistics 0 internal os statistic gathering interval in seconds
resource_limit TRUE master switch for resource limit
license_max_sessions 0 maximum number of non-system user sessions allowed
license_sessions_warning 0 warning level for number of non-system user sessions
cpu_count 16 number of CPUs for this instance
instance_groups null list of instance group names
event null debug event control - default null string
sga_max_size 15032385536 max total SGA size
pre_page_sga FALSE pre-page sga for process
shared_memory_address 0 SGA starting address (low order 32-bits on 64-bit platforms)
hi_shared_memory_address 0 SGA starting address (high order 32-bits on 64-bit platforms)
use_indirect_data_buffers FALSE Enable indirect data buffers (very large SGA on 32-bit platforms)
lock_sga TRUE Lock entire SGA in physical memory
shared_pool_size 0 size in bytes of shared pool
large_pool_size 0 size in bytes of large pool
java_pool_size 0 size in bytes of java pool
streams_pool_size 50331648 size in bytes of the streams pool
shared_pool_reserved_size 84724940 size in bytes of reserved area of shared pool
java_soft_sessionspace_limit 0 warning limit on size in bytes of a Java sessionspace
java_max_sessionspace_size 0 max allowed size in bytes of a Java sessionspace
spfile /oracle/app/product/10.2.0.3.0/dbs/spfileCALMDB.ora server parameter file
instance_type RDBMS type of instance to be executed
trace_enabled FALSE enable KST tracing
nls_language AMERICAN NLS language name
nls_territory AMERICA NLS territory name
nls_sort null NLS linguistic definition name
nls_date_language null NLS date language name
nls_date_format null NLS Oracle date format
nls_currency null NLS local currency symbol
nls_numeric_characters null NLS numeric characters
nls_iso_currency null NLS ISO currency territory name
nls_calendar null NLS calendar system name
nls_time_format null time format
nls_timestamp_format null time stamp format
nls_time_tz_format null time with timezone format
nls_timestamp_tz_format null timestamp with timezone format
nls_dual_currency null Dual currency symbol
nls_comp null NLS comparison
nls_length_semantics BYTE create columns using byte or char semantics by default
nls_nchar_conv_excp FALSE NLS raise an exception instead of allowing implicit conversion
fileio_network_adapters null Network Adapters for File I/O
filesystemio_options asynch IO operations on filesystem files
disk_asynch_io FALSE Use asynch I/O for random access devices
tape_asynch_io TRUE Use asynch I/O requests for tape devices
dbwr_io_slaves 0 DBWR I/O slaves
backup_tape_io_slaves FALSE BACKUP Tape I/O slaves
resource_manager_plan null resource mgr top plan
cluster_interconnects null interconnects for RAC use
file_mapping FALSE enable file mapping
gcs_server_processes 0 number of background gcs server processes to start
active_instance_count null number of active instances in the cluster database
sga_target 15032385536 Target size of SGA
control_files /oradata10/oradata/CALMDB/control/CONTROL02.CTL control file names list
db_file_name_convert null datafile name convert patterns and strings for standby/clone db
log_file_name_convert null logfile name convert patterns and strings for standby/clone db
control_file_record_keep_time 0 control file record keep time in days
db_block_buffers 0 Number of database blocks cached in memory
db_block_checksum TRUE store checksum in db blocks and check during reads
db_block_size 8192 Size of database block in bytes
db_cache_size 2147483648 Size of DEFAULT buffer pool for standard block size buffers
db_2k_cache_size 0 Size of cache for 2K buffers
db_4k_cache_size 0 Size of cache for 4K buffers
db_8k_cache_size 0 Size of cache for 8K buffers
db_16k_cache_size 0 Size of cache for 16K buffers
db_32k_cache_size 0 Size of cache for 32K buffers
db_keep_cache_size 0 Size of KEEP buffer pool for standard block size buffers
db_recycle_cache_size 0 Size of RECYCLE buffer pool for standard block size buffers
db_writer_processes 6 number of background database writer processes to start
buffer_pool_keep null Number of database blocks/latches in keep buffer pool
buffer_pool_recycle null Number of database blocks/latches in recycle buffer pool
db_cache_advice ON Buffer cache sizing advisory
max_commit_propagation_delay 0 Max age of new snapshot in .01 seconds
compatible 10.2.0.3.0 Database will be completely compatible with this software version
remote_archive_enable TRUE remote archival enable setting
log_archive_config null log archive config parameter
log_archive_start FALSE start archival process on SGA initialization
log_archive_dest null archival destination text string
log_archive_duplex_dest null duplex archival destination text string
log_archive_dest_1 null archival destination #1 text string
log_archive_dest_2 null archival destination #2 text string
log_archive_dest_3 null archival destination #3 text string
log_archive_dest_4 null archival destination #4 text string
log_archive_dest_5 null archival destination #5 text string
log_archive_dest_6 null archival destination #6 text string
log_archive_dest_7 null archival destination #7 text string
log_archive_dest_8 null archival destination #8 text string
log_archive_dest_9 null archival destination #9 text string
log_archive_dest_10 null archival destination #10 text string
log_archive_dest_state_1 enable archival destination #1 state text string
log_archive_dest_state_2 enable archival destination #2 state text string
log_archive_dest_state_3 enable archival destination #3 state text string
log_archive_dest_state_4 enable archival destination #4 state text string
log_archive_dest_state_5 enable archival destination #5 state text string
log_archive_dest_state_6 enable archival destination #6 state text string
log_archive_dest_state_7 enable archival destination #7 state text string
log_archive_dest_state_8 enable archival destination #8 state text string
log_archive_dest_state_9 enable archival destination #9 state text string
log_archive_dest_state_10 enable archival destination #10 state text string
log_archive_max_processes 2 maximum number of active ARCH processes
log_archive_min_succeed_dest 1 minimum number of archive destinations that must succeed
standby_archive_dest ?/dbs/arch standby database archivelog destination text string
log_archive_trace 0 Establish archivelog operation tracing level
log_archive_local_first TRUE Establish EXPEDITE attribute default value
log_archive_format %t_%s_%r.dbf archival destination format
fal_client null FAL client
fal_server null FAL server list
log_buffer 176918528 redo circular buffer size
log_checkpoint_interval 0 # redo blocks checkpoint threshold
log_checkpoint_timeout 0 Maximum time interval between checkpoints in seconds
archive_lag_target 0 Maximum number of seconds of redos the standby could lose
db_files 200 max allowable # db files
db_file_multiblock_read_count 128 db block to be read each IO
read_only_open_delayed FALSE if TRUE delay opening of read only files until first access
cluster_database FALSE if TRUE startup in cluster database mode
parallel_server FALSE if TRUE startup in parallel server mode
parallel_server_instances 1 number of instances to use for sizing OPS SGA structures
cluster_database_instances 1 number of instances to use for sizing cluster db SGA structures
db_create_file_dest null default database location
db_create_online_log_dest_1 null online log/controlfile destination #1
db_create_online_log_dest_2 null online log/controlfile destination #2
db_create_online_log_dest_3 null online log/controlfile destination #3
db_create_online_log_dest_4 null online log/controlfile destination #4
db_create_online_log_dest_5 null online log/controlfile destination #5
db_recovery_file_dest null default database recovery file location
db_recovery_file_dest_size 0 database recovery files size limit
standby_file_management MANUAL if auto then files are created/dropped automatically on standby
gc_files_to_locks null mapping between file numbers and global cache locks
thread 0 Redo thread to mount
fast_start_io_target 0 Upper bound on recovery reads
fast_start_mttr_target 0 MTTR target of forward crash recovery in seconds
log_checkpoints_to_alert FALSE log checkpoint begin/end to alert file
recovery_parallelism 0 number of server processes to use for parallel recovery
logmnr_max_persistent_sessions 1 maximum number of threads to mine
db_flashback_retention_target 1440 Maximum Flashback Database log retention time in minutes.
dml_locks 1000 dml locks - one for each table modified in a transaction
ddl_wait_for_locks FALSE Disable NOWAIT DML lock acquisitions
replication_dependency_tracking TRUE tracking dependency for Replication parallel propagation
instance_number 0 instance number
transactions 482 max. number of concurrent active transactions
transactions_per_rollback_segment 5 number of active transactions per rollback segment
rollback_segments null undo segment list
undo_management AUTO instance runs in SMU mode if TRUE, else in RBU mode
undo_tablespace UNDOTBS1 use/switch undo tablespace
undo_retention 10800 undo retention in seconds
fast_start_parallel_rollback LOW max number of parallel recovery slaves that may be used
resumable_timeout 0 set resumable_timeout
db_block_checking FALSE header checking and data and index block checking
recyclebin off recyclebin processing
create_stored_outlines null create stored outlines for DML statements
serial_reuse disable reuse the frame segments
ldap_directory_access NONE RDBMS's LDAP access option
os_roles FALSE retrieve roles from the operating system
rdbms_server_dn null RDBMS's Distinguished Name
max_enabled_roles 150 max number of roles a user can have enabled
remote_os_authent FALSE allow non-secure remote clients to use auto-logon accounts
remote_os_roles FALSE allow non-secure remote clients to use os roles
O7_DICTIONARY_ACCESSIBILITY FALSE Version 7 Dictionary Accessibility Support
remote_login_passwordfile NONE password file usage parameter
license_max_users 0 maximum number of named users that can be created in the database
audit_sys_operations TRUE enable sys auditing
global_context_pool_size null Global Application Context Pool Size in Bytes
db_domain null directory part of global database name stored with CREATE DATABASE
global_names TRUE enforce that database links have same name as remote database
distributed_lock_timeout 60 number of seconds a distributed transaction waits for a lock
commit_point_strength 1 Bias this node has toward not preparing in a two-phase commit
instance_name CALMDB instance name supported by the instance
service_names CALMDB service names supported by the instance
dispatchers (PROTOCOL=TCP) (SERVICE=CALMDB) specifications of dispatchers
shared_servers 1 number of shared servers to start up
max_shared_servers null max number of shared servers
max_dispatchers null max number of dispatchers
circuits null max number of circuits
shared_server_sessions null max number of shared server sessions
local_listener null local listener
remote_listener null remote listener
cursor_space_for_time FALSE use more memory in order to get faster execution
session_cached_cursors 200 Number of cursors to cache in a session.
remote_dependencies_mode TIMESTAMP remote-procedure-call dependencies mode parameter
utl_file_dir null utl_file accessible directories list
smtp_out_server null utl_smtp server and port configuration parameter
plsql_v2_compatibility FALSE PL/SQL version 2.x compatibility flag
plsql_compiler_flags INTERPRETED, NON_DEBUG PL/SQL compiler flags
plsql_native_library_dir null plsql native library dir
plsql_native_library_subdir_count 0 plsql native library number of subdirectories
plsql_warnings DISABLE:ALL PL/SQL compiler warnings settings
plsql_code_type INTERPRETED PL/SQL code-type
plsql_debug FALSE PL/SQL debug
plsql_optimize_level 2 PL/SQL optimize level
plsql_ccflags null PL/SQL ccflags
job_queue_processes 10 number of job queue slave processes
parallel_min_percent 0 minimum percent of threads required for parallel query
create_bitmap_area_size 8388608 size of create bitmap buffer for bitmap index
bitmap_merge_area_size 1048576 maximum memory allow for BITMAP MERGE
cursor_sharing FORCE cursor sharing mode
parallel_min_servers 10 minimum parallel query servers per instance
parallel_max_servers 320 maximum parallel query servers per instance
parallel_instance_group null instance group to use for all parallel operations
parallel_execution_message_size 4096 message buffer size for parallel execution
hash_area_size 62914560 size of in-memory hash work area
shadow_core_dump partial Core Size for Shadow Processes
background_core_dump partial Core Size for Background Processes
background_dump_dest /oradata28/oradata/CALMDB/bdump Detached process dump directory
user_dump_dest /oradata28/oradata/CALMDB/udump User process dump directory
max_dump_file_size 10M Maximum size (blocks) of dump file
core_dump_dest /oradata28/oradata/CALMDB/cdump Core dump directory
use_sigio TRUE Use SIGIO signal
audit_file_dest /oracle/app/product/10.2.0.3.0/rdbms/audit Directory in which auditing files are to reside
audit_syslog_level null Syslog facility and level
object_cache_optimal_size 102400 optimal size of the user session's object cache in bytes
object_cache_max_size_percent 10 percentage of maximum size over optimal of the user session's object cache
session_max_open_files 20 maximum number of open files allowed per session
open_links 4 max # open links per session
open_links_per_instance 4 max # open links per instance
commit_write null transaction commit log write behaviour
optimizer_features_enable 10.2.0.3 optimizer plan compatibility parameter
fixed_date null fixed SYSDATE value
audit_trail DB enable system auditing
sort_area_size 31457280 size of in-memory sort work area
sort_area_retained_size 3145728 size of in-memory sort work area retained between fetch calls
db_name TESTDB database name specified in CREATE DATABASE
db_unique_name TESTDB Database Unique Name
open_cursors 2000 max # cursors per session
ifile null include file in init.ora
sql_trace FALSE enable SQL trace
os_authent_prefix ops$ prefix for auto-logon accounts
optimizer_mode ALL_ROWS optimizer mode
sql92_security FALSE require select privilege for searched update/delete
blank_trimming FALSE blank trimming semantics parameter
star_transformation_enabled FALSE enable the use of star transformation
parallel_adaptive_multi_user TRUE enable adaptive setting of degree for multiple user streams
parallel_threads_per_cpu 2 number of parallel execution threads per CPU
parallel_automatic_tuning TRUE enable intelligent defaults for parallel execution parameters
optimizer_index_cost_adj 250 optimizer index cost adjustment
optimizer_index_caching 0 optimizer percent index caching
query_rewrite_enabled TRUE allow rewrite of queries using materialized views if enabled
query_rewrite_integrity enforced perform rewrite using materialized views with desired integrity
sql_version NATIVE sql language version parameter for compatibility issues
pga_aggregate_target 3221225472 Target size for the aggregate PGA memory consumed by the instance
workarea_size_policy AUTO policy used to size SQL working areas (MANUAL/AUTO)
optimizer_dynamic_sampling 2 optimizer dynamic sampling
statistics_level TYPICAL statistics level
skip_unusable_indexes TRUE skip unusable indexes if set to TRUE
optimizer_secure_view_merging TRUE optimizer secure view merging and predicate pushdown/movearound
aq_tm_processes 1 number of AQ Time Managers to start
hs_autoregister TRUE enable automatic server DD updates in HS agent self-registration
dg_broker_start FALSE start Data Guard broker framework (DMON process)
drs_start FALSE start DG Broker monitor (DMON process)
dg_broker_config_file1 /oracle/app/product/10.2.0.3.0/dbs/dr1CALMDB.dat data guard broker configuration file #1
dg_broker_config_file2 /oracle/app/product/10.2.0.3.0/dbs/dr2CALMDB.dat data guard broker configuration file #2
olap_page_pool_size 0 size of the olap page pool in bytes
asm_diskstring null disk set locations for discovery
asm_diskgroups null disk groups to mount automatically
asm_power_limit 1 number of processes for disk rebalancing
sqltune_category DEFAULT Category qualifier for applying hintsets
Please suggest.
Thanks,
Kr
We have examined the AWR report, which shows:
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 1074 27-Jul-09 13:00:03 147 16.7
End Snap: 1075 27-Jul-09 14:01:00 150 22.3
Elapsed: 60.96 (mins)
DB Time: 9.63 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 12,368M 12,368M Std Block Size: 8K
Shared Pool Size: 1,696M 1,696M Log Buffer: 178,172K
Load Profile
Per Second Per Transaction
Redo size: 12,787.87 24,786.41
Logical reads: 7,409.85 14,362.33
Block changes: 61.17 118.57
Physical reads: 0.51 0.98
Physical writes: 4.08 7.90
User calls: 60.11 116.50
Parses: 19.38 37.56
Hard parses: 0.36 0.69
Sorts: 7.87 15.25
Logons: 0.07 0.14
Executes: 50.34 97.57
Transactions: 0.52
% Blocks changed per Read: 0.83 Recursive Call %: 74.53
Rollback per transaction %: 3.29 Rows per Sort: 292.67
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.99 In-memory Sort %: 100.00
Library Hit %: 98.40 Soft Parse %: 98.15
Execute to Parse %: 61.51 Latch Hit %: 99.96
Parse CPU to Parse Elapsd %: 24.44 % Non-Parse CPU: 98.99
Shared Pool Statistics
Begin End
Memory Usage %: 72.35 72.86
% SQL with executions>1: 98.69 96.86
% Memory for SQL w/exec>1: 96.72 87.64
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 535 92.5
db file parallel write 596 106 177 18.3 System I/O
log file parallel write 3,844 40 10 6.9 System I/O
control file parallel write 1,689 29 17 5.0 System I/O
log file sync 2,357 29 12 5.0 Commit
Time Model Statistics
Total time in database user-calls (DB Time): 578s
Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic
Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 560.61 96.99
DB CPU 534.91 92.55
parse time elapsed 24.16 4.18
hard parse elapsed time 17.90 3.10
PL/SQL execution elapsed time 7.65 1.32
connection management call elapsed time 0.89 0.15
repeated bind elapsed time 0.49 0.08
hard parse (sharing criteria) elapsed time 0.28 0.05
sequence load elapsed time 0.05 0.01
PL/SQL compilation elapsed time 0.03 0.00
failed parse elapsed time 0.02 0.00
hard parse (bind mismatch) elapsed time 0.00 0.00
DB time 577.98
background elapsed time 190.39
background cpu time 15.49
Wait Class
s - second
cs - centisecond - 100th of a second
ms - millisecond - 1000th of a second
us - microsecond - 1000000th of a second
ordered by wait time desc, waits desc
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
System I/O 8,117 0.00 175 22 4.30
Commit 2,357 0.00 29 12 1.25
Network 226,127 0.00 7 0 119.83
User I/O 1,004 0.00 4 4 0.53
Application 91 0.00 2 27 0.05
Other 269 0.00 1 4 0.14
Concurrency 32 0.00 0 7 0.02
Configuration 59 0.00 0 3 0.03
Wait Events
s - second
cs - centisecond - 100th of a second
ms - millisecond - 1000th of a second
us - microsecond - 1000000th of a second
ordered by wait time desc, waits desc (idle events last)
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
db file parallel write 596 0.00 106 177 0.32
log file parallel write 3,844 0.00 40 10 2.04
control file parallel write 1,689 0.00 29 17 0.90
log file sync 2,357 0.00 29 12 1.25
SQL*Net more data from client 4,197 0.00 7 2 2.22
db file sequential read 689 0.00 4 5 0.37
enq: RO - fast object reuse 32 0.00 2 50 0.02
rdbms ipc reply 32 0.00 1 34 0.02
db file scattered read 289 0.00 1 2 0.15
enq: KO - fast object checkpoint 47 0.00 1 14 0.02
control file sequential read 1,988 0.00 0 0 1.05
SQL*Net message to client 218,154 0.00 0 0 115.61
os thread startup 6 0.00 0 34 0.00
SQL*Net break/reset to client 12 0.00 0 15 0.01
log buffer space 59 0.00 0 3 0.03
latch free 10 0.00 0 8 0.01
SQL*Net more data to client 3,776 0.00 0 0 2.00
latch: shared pool 5 0.00 0 5 0.00
reliable message 79 0.00 0 0 0.04
LGWR wait for redo copy 148 0.00 0 0 0.08
buffer busy waits 19 0.00 0 0 0.01
direct path write temp 24 0.00 0 0 0.01
latch: cache buffers chains 2 0.00 0 0 0.00
direct path write 2 0.00 0 0 0.00
SQL*Net message from client 218,149 0.00 136,803 627 115.61
PX Idle Wait 18,013 100.06 35,184 1953 9.55
virtual circuit status 67,690 0.01 3,825 57 35.87
Streams AQ: qmn slave idle wait 130 0.00 3,563 27404 0.07
Streams AQ: qmn coordinator idle wait 264 50.76 3,563 13494 0.14
class slave wait 3 0.00 0 0 0.00
Back to Wait Events Statistics
Back to Top
Background Wait Events
ordered by wait time desc, waits desc (idle events last)
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
db file parallel write 596 0.00 106 177 0.32
log file parallel write 3,843 0.00 40 10 2.04
control file parallel write 1,689 0.00 29 17 0.90
os thread startup 6 0.00 0 34 0.00
log buffer space 59 0.00 0 3 0.03
control file sequential read 474 0.00 0 0 0.25
log file sync 1 0.00 0 11 0.00
events in waitclass Other 148 0.00 0 0 0.08
rdbms ipc message 32,384 54.67 49,367 1524 17.16
pmon timer 1,265 100.00 3,568 2821 0.67
Streams AQ: qmn slave idle wait 130 0.00 3,563 27404 0.07
Streams AQ: qmn coordinator idle wait 264 50.76 3,563 13494 0.14
smon timer 63 11.11 3,493 55447 0.03
SQL ordered by Gets
Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
Total Buffer Gets: 27,101,711
Captured SQL account for 81.1% of Total
Buffer Gets Executions Gets per Exec %Total CPU Time (s) Elapsed Time (s) SQL Id SQL Module SQL Text
11,889,257 3 3,963,085.67 43.87 145.36 149.62 8hr7mrcqpvw7n Begin Pkg_Pg_consolidation.Pro...
5,877,417 17,784 330.49 21.69 59.94 62.30 3mw7tf64wzgv4 SELECT TOTALVOL.PERIOD_NUMBER ...
5,877,303 17,784 330.48 21.69 62.01 63.54 g3vhvg8cz6yu3 SELECT TOTALVOL.PERIOD_NUMBER ...
3,423,336 0 12.63 200.67 200.67 6jrnq2ua8cjnq SELECT ROWNUM , first , sec...
2,810,100 2,465 1,140.00 10.37 19.29 19.29 7f4y1a3k1tzjn SELECT /*+CLUSTER(VA_STATIC_CC...
1,529,253 230 6,648.93 5.64 15.92 16.97 6trp3txn7rh1q SELECT /*+ index(va_gap_irlc_P...
1,523,043 230 6,621.93 5.62 16.22 17.18 3fu81ar131nj9 SELECT /*+ index(va_gap_irla_P...
855,620 358 2,390.00 3.16 11.49 13.31 a3g12c11x7yd0 SELECT FX_DATE, FX_RATE, CCY...
689,979 708 974.55 2.55 4.37 4.43 b7znr5szwjrtx SELECT /*+RULE*/ YIELD_CURVE_C...
603,631 2,110 286.08 2.23 11.03 13.40 3c2gyz9fhswxx SELECT ASSET_LIABILITY_GAP, AL...
554,080 5 110,816.00 2.04 2.37 2.44 9w1b11p6baqat SELECT DISTINCT consolidation_...
318,378 624 510.22 1.17 3.20 3.45 1auhbw1rd5yn2 SELECT /*+ index(va_gap_irla_P...
318,378 624 510.22 1.17 3.19 3.42 6gq9rj96p9aq0 SELECT /*+ index(va_gap_irlc_P...
313,923 3 104,641.00 1.16 2.38 2.38 7vsznt4tvh1b5 ...
SQL ordered by Reads
Total Disk Reads: 1,857
Captured SQL account for 2.1% of Total
Physical Reads Executions Reads per Exec %Total CPU Time (s) Elapsed Time (s) SQL Id SQL Module SQL Text
57 36 1.58 3.07 3.55 5.81 c6vdhsbw1t03d BEGIN citidba.proc_analyze_tab...
32 507 0.06 1.72 0.22 0.40 c49tbx3qqrtm4 insert into dependency$(d_obj#...
28 8 3.50 1.51 0.76 3.02 4crh3z5ya2r27 BEGIN PROC_DELETE_PACK_TABLES(...
20 3 6.67 1.08 145.36 149.62 8hr7mrcqpvw7n Begin Pkg_Pg_consolidation.Pro...
10 1 10.00 0.54 6.21 18.11 4m9ts1b1b27sv BEGIN domain.create_tables(:1,...
7 23 0.30 0.38 1.56 2.22 4vw03w673b9k7 BEGIN PROC_CREATE_PACK_TABLES(...
4 4 1.00 0.22 0.29 1.06 1vw6carbvp4z0 BEGIN Proc_ReCreate_Gap_temp_t...
2 182 0.01 0.11 0.06 0.08 2h0gb24h6zpnu insert into access$(d_obj#, or...
2 596 0.00 0.11 0.26 0.29 5fbmafvm27kfm insert into obj$(owner#, name,...
1 1 1.00 0.05 0.01 0.02 7jsrvff8hnqft UPDATE VA_PRR_IRUT_POL_IBCB_R...
SQL ordered by Executions
Total Executions: 184,109
Captured SQL account for 71.6% of Total
Executions Rows Processed Rows per Exec CPU per Exec (s) Elap per Exec (s) SQL Id SQL Module SQL Text
43,255 43,255 1.00 0.00 0.00 4m94ckmu16f9k JDBC Thin Client select count(*) from dual
25,964 24,769 0.95 0.00 0.00 2kxdq3m953pst SELECT SURROGATE_KEY FROM TB_P...
17,784 54,585 3.07 0.00 0.00 3mw7tf64wzgv4 SELECT TOTALVOL.PERIOD_NUMBER ...
17,784 54,585 3.07 0.00 0.00 g3vhvg8cz6yu3 SELECT TOTALVOL.PERIOD_NUMBER ...
2,631 2,631 1.00 0.00 0.00 60uw2vh6q9vn2 insert into col$(obj#, name, i...
2,465 924,375 375.00 0.01 0.01 7f4y1a3k1tzjn SELECT /*+CLUSTER(VA_STATIC_CC...
2,202 36 0.02 0.00 0.00 96g93hntrzjtr select /*+ rule */ bucket_cnt,...
2,110 206,464 97.85 0.01 0.01 3c2gyz9fhswxx SELECT ASSET_LIABILITY_GAP, AL...
2,043 2,043 1.00 0.00 0.00 28dvpph9k610y SELECT COUNT(*) FROM TB_TECH_S...
842 35 0.04 0.00 0.00 04xtrk7uyhknh select obj#, type#, ctime, mti...
SQL ordered by Parse Calls
Total Parse Calls: 70,872
Captured SQL account for 69.7% of Total
Parse Calls Executions % Total Parses SQL Id SQL Module SQL Text
17,784 17,784 25.09 3mw7tf64wzgv4 SELECT TOTALVOL.PERIOD_NUMBER ...
17,784 17,784 25.09 g3vhvg8cz6yu3 SELECT TOTALVOL.PERIOD_NUMBER ...
2,110 2,110 2.98 3c2gyz9fhswxx SELECT ASSET_LIABILITY_GAP, AL...
786 786 1.11 2s6amyv4qz2h2 exp@PSLDB03 (TNS V1-V3) SELECT INIEXT, SEXT, MINEXT,...
596 596 0.84 5fbmafvm27kfm insert into obj$(owner#, name,...
590 590 0.83 2ym6hhaq30r73 select type#, blocks, extents,...
550 550 0.78 7gtztzv329wg0 select c.name, u.name from co...
512 512 0.72 9qgtwh66xg6nz update seg$ set type#=:4, bloc...
480 480 0.68 6x2cz59yrxz3a exp@PSLDB03 (TNS V1-V3) SELECT NAME, OBJID, OWNER, ...
457 457 0.64 bsa0wjtftg3uw select file# from file$ where ...
Instance Activity Stats
Statistic Total per Second per Trans
CPU used by this session 54,051 14.78 28.64
CPU used when call started 53,326 14.58 28.26
CR blocks created 1,114 0.30 0.59
Cached Commit SCN referenced 755,322 206.51 400.28
Commit SCN cached 29 0.01 0.02
DB time 62,190 17.00 32.96
DBWR checkpoint buffers written 3,247 0.89 1.72
DBWR checkpoints 79 0.02 0.04
DBWR object drop buffers written 118 0.03 0.06
DBWR parallel query checkpoint buffers written 0 0.00 0.00
DBWR revisited being-written buffer 0 0.00 0.00
DBWR tablespace checkpoint buffers written 169 0.05 0.09
DBWR thread checkpoint buffers written 3,078 0.84 1.63
DBWR transaction table writes 0 0.00 0.00
DBWR undo block writes 11,245 3.07 5.96
DFO trees parallelized 0 0.00 0.00
DML statements parallelized 0 0.00 0.00
IMU CR rollbacks 29 0.01 0.02
IMU Flushes 982 0.27 0.52
IMU Redo allocation size 1,593,112 435.57 844.26
IMU commits 991 0.27 0.53
IMU contention 3 0.00 0.00
IMU ktichg flush 3 0.00 0.00
IMU pool not allocated 0 0.00 0.00
IMU recursive-transaction flush 1 0.00 0.00
IMU undo allocation size 3,280,968 897.05 1,738.72
IMU- failed to get a private strand 0 0.00 0.00
Misses for writing mapping 0 0.00 0.00
OS Integral shared text size 0 0.00 0.00
OS Integral unshared data size 0 0.00 0.00
OS Involuntary context switches 0 0.00 0.00
OS Maximum resident set size 0 0.00 0.00
OS Page faults 0 0.00 0.00
OS Page reclaims 0 0.00 0.00
OS System time used 0 0.00 0.00
OS User time used 0 0.00 0.00
OS Voluntary context switches 0 0.00 0.00
PX local messages recv'd 0 0.00 0.00
PX local messages sent 0 0.00 0.00
Parallel operations downgraded to serial 0 0.00 0.00
Parallel operations not downgraded 0 0.00 0.00
SMON posted for dropping temp segment 0 0.00 0.00
SMON posted for undo segment shrink 0 0.00 0.00
SQL*Net roundtrips to/from client 266,339 72.82 141.14
active txn count during cleanout 677 0.19 0.36
application wait time 243 0.07 0.13
background checkpoints completed 0 0.00 0.00
background checkpoints started 0 0.00 0.00
background timeouts 17,769 4.86 9.42
branch node splits 0 0.00 0.00
buffer is not pinned count 11,606,002 3,173.19 6,150.50
buffer is pinned count 65,043,685 17,783.53 34,469.36
bytes received via SQL*Net from client 27,009,252 7,384.57 14,313.33
bytes sent via SQL*Net to client ############### 69,310,703.02 134,343,168.92
calls to get snapshot scn: kcmgss 382,084 104.47 202.48
calls to kcmgas 15,558 4.25 8.24
calls to kcmgcs 1,886 0.52 1.00
change write time 488 0.13 0.26
cleanout - number of ktugct calls 628 0.17 0.33
cleanouts and rollbacks - consistent read gets 3 0.00 0.00
cleanouts only - consistent read gets 53 0.01 0.03
cluster key scan block gets 77,478 21.18 41.06
cluster key scans 41,479 11.34 21.98
commit batch/immediate performed 550 0.15 0.29
commit batch/immediate requested 550 0.15 0.29
commit cleanout failures: block lost 0 0.00 0.00
commit cleanout failures: buffer being written 0 0.00 0.00
commit cleanout failures: callback failure 29 0.01 0.02
commit cleanout failures: cannot pin 0 0.00 0.00
commit cleanouts 19,562 5.35 10.37
commit cleanouts successfully completed 19,533 5.34 10.35
commit immediate performed 550 0.15 0.29
commit immediate requested 550 0.15 0.29
commit txn count during cleanout 396 0.11 0.21
concurrency wait time 23 0.01 0.01
consistent changes 1,803 0.49 0.96
consistent gets 26,887,134 7,351.18 14,248.61
consistent gets - examination 1,524,222 416.74 807.75
consistent gets direct 0 0.00 0.00
consistent gets from cache 26,887,134 7,351.18 14,248.61
cursor authentications 773 0.21 0.41
data blocks consistent reads - undo records applied 1,682 0.46 0.89
db block changes 223,743 61.17 118.57
db block gets 214,573 58.67 113.71
db block gets direct 74 0.02 0.04
db block gets from cache 214,499 58.65 113.67
deferred (CURRENT) block cleanout applications 9,723 2.66 5.15
dirty buffers inspected 5,106 1.40 2.71
enqueue conversions 1,130 0.31 0.60
enqueue releases 49,151 13.44 26.05
enqueue requests 49,151 13.44 26.05
enqueue timeouts 0 0.00 0.00
enqueue waits 79 0.02 0.04
exchange deadlocks 0 0.00 0.00
execute count 184,109 50.34 97.57
failed probes on index block reclamation 1 0.00 0.00
free buffer inspected 6,521 1.78 3.46
free buffer requested 8,656 2.37 4.59
global undo segment hints helped 0 0.00 0.00
global undo segment hints were stale 0 0.00 0.00
heap block compress 457 0.12 0.24
hot buffers moved to head of LRU 5,016 1.37 2.66
immediate (CR) block cleanout applications 56 0.02 0.03
immediate (CURRENT) block cleanout applications 4,230 1.16 2.24
index crx upgrade (found) 0 0.00 0.00
index crx upgrade (positioned) 8,362 2.29 4.43
index fast full scans (full) 3,845 1.05 2.04
index fast full scans (rowid ranges) 0 0.00 0.00
index fetch by key 842,761 230.42 446.61
index scans kdiixs1 376,413 102.91 199.48
leaf node 90-10 splits 42 0.01 0.02
leaf node splits 89 0.02 0.05
lob reads 6,759,932 1,848.23 3,582.37
lob writes 11,788 3.22 6.25
lob writes unaligned 11,788 3.22 6.25
logons cumulative 272 0.07 0.14
messages received 133,602 36.53 70.80
messages sent 133,602 36.53 70.80
no buffer to keep pinned count 219 0.06 0.12
no work - consistent read gets 18,462,318 5,047.76 9,783.95
opened cursors cumulative 77,042 21.06 40.83
parse count (failures) 57 0.02 0.03
parse count (hard) 1,311 0.36 0.69
parse count (total) 70,872 19.38 37.56
parse time cpu 542 0.15 0.29
parse time elapsed 2,218 0.61 1.18
physical read IO requests 821 0.22 0.44
physical read bytes 15,212,544 4,159.25 8,061.76
physical read total IO requests 2,953 0.81 1.56
physical read total bytes 48,963,584 13,387.08 25,947.85
physical read total multi block requests 289 0.08 0.15
physical reads 1,857 0.51 0.98
physical reads cache 1,857 0.51 0.98
physical reads cache prefetch 1,036 0.28 0.55
physical reads direct 0 0.00 0.00
physical reads direct (lob) 0 0.00 0.00
physical reads direct temporary tablespace 0 0.00 0.00
physical reads prefetch warmup 0 0.00 0.00
physical write IO requests 6,054 1.66 3.21
physical write bytes 122,142,720 33,394.92 64,728.52
physical write total IO requests 11,533 3.15 6.11
physical write total bytes 199,223,808 54,469.58 105,577.00
physical write total multi block requests 5,894 1.61 3.12
physical writes 14,910 4.08 7.90
physical writes direct 74 0.02 0.04
physical writes direct (lob) 0 0.00 0.00
physical writes direct temporary tablespace 72 0.02 0.04
physical writes from cache 14,836 4.06 7.86
physical writes non checkpoint 14,691 4.02 7.79
pinned buffers inspected 4 0.00 0.00
prefetch clients - default 0 0.00 0.00
prefetch warmup blocks aged out before use 0 0.00 0.00
prefetch warmup blocks flushed out before use 0 0.00 0.00
prefetched blocks aged out before use 0 0.00 0.00
process last non-idle time 2,370 0.65 1.26
queries parallelized 0 0.00 0.00
recovery blocks read 0 0.00 0.00
recursive aborts on index block reclamation 0 0.00 0.00
recursive calls 643,220 175.86 340.87
recursive cpu usage 15,900 4.35 8.43
redo blocks read for recovery 0 0.00 0.00
redo blocks written 96,501 26.38 51.14
redo buffer allocation retries 0 0.00 0.00
redo entries 115,246 31.51 61.07
redo log space requests 0 0.00 0.00
redo log space wait time 0 0.00 0.00
redo ordering marks 3,605 0.99 1.91
How to improve response time of queries?
Although it looks as though this question relates to Reports, the problem is actually related to SQL.
I have a catalogue-type report which retrieves data and prints it. There are not many calculations involved.
It uses six tables.
The data model is such that I have a separate query for each table, and all
these tables are then linked through the link tool.
Each table contains 3,000 to 9,000 rows, but one table contains 35,000 rows.
The problem is that the report is taking too much time, about 3.5 hours, while
the expectation is that it should take 20 to 40 minutes at most.
What can I do to reduce the time it takes to produce the report?
I mean, should I modify the data model to make a single query with an equi-join?
A) In particular, I want to know what the traffic between client and server is when:
1) we have multiple queries and the LINK tool is used;
2) a single query with an equi-join is used.
B) Which activity is taking most of the time?
1) Retrieving data from the server to the client
2) Sorting according to groups (at the client), formatting the data, and saving it in a file
Please guide.
Everybody is requested to contribute as per his/her experience and knowledge.
M Tariq
Generally speaking, your server is faster than your PC (if it is not, then you have bigger problems than a slow query), so let the server do as much of the work as possible, particularly things like sorting and grouping. Any calculation that can be pushed off onto the server (e.g. aggregate functions, cola + colb, etc.) should be.
A single query will always be faster than multiple queries. Let the server do the linking.
The more rows you return from the server to your PC, the more bytes the network and your PC have to deal with. Network traffic is "expensive", so make the result set as small as possible on the server before sending it back over the network. PCs generally have little RAM and slow disks compared to servers; large data sets cause swapping on the PC, which is really expensive.
Unless you are running on a terribly underpowered server, I think even 30 - 40 minutes would be slow for the situation you describe.
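To sketch that advice (the table and column names here are hypothetical, not taken from the report in question), the six linked queries could be collapsed into one server-side equi-join that also does the grouping and sorting before any rows cross the network:

```sql
-- Hypothetical single-query replacement for several linked queries.
-- The join, grouping, and sorting all happen on the server, so only
-- the final result set travels over the network to the client.
SELECT o.order_no,
       c.cust_name,
       i.item_name,
       SUM(d.qty * d.unit_price) AS line_total
FROM   orders o,
       customers c,
       order_details d,
       items i
WHERE  o.cust_id  = c.cust_id
AND    d.order_no = o.order_no
AND    d.item_id  = i.item_id
GROUP BY o.order_no, c.cust_name, i.item_name
ORDER BY c.cust_name, o.order_no
```

In Reports this would become a single query in the data model, with the break groups defined on the already-sorted columns, instead of separate queries stitched together with the link tool.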