I can't use this query on 10g, but I can use it on 9i.
I have used the query below before with no error:
SQL>select so.* from bsowner.sales_orders so left join bsowner.relation_address rla on (so.customerid = rla.relationid and so.delivery_addressid = rla.addressid ) and rla.addresstype = (select code from bsowner.tab_addtyp where config1=3)
When I run it on 10g, the query fails:
1* select so.* from bsowner.sales_orders so left join bsowner.relation_address rla on (so.customerid = rla.relationid and so.delivery_addressid = rla.addressid ) and rla.addresstype = (select code from bsowner.tab_addtyp where config1=3)
SQL> /
select so.* from bsowner.sales_orders so left join bsowner.relation_address rla on (so.customerid = rla.relationid and so.delivery_addressid = rla.addressid ) and rla.addresstype = (select code from bsowner.tab_addtyp where config1=3)
ERROR at line 1:
ORA-01799: a column may not be outer-joined to a subquery
Help me, please.
Why can't I use this on 10g?
When I change = to IN, it works,
but I don't want to change the query.
Please help me understand the reason.
I don't think this would work on 9i either.
Change your query to:
select so.* from bsowner.sales_orders so left join bsowner.relation_address rla on (so.customerid = rla.relationid and so.delivery_addressid = rla.addressid )
Where
rla.addresstype = (select code from bsowner.tab_addtyp where config1=3)
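If the goal is to keep the outer-join semantics rather than filter in the WHERE clause, one workaround for ORA-01799 is to resolve the subquery inside an inline view and outer-join to that. A sketch against the same tables (not tested here):

```sql
-- Outer-join to an inline view that already applies the subquery filter,
-- so no column is outer-joined to a subquery directly
select so.*
from bsowner.sales_orders so
left join (
    select rla.*
    from bsowner.relation_address rla
    join bsowner.tab_addtyp t
      on rla.addresstype = t.code
    where t.config1 = 3
) rla
  on so.customerid = rla.relationid
 and so.delivery_addressid = rla.addressid;
```

Unlike the WHERE-clause version, this still returns sales orders that have no matching address row.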
Similar Messages
-
Posted this query several times but no reply. Please help me out
Hi all,
How do I know whether a servlet has completely sent its response to the request?
The problem I am facing is this: I send the response to a client's request as a file.
The byte-by-byte transfer happens; now if the client interrupts or cancels the operation, the response is not completed.
At this stage, how do I know the response status? I am handling all exceptions, but even then I am unable to trap the status.
Please help me out.
I have posted this query several times but have had no convincing reply so far. Hence, please help me out.
If somebody wants to see the code. I'll send it through mail.
regards
venkat

Hi,
thanks for the reply,
Please check the code I have written. If I execute the servlet in Java Web Server 2.0 I can trap the exception. If the same servlet runs in WebLogic 6.0 I am unable to get the exception.
Please note that I have maintained a counter and all necessary exception handling, but even then I am unable to trap the exception.
//code//
import java.io.*;
import java.net.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.security.*;
public class TestServ extends HttpServlet {
    private Exception exception;

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        int bytesRead = 0;
        int count = 0;
        byte[] buff = new byte[4096]; // a 1-byte buffer works but is very slow
        OutputStream out = res.getOutputStream();
        res.setContentType("application/pdf"); // MIME type for a PDF document
        String fileURL = "http://localhost:7001/soap.pdf";
        BufferedInputStream bis = null;
        BufferedOutputStream bos = null;
        res.setHeader("Content-disposition", "attachment; filename=soap.pdf");
        try {
            URL url = new URL(fileURL);
            bis = new BufferedInputStream(url.openStream());
            bos = new BufferedOutputStream(out);
            while (-1 != (bytesRead = bis.read(buff, 0, buff.length))) {
                try {
                    // write the bytes just read (the original wrote the count, not the data)
                    bos.write(buff, 0, bytesRead);
                    count += bytesRead;
                    bos.flush();
                } catch (StreamCorruptedException ex) {
                    System.out.println("Exception = " + ex);
                    setError(ex);
                    break;
                } catch (SocketException e) {
                    setError(e);
                    break;
                } catch (Exception e) {
                    System.out.println("Exception in while of TestServ is " + e.getMessage());
                    System.out.println("File not downloaded properly: " + e);
                    setError(e);
                    break;
                }
            } // while ends
            Exception eError = getError();
            if (eError != null) {
                System.out.println("File not downloaded properly");
            } else if (bytesRead == -1) {
                System.out.println("download successful");
            } else {
                System.out.println("download not successful");
            }
        } catch (MalformedURLException e) {
            System.out.println("Exception inside TestServ is " + e.getMessage());
        } catch (IOException e) {
            System.out.println("IOException inside TestServ is " + e.getMessage());
        } finally {
            try {
                if (bis != null) {
                    bis.close();
                }
                if (bos != null) {
                    bos.close();
                }
            } catch (Exception e) {
                System.out.println("here = " + e);
            }
        }
    } // doGet ends

    public void setError(Exception e) {
        exception = e;
        System.out.println("Exception occurred is " + e);
    }

    public Exception getError() {
        return exception;
    }
} // class ends
//ends
regards,
venkat -
Hi,
I have written a query in 9i and now I am learning 10g using regexp. Please can anyone give me a hint how to re-write this query in 10g? The query and output are given below:
select a,SUBSTR(ADDRESS_FORM,INSTR(a,' ')+1) b
FROM TEMP
Input
A
79 RACECOURSE ROAD
65 REDCLIFFE PARADE
65 GOLF LINKS RD
B
RACECOURSE ROAD
REDCLIFFE PARADE
GOLF LINKS RD
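For what it's worth, a regexp equivalent of the SUBSTR/INSTR version might look like this (a sketch using a POSIX character class, which 10g supports; not verified against the poster's table):

```sql
-- Everything after the first space: strip the leading non-space run plus one space
select a,
       regexp_replace(a, '^[^ ]+ ', '') as b
from temp;
```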
Best Regards,

I have written a query in 9i and now i am learning 10G using rgexp.
At the risk of being blunt, are you actually learning anything about regular expressions? Because you keep posting question after question on the same topic:
I know how to do it in Oracle 9i but i am trying to learn using regexp.
I have to do this using regexp thats why i have asked.
I have following data and want to write a query using regexp to extract
I have to use Oracle 10g and regexp
i have to use regexp as i am using 10g
I want to use regexp expression to extract data between two spaces
I have to use regexp as now i am in 10g i know how to perform this task in 9i.
How can i select the last character from string using regex :
Maybe it's time you read this book:
http://www.amazon.com/Mastering-Regular-Expressions-Jeffrey-Friedl/dp/0596528124/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1208914879&sr=1-1 -
hi all,
I have to identify which vouchers are invalid through a query. The voucher details are entered into a detail table holding the voucher number, voucher type, voucher date, debit/credit flag, and the amount. The trial balance is displaying a problem, i.e. sum(debit) <> sum(credit), so I used this query to examine it, but no results are displayed. Which condition is proving false?
SELECT FIN_TVOUCH_VOU_TYPE,FIN_TVOUCH_VOU_NO,FIN_TVOUCH_VOU_DATE,
DECODE(DR_CR_FLAG,'D',SUM(AMOUNT)) DEBIT,DECODE(DR_CR_FLAG,'C',SUM(AMOUNT)) CREDIT
FROM GEN_LED_FIN_TVOUCH
WHERE
FIN_TVOUCH_VOU_DATE BETWEEN '01-APR-2005' AND '30-APR-2005'
HAVING DECODE(DR_CR_FLAG,'C',SUM(AMOUNT))<>DECODE(DR_CR_FLAG,'D',SUM(AMOUNT))
GROUP BY FIN_TVOUCH_VOU_TYPE,FIN_TVOUCH_VOU_NO,FIN_TVOUCH_VOU_DATE,DR_CR_FLAG
You need to lose the dr_cr_flag in the GROUP BY, since you are decoding it away in the SELECT.
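Putting that suggestion together, the corrected query might look like this (a sketch: conditional sums replace the per-row DECODEs, and the date literals are made unambiguous):

```sql
SELECT fin_tvouch_vou_type, fin_tvouch_vou_no, fin_tvouch_vou_date,
       SUM(DECODE(dr_cr_flag, 'D', amount, 0)) AS debit,
       SUM(DECODE(dr_cr_flag, 'C', amount, 0)) AS credit
FROM gen_led_fin_tvouch
WHERE fin_tvouch_vou_date BETWEEN DATE '2005-04-01' AND DATE '2005-04-30'
GROUP BY fin_tvouch_vou_type, fin_tvouch_vou_no, fin_tvouch_vou_date
HAVING SUM(DECODE(dr_cr_flag, 'D', amount, 0))
    <> SUM(DECODE(dr_cr_flag, 'C', amount, 0));
```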
TTFN
John -
Can I refactor this query to use an index more efficiently?
I have a members table with fields such as id, last name, first name, address, join date, etc.
I have a unique index defined on (last_name, join_date, id).
This query will use the index for a range scan, no sort required since the index will be in order for that range ('Smith'):
SELECT members.*
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate, id

Is there any way I can get something like the following to use the index (with no sort) as well:
SELECT members.*
FROM members
WHERE last_name like 'S%'
ORDER BY joindate, id

I understand what the difficulty probably is: even if it does a range scan on every last name 'S%' (assuming it can?), the rows are not necessarily in order. Case in point:
Last_Name: JoinDate:
Smith 2/5/2010
Smuckers 1/10/2010

An index range scan of 'S%' would return them in the above order, which is not ordered by joindate.
So is there any way I can refactor this (query or index) such that the index can be range scanned (using LIKE 'x%') and return rows in the correct order without performing a sort? Or is that simply not possible?

xaeryan wrote:
I have a members table with fields such as id, last name, first name, address, join date, etc.
I have a unique index defined on (last_name, join_date, id).
This query will use the index for a range scan, no sort required since the index will be in order for that range ('Smith'):
SELECT members.*
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate, id

Is there any way I can get something like the following to use the index (with no sort) as well:
SELECT members.*
FROM members
WHERE last_name like 'S%'
ORDER BY joindate, id

I understand what the difficulty probably is: even if it does a range scan on every last name 'S%' (assuming it can?), the rows are not necessarily in order. Case in point:
Last_Name: JoinDate:
Smith 2/5/2010
Smuckers 1/10/2010

An index range scan of 'S%' would return them in the above order, which is not ordered by joindate.
So is there any way I can refactor this (query or index) such that the index can be range scanned (using LIKE 'x%') and return rows in the correct order without performing a sort? Or is that simply not possible?

Come on. Index column order does matter. "LIKE 'x%'" actually is a full scan: the db engine accesses contiguous index entries and then uses the ROWID values in the index to retrieve the table rows. -
How can I use lead/lag in this query
I have written this query, which gives me the comparative data based on this week and the previous week. How can I use LEAD/LAG in this query?
WITH CURRENT_WEEK
AS ( SELECT QPAQ.YEAR YEAR,
QPAQ.SEASON SEASON,
REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+') ACC_SERIES,
TO_NUMBER (QPAQ.WEEK) WEEK,
MAX (QPAQ.FAILURES) FAILURES
FROM QAR_PLAN_ACC_QTY QPAQ, QAR_PLAN_THRESHOLD_LST QPTL
WHERE QPTL.CATEGORY_ID = 7
AND QPAQ.YEAR = QPTL.YEAR
AND QPAQ.SEASON = QPTL.SEASON
AND QPAQ.SERIES_NAME = QPTL.MODEL_SERIES
GROUP BY QPAQ.YEAR,
QPAQ.SEASON,
REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+'),
TO_NUMBER (QPAQ.WEEK)
ORDER BY REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+')),
LAST_WEEK
AS ( SELECT QPAQ.YEAR YEAR,
QPAQ.SEASON SEASON,
REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+') ACC_SERIES,
TO_NUMBER (QPAQ.WEEK + 1) WEEK,
MAX (QPAQ.FAILURES) FAILURES
FROM QAR_PLAN_ACC_QTY QPAQ, QAR_PLAN_THRESHOLD_LST QPTL
WHERE QPTL.CATEGORY_ID = 7
AND QPAQ.YEAR = QPTL.YEAR
AND QPAQ.SEASON = QPTL.SEASON
AND QPAQ.SERIES_NAME = QPTL.MODEL_SERIES
GROUP BY QPAQ.YEAR,
QPAQ.SEASON,
REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+'),
TO_NUMBER (QPAQ.WEEK)
ORDER BY REGEXP_SUBSTR (QPAQ.SERIES_NAME, '[^/]+'))
SELECT CURRENT_WEEK.YEAR,
CURRENT_WEEK.SEASON,
CURRENT_WEEK.ACC_SERIES,
CURRENT_WEEK.WEEK,
CURRENT_WEEK.FAILURES,
(CURRENT_WEEK.FAILURES - LAST_WEEK.FAILURES) FAILURES_COMPARE
FROM LAST_WEEK, CURRENT_WEEK
WHERE CURRENT_WEEK.WEEK = LAST_WEEK.WEEK(+)
ORDER BY CURRENT_WEEK.WEEK;

Output is like this:
YEAR SEASON MODEL WEEK FAILURES Failures_COMPARE
1 2011 SUMMER VGP-BMS15 49 10
2 2011 SUMMER VGP-BMS15 50 28 18
3 2011 SUMMER VGP-BMS15 51 30 2
4 2011 SUMMER VGP-BMS15 52 40 10

Edited by: BluShadow on 06-Jan-2012 13:26
added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to do this yourself in future.

You would jettison the entire LAST_WEEK subquery. Then replace your failure calculation with
current_week.failure - lag(current_week.failure) over (order by current_week.year, current_week.week) as failures_compare

I suppose you might want to think about renaming the sub-query as well ....
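Spelled out, that suggestion might look like this (a sketch: the single remaining subquery plus LAG; you may also want PARTITION BY acc_series if several series are reported at once):

```sql
WITH weekly AS (
    SELECT qpaq.year, qpaq.season,
           REGEXP_SUBSTR(qpaq.series_name, '[^/]+') AS acc_series,
           TO_NUMBER(qpaq.week) AS week,
           MAX(qpaq.failures) AS failures
    FROM qar_plan_acc_qty qpaq, qar_plan_threshold_lst qptl
    WHERE qptl.category_id = 7
      AND qpaq.year = qptl.year
      AND qpaq.season = qptl.season
      AND qpaq.series_name = qptl.model_series
    GROUP BY qpaq.year, qpaq.season,
             REGEXP_SUBSTR(qpaq.series_name, '[^/]+'),
             TO_NUMBER(qpaq.week)
)
SELECT year, season, acc_series, week, failures,
       -- difference against the previous week's row instead of a self-join
       failures - LAG(failures) OVER (ORDER BY year, week) AS failures_compare
FROM weekly
ORDER BY week;
```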
Cheers, APC
Edited by: APC on Jan 6, 2012 1:41 PM -
Query can run in Oracle 10g but very slow in 11g
Hi,
We've just migrated to Oracle 11g and we noticed that some of our views are very slow (a query that takes seconds in 10g takes 30 minutes in 11g), and the views are using local tables.
Do any of you face the same issue?
This is our query:
SELECT
A.wellbore
,a.depth center
,d.MD maxbc
,d.XDELT xbc
,d.YDELT ybc
,e.MD minac
,e.XDELT xac
,e.YDELT yac
from
table_A d,table_A e, table_B a
where a.wellbore = d.WELLBORE (+)
and a.wellbore = e.WELLBORE(+)
and d.MD = (select max(MD) from table_A b where b.MD < a.depth and
d.wellBORE = b.wellBORE)
and e.md = (select min(md) from table_A c where c.MD > a.depth and
e.wellBORE = c.wellBORE);

Thanks, I will move to the correct one..
Rafi,
I built the indexes and it is still slow. I am querying a view from another database, which is a 10g instance.
Moved: Query can run in Oracle 10g but very slow in 11g
Edited by: 924400 on Apr 1, 2012 6:03 PM
Edited by: 924400 on Apr 1, 2012 6:26 PM -
How can I speed up this query?
Hi,
have a look at this query:
SELECT DISTINCT element_short_description
FROM sample_test_report, test_group, test_element_master
WHERE str_test_group_code = tgr_test_group_code
AND str_element_code = element_code
AND tgr_group_description = 'SINTER_CHEMICAL_ANALYSIS'
ORDER BY element_short_description

The thing is that the total number of rows in the "sample_test_report" table is around 50 lakh (5 million). The other two tables have only a few rows. This query takes around 15 seconds to complete; can it be made faster?
Note that proper indexing have been done already.
Thanks.

SELECT DISTINCT element_short_description
FROM sample_test_report, test_group, test_element_master
WHERE str_test_group_code = tgr_test_group_code
AND str_element_code = element_code
AND tgr_group_description = 'SINTER_CHEMICAL_ANALYSIS'
ORDER BY element_short_description
Can you provide us with explain plan?
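For reference, one way to produce the plan being asked for (a sketch; assumes SQL*Plus and a current PLAN_TABLE, with DBMS_XPLAN available from 9iR2 onwards):

```sql
EXPLAIN PLAN FOR
SELECT DISTINCT element_short_description
FROM sample_test_report, test_group, test_element_master
WHERE str_test_group_code = tgr_test_group_code
  AND str_element_code = element_code
  AND tgr_group_description = 'SINTER_CHEMICAL_ANALYSIS';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```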
I suggest you use table alias every time:
The alias is specified in the FROM clause after each table name.
Table aliases make your queries more readable.
Then gather statistics on your tables:
BEGIN
DBMS_STATS.GATHER_TABLE_STATS('OWNER','TABLE',estimate_percent=>NULL,method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO',DEGREE=>10,CASCADE=>TRUE,granularity=>'ALL');
END;
if your db version < 10g
ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE TABLE employees ESTIMATE STATISTICS;
If necessary, rebuild indexes and check that the indexes are used.
Unusable indexes are made valid by rebuilding them to recalculate the pointers.
Rebuilding an unusable index re-creates the index in a new location, and then drops the unusable index. This can be done either by using Enterprise Manager or through SQL commands:
ALTER INDEX HR.emp_empid_pk REBUILD;
ALTER INDEX HR.emp_empid_pk REBUILD ONLINE;
ALTER INDEX HR.email REBUILD TABLESPACE USERS;
Note: Rebuilding an index requires that free space be available for the rebuild. Verify that there is sufficient space before attempting the rebuild. Enterprise Manager checks space requirements automatically.
Finally, try joining the tables separately and then combining the results.
Your query should then perform better.
Look at the execution plan; if indexes are not used, use hints. There are many Oracle hints available to the developer for use in tuning SQL statements that are embedded in PL/SQL.
please refer to http://www.dba-oracle.com/t_sql_hints_tuning.htm and http://www.adp-gmbh.ch/ora/sql/hints/index.html
Good luck -
How can I change this query, so I can display the name and scores in one row?
How can I change this query so I can add the ID from the table SPRIDEN?
As of now it is giving me what I want:
1,543 A05 24 A01 24 BAC 24 BAE 24 A02 20 BAM 20
in one line, but I would like to add the ID and name that are stored in the table SPRIDEN.
SELECT sortest_pidm,
max(decode(rn,1,sortest_tesc_code)) tesc_code1,
max(decode(rn,1,score)) score1,
max(decode(rn,2,sortest_tesc_code)) tesc_code2,
max(decode(rn,2,score)) score2,
max(decode(rn,3,sortest_tesc_code)) tesc_code3,
max(decode(rn,3,score)) score3,
max(decode(rn,4,sortest_tesc_code)) tesc_code4,
max(decode(rn,4,score)) score4,
max(decode(rn,5,sortest_tesc_code)) tesc_code5,
max(decode(rn,5,score)) score5,
max(decode(rn,6,sortest_tesc_code)) tesc_code6,
max(decode(rn,6,score)) score6
FROM (select sortest_pidm,
sortest_tesc_code,
score,
row_number() over (partition by sortest_pidm order by score desc) rn
FROM (select sortest_pidm,
sortest_tesc_code,
max(sortest_test_score) score
from sortest,SPRIDEN
where
SPRIDEN_pidm =SORTEST_PIDM
AND sortest_tesc_code in ('A01','BAE','A02','BAM','A05','BAC')
and sortest_pidm is not null
GROUP BY sortest_pidm, sortest_tesc_code))
GROUP BY sortest_pidm;
Hi,
That depends on whether spriden_pidm is unique, and on what you want for results.
Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables, and the results you want from that data.
If you can illustrate your problem using commonly available tables (such as those in the scott or hr schemas) then you don't have to post any sample data; just post the results you want.
Either way, explain how you get those results from that data.
Always say which version of Oracle you're using.
It looks like you're doing something similar to the following.
Using the emp and dept tables in the scott schema, produce one row of output per department showing the highest salary in each job, for a given set of jobs:
DEPTNO DNAME LOC JOB_1 SAL_1 JOB_2 SAL_2 JOB_3 SAL_3
20 RESEARCH DALLAS ANALYST 3000 MANAGER 2975 CLERK 1100
10 ACCOUNTING NEW YORK MANAGER 2450 CLERK 1300
30 SALES CHICAGO MANAGER 2850 CLERK 950

On each row, the jobs are listed in order by the highest salary.
This seems to be analogous to what you're doing. The roles played by sortest_pidm, sortest_tesc_code and sortest_test_score in your sortest table are played by deptno, job and sal in the emp table. The roles played by spriden_pidm, id and name in your spriden table are played by deptno, dname and loc in the dept table.
It sounds like you already have something like the query below, that produces the correct output, except that it does not include the dname and loc columns from the dept table.
SELECT deptno
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT deptno
, job
, max_sal
, ROW_NUMBER () OVER ( PARTITION BY deptno
ORDER BY max_sal DESC
) AS rn
FROM (
SELECT e.deptno
, e.job
, MAX (e.sal) AS max_sal
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno
, e.job
)
)
GROUP BY deptno
;

Since dept.deptno is unique, there will only be one dname and one loc for each deptno, so we can change the query by replacing "deptno" with "deptno, dname, loc" throughout the query (except in the join condition, of course):
SELECT deptno, dname, loc -- Changed
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT deptno, dname, loc -- Changed
, job
, max_sal
, ROW_NUMBER () OVER ( PARTITION BY deptno -- , dname, loc -- Changed
ORDER BY max_sal DESC
) AS rn
FROM (
SELECT e.deptno, d.dname, d.loc -- Changed
, e.job
, MAX (e.sal) AS max_sal
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno, d.dname, d.loc -- Changed
, e.job
)
)
GROUP BY deptno, dname, loc -- Changed
;

Actually, you can keep using just deptno in the analytic PARTITION BY clause. It might be a little more efficient to just use deptno, like I did above, but it won't change the results if you use all 3, if there is only 1 dname and 1 loc per deptno.
By the way, you don't need so many sub-queries. You're using the inner sub-query to compute the MAX, and the outer sub-query to compute rn. Analytic functions are computed after aggregate functions, so you can do both in the same sub-query like this:
SELECT deptno, dname, loc
, MAX (DECODE (rn, 1, job)) AS job_1
, MAX (DECODE (rn, 1, max_sal)) AS sal_1
, MAX (DECODE (rn, 2, job)) AS job_2
, MAX (DECODE (rn, 2, max_sal)) AS sal_2
, MAX (DECODE (rn, 3, job)) AS job_3
, MAX (DECODE (rn, 3, max_sal)) AS sal_3
FROM (
SELECT e.deptno, d.dname, d.loc
, e.job
, MAX (e.sal) AS max_sal
, ROW_NUMBER () OVER ( PARTITION BY e.deptno
ORDER BY MAX (sal) DESC
) AS rn
FROM scott.emp e
, scott.dept d
WHERE e.deptno = d.deptno
AND e.job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY e.deptno, d.dname, d.loc
, e.job
)
GROUP BY deptno, dname, loc
;

This will work in Oracle 8.1 and up. In Oracle 11, however, it's better to use the SELECT ... PIVOT feature. -
Please can you help me in Tuning this query..?
Hi ,
Please can you help me in re-structuring this query? Details are given below.
I have 2 tables as shown below and data is like this.
Position
COD IND
AAA N
BBB N
CCC N
DDD Y
Distance
orig dest
AAA BBB
BBB CCC
AAA CCC
I need to create the records like this
start end
DDD AAA
DDD BBB
DDD CCC
The query which i am using now for this is
select p.code AS start,
P1.CODE AS end
from position p, position p1
where
P.CODE != P1.CODE
AND (P.ind = 'Y' or P1.IND = 'Y')
AND not exists
(select 1
from distance d
where (d.orig = p.code or d.dest = p.code)
and (d.orig = p1.code or d.dest = p1.code))
The table has over a crore (10 million) records, so it's taking a lot of time.
Can someone please help in tuning this query?
Thanks and regards,
Shabir

Looks like you want this:
select a.strt, b.ends from
(select p.code strt from position p where p.ind='Y') a,
(select p.code ends from position p where p.ind='N') b
where not exists (select 1 from distance d where d.orig=a.strt or d.dest=a.strt);
DDD AAA
DDD BBB
DDD CCC

Your query result is:
AAA DDD
BBB DDD
CCC DDD
DDD AAA
DDD BBB
DDD CCC

You should be more descriptive about what kind of result you want, so that people can get more interested in helping you. -
Error when trying to use this query in report region
Hi ,
I am getting the error "1 error has occurred:
Query cannot be parsed within the Builder. If you believe your query is syntactically correct, check the ''generic columns'' checkbox below the region source to proceed without parsing. ORA-00933: SQL command not properly ended"
while trying to use this query in a report region.
Please help.
Thanks ,
Madhuri
declare
x varchar2(32000);
begin
x := q'!select (first_name||' '|| last_name)a ,
count(distinct(session_id)),manager_name
from cappap_log,
MIS_CDR_HR_EMPLOYEES_MV
where DECODE(instr(upper(userid),'@ORACLE.COM',1),0,upper(userid)||'@ORACLE.COM',upper(userid)) = upper(email_address)!';
if :P1_ALL = 'N' then
x:= x||q'!and initcap(first_name ||' '|| last_name)=:P1_USERNAME!';
else
x:= x||q'!and initcap(first_name ||' '|| last_name)like '%'|| :P1_USERNAME||'%'!';
end if;
if :P1_APP_NAME = '%' then
x:= x||q'! and flow_id like '%'!';
else
x:= x||'flow_id = :P1_APP_NAME';
end if;
x:= x||q'! group by first_name||' '|| last_name , manager_name!';
return x;
end;

Hi, I am actually stuck here. Can you please let me know which among these is the higher version:
1) Final Release 3.50
Version 3500.3.016
2) Final Release 3.50
Version (Revision 481)
Because it is working fine in the 1st one, whereas it's throwing that error pop-up in the 2nd one (as soon as we select the Change query global definition option). -
The index is not used for this query
I have this query and it doesn't use an index. Can you give your suggestions, please?
SELECT /*+ ORDERED USE_HASH(IC_GSMRELATION) USE_HASH(IC_UTRANCELL) USE_HASH(IC_SECTOR) USE_HASH(bt) */
/* cp */
bt.value value,
bt.tstamp tstamp,
ic_GsmRelation.instance_id instance_id
FROM
xr_scenario_tmp IC_GSMRELATION,
xr_scenario_tmp IC_UTRANCELL,
xr_scenario_tmp IC_SECTOR,
rg_busyhour_tmp bt
WHERE
bt.instance_id != -1
AND (IC_GSMRELATION.entity_id = 133)
AND (IC_GSMRELATION.parentinstance_id = ic_UtranCell.instance_id)
AND (IC_UTRANCELL.entity_id = 254)
AND (IC_UTRANCELL.parentinstance_id = ic_Sector.instance_id)
AND (IC_SECTOR.entity_id = 227)
AND (IC_SECTOR.parentinstance_id = bt.instance_id);
table : xr_scenario_tmp
entity_id num
instance_id num
parentinstance_id num
localkey varchar
indexes: 1. entity_id+instance_id
2. entity_id+parentinstance_id
table : rg_busyhour_tmp
instance_id notnull num
tstamp notnull date
rank notnumm num
value float
index: instance_id+tstamp+rank
thanks

user5797895 wrote:
Thanks for the update
1. I don't understand where to put {}. Did you mean in the forum page, like below?
Use the tag. Read the [FAQ|http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ?t=anon] for more information. It's the link on the top right corner.
>
2. AROUND 8000 IN DEV MACHINE. BUT 1.5M IN PRODUCTION
It's a more or less useless exercise if you have that vast difference between the two systems. You need to test this thoroughly using a similar amount of data.
3.
Note: cpu costing is off, 'PLAN_TABLE' is old version
You need to re-create your PLAN_TABLE. That's the reason why important information is missing from your plans. It's the so called "Predicate Information" section below the execution plan and it requires the correct version of the plan table. Drop your current plan table and re-run in SQL*Plus on the server:
@?/rdbms/admin/utlxplan
to re-create the plan table.
Dynamic sampling doesn't alter the plan in any way no matter what sampling level I choose.
When I added Cardinality it switched from 1 full table scan and 2 index read
Can you post the statements with the hints included resp. just the first line including the hints used for the different attempts?
# WITH dbms_stats.gather_table_stats, without cardinality it uses indexes all the time.
How did you call DBMS_STATS.GATHER_TABLE_STATS, i.e. which parameter values were you using?
# After deleting the table stats performance improved back
All these different attempts are not really helpful if you don't say which of them was more effective than the other ones. That's why I'm asking for the "Predicate Information" section so that this information can be used to determine which of your tables might benefit from an indexed access path and which don't.
As already mentioned several times if you use SQL tracing as described in one of the links provided you could see which operation produces how many rows. This would allow to determine if it is efficient or not.
But given that you're doing all this with your test data it doesn't say much about the performance in your production environment.
4. whether GTT created with "ON COMMIT PRESERVE ROWS"?
YES - BUT DIFFERENT SESSIONS HAS DIFFERENT NUMBER OF ROWS
The question is, whether the number of rows differs significantly, if yes, then you shouldn't use the DBMS_STATS approach
5. neigher (48 sec. / 25 sec. run time) are sufficient, then what is the expected?
ACTUALLY I AM DOING IT IN DEVELOPMENT MACHINVE. IN PRODUCTION THE NUMBER OF ROWS ARE DIFFERENT. LAST TIME WHEN WE RELEASED THE
PATCH WITH THIS CODE, THE PERFORMANCE WAS BAD.
See 2., you need to have a suitable test environment. It's a more or less useless exercise if you only have a fraction of the actual amount of data.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Can you please explain how this query is fetching the rows?
Here is a query to find the top 3 salaries. But the thing is that I am not able to understand how it works to get the correct data: how are the data in the alias tables P1 and P2 getting compared? Can you please explain in some steps?
SELECT MIN(P1.SAL) FROM PSAL P1, PSAL P2
WHERE P1.SAL >= P2.SAL
GROUP BY P2.SAL
HAVING COUNT (DISTINCT P1.SAL) <=3 ;
here is the data i used :
SQL> select * from psal;
NAME SAL
able 1000
baker 900
charles 900
delta 800
eddy 700
fred 700
george 700
george 700
Regards,
Renu

... Please help me in understanding the query.
Your query looks like anything but a Top-N query.
If you run it in steps and analyze the output at the end of each step, then you should be able to understand what it does.
Given below is some brief information on the same:
test@ora>
test@ora> --
test@ora> -- Query 1 - using the non-equi (theta) join
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p1.sal AS p1_sal, p1.NAME AS p1_name, p2.sal AS p2_sal,
12 p2.NAME AS p2_name
13 FROM psal p1, psal p2
14 WHERE p1.sal >= p2.sal;
P1_SAL P1_NAME P2_SAL P2_NAME
1000 able 1000 able
1000 able 900 baker
1000 able 900 charles
1000 able 800 delta
1000 able 700 eddy
1000 able 700 fred
1000 able 700 george
1000 able 700 george
900 baker 900 baker
900 baker 900 charles
900 baker 800 delta
900 baker 700 eddy
900 baker 700 fred
900 baker 700 george
900 baker 700 george
900 charles 900 baker
900 charles 900 charles
900 charles 800 delta
900 charles 700 eddy
900 charles 700 fred
900 charles 700 george
900 charles 700 george
800 delta 800 delta
800 delta 700 eddy
800 delta 700 fred
800 delta 700 george
800 delta 700 george
700 eddy 700 eddy
700 eddy 700 fred
700 eddy 700 george
700 eddy 700 george
700 fred 700 eddy
700 fred 700 fred
700 fred 700 george
700 fred 700 george
700 george 700 eddy
700 george 700 fred
700 george 700 george
700 george 700 george
700 george 700 eddy
700 george 700 fred
700 george 700 george
700 george 700 george
43 rows selected.
test@ora>
test@ora>

This query joins PSAL with itself using a non equi-join. Take each row of PSAL p1 and see how it compares with PSAL p2. You'll see that:
- Row 1 with sal 1000 is >= to all sal values of p2, so it occurs 8 times
- Row 2 with sal 900 is >= to 7 sal values of p2, so it occurs 7 times
- Row 3: 7 times again... and so on.
- So, total no. of rows are: 8 + 7 + 7 + 5 + 4 + 4 + 4 + 4 = 43
test@ora>
test@ora> --
test@ora> -- Query 2 - add the GROUP BY
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p2.sal AS p2_sal,
12 COUNT(*) as cnt,
13 COUNT(p1.sal) as cnt_p1_sal,
14 COUNT(DISTINCT p1.sal) as cnt_dist_p1_sal,
15 MIN(p1.sal) as min_p1_sal,
16 MAX(p1.sal) as max_p1_sal
17 FROM psal p1, psal p2
18 WHERE p1.sal >= p2.sal
19 GROUP BY p2.sal;
P2_SAL CNT CNT_P1_SAL CNT_DIST_P1_SAL MIN_P1_SAL MAX_P1_SAL
700 32 32 4 700 1000
800 4 4 3 800 1000
900 6 6 2 900 1000
1000 1 1 1 1000 1000
test@ora>
test@ora>

Now, if you group by p2.sal in the output of query 1, and check the number of distinct p1.sal values, min of p1.sal etc., you see that for the p2.sal values 800, 900 and 1000 there are 3 or fewer p1.sal values associated.
So, the last 3 rows are the ones you are interested in, essentially. As follows:
test@ora>
test@ora> --
test@ora> -- Query 3 - GROUP BY and HAVING
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT p2.sal AS p2_sal,
12 COUNT(*) as cnt,
13 COUNT(p1.sal) as cnt_p1_sal,
14 COUNT(DISTINCT p1.sal) as cnt_dist_p1_sal,
15 MIN(p1.sal) as min_p1_sal,
16 MAX(p1.sal) as max_p1_sal
17 FROM psal p1, psal p2
18 WHERE p1.sal >= p2.sal
19 GROUP BY p2.sal
20 HAVING COUNT(DISTINCT p1.sal) <= 3;
P2_SAL CNT CNT_P1_SAL CNT_DIST_P1_SAL MIN_P1_SAL MAX_P1_SAL
800 4 4 3 800 1000
900 6 6 2 900 1000
1000 1 1 1 1000 1000
test@ora>
test@ora>
test@ora>

That's what you are doing in that query.
The thing is - in order to find out Top-N values, you simply need to scan that one table PSAL. So, joining it to itself is not necessary.
A much simpler query is as follows:
test@ora>
test@ora>
test@ora> --
test@ora> -- Top-3 salaries - distinct or not; using ROWNUM on ORDER BY
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT sal
12 FROM (
13 SELECT sal
14 FROM psal
15 ORDER BY sal DESC
16 )
17 WHERE rownum <= 3;
       SAL
----------
      1000
       900
       900
test@ora>
test@ora>
test@ora>
And for the Top-3 distinct salaries:
test@ora>
test@ora> --
test@ora> -- Top-3 DISTINCT salaries; using ROWNUM on ORDER BY on DISTINCT
test@ora> --
test@ora> with psal as (
2 select 'able' as name, 1000 as sal from dual union all
3 select 'baker', 900 from dual union all
4 select 'charles', 900 from dual union all
5 select 'delta', 800 from dual union all
6 select 'eddy', 700 from dual union all
7 select 'fred', 700 from dual union all
8 select 'george', 700 from dual union all
9 select 'george', 700 from dual)
10 --
11 SELECT sal
12 FROM (
13 SELECT DISTINCT sal
14 FROM psal
15 ORDER BY sal DESC
16 )
17 WHERE rownum <= 3;
       SAL
----------
      1000
       900
       800
test@ora>
test@ora>
test@ora>
You may also want to check out the RANK and DENSE_RANK analytic functions.
RANK:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions123.htm#SQLRF00690
DENSE_RANK:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions043.htm#SQLRF00633
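As a quick sketch of the DENSE_RANK approach (run here through Python's sqlite3 purely for convenience — it assumes SQLite 3.25+ for window-function support; the inner query has the same shape you would use in Oracle):

```python
import sqlite3

# Sketch: Top-3 DISTINCT salaries via DENSE_RANK, using the same PSAL
# sample data as the transcripts above. Run against in-memory SQLite
# (assumes SQLite >= 3.25 for window functions).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE psal (name TEXT, sal INTEGER);
    INSERT INTO psal VALUES
        ('able', 1000), ('baker', 900), ('charles', 900), ('delta', 800),
        ('eddy', 700), ('fred', 700), ('george', 700), ('george', 700);
""")
rows = conn.execute("""
    SELECT DISTINCT sal
    FROM (SELECT sal, DENSE_RANK() OVER (ORDER BY sal DESC) AS rnk
          FROM psal)
    WHERE rnk <= 3
    ORDER BY sal DESC
""").fetchall()
top3 = [r[0] for r in rows]
print(top3)  # -> [1000, 900, 800]
```

DENSE_RANK assigns consecutive ranks with no gaps for ties, which is exactly the "distinct Top-N" semantics of the ROWNUM-on-DISTINCT query above.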
HTH
isotope -
How Can i add "DateDiff(day, T0.DueDate" as a column in this query?
SELECT T1.CardCode, T1.CardName, T1.CreditLine, T0.RefDate, T0.Ref1 'Document Number',
CASE WHEN T0.TransType=13 THEN 'Invoice'
WHEN T0.TransType=14 THEN 'Credit Note'
WHEN T0.TransType=30 THEN 'Journal'
WHEN T0.TransType=24 THEN 'Receipt'
END AS 'Document Type',
T0.DueDate, (T0.Debit- T0.Credit) 'Balance'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')<=-1),0) 'Future'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=0 and DateDiff(day, T0.DueDate,'[%1]')<=30),0) 'Current'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>30 and DateDiff(day, T0.DueDate,'[%1]')<=60),0) '31-60 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>60 and DateDiff(day, T0.DueDate,'[%1]')<=90),0) '61-90 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>90 and DateDiff(day, T0.DueDate,'[%1]')<=120),0) '91-120 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=121),0) '121+ Days'
FROM JDT1 T0 INNER JOIN OCRD T1 ON T0.ShortName = T1.CardCode
WHERE (T0.MthDate IS NULL OR T0.MthDate > [%1]) AND T0.RefDate <= [%1] AND T1.CardType = 'C'
ORDER BY T1.CardCode, T0.DueDate, T0.Ref1
Hi,
As you mentioned, it is not possible to assign a dynamic column directly in the query.
Here is an example of generating dynamic column names in a SQL query; using this approach you can achieve your requirement.
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

-- Build a comma-separated, quoted list of the distinct Name values
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(Name)
                      FROM [History]
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     , 1, 1, '')

SET @query = 'SELECT [Date], ' + @cols + '
              FROM
              (
                  SELECT [Date], Name, Value
                  FROM [History]
              ) x
              PIVOT
              (
                  MAX(Value)
                  FOR Name IN (' + @cols + ')
              ) p '

EXECUTE(@query)
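To see what that dynamic SQL assembles, here is a small Python sketch of the string construction; the Name values 'CPU' and 'Memory' are invented for the demo:

```python
# Sketch: what the STUFF/QUOTENAME construction above produces, assuming
# the [History] table holds the (hypothetical) Name values 'CPU' and
# 'Memory'.
names = sorted({"CPU", "Memory", "CPU"})        # like SELECT DISTINCT Name
cols = ",".join("[" + n + "]" for n in names)   # QUOTENAME each name; STUFF
                                                # strips the leading comma
query = ("SELECT [Date]," + cols
         + " FROM (SELECT [Date], Name, Value FROM [History]) x"
         + " PIVOT (MAX(Value) FOR Name IN (" + cols + ")) p")
print(cols)   # -> [CPU],[Memory]
```

The final EXECUTE then runs a PIVOT whose column list was not known until runtime.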
Can this query be optimised?
Hi,
This query is taking more time to execute. please can someone advise..
SELECT reference_value AS billing_system_account_id,
account_id AS sim_account_id
FROM account_reference
WHERE reference_name = 'ACCOUNT_ID'
AND account_id IN
(SELECT DISTINCT (ACCOUNT_ID)
FROM asset
WHERE status NOT IN ('CEASED', 'CANCELLED')
AND asset_id IN
(SELECT asset_id
FROM asset_config
WHERE config_name IN
('userName', 'login')
AND (config_value IN
('abc'
|| '@abc',
'abc'))))
Using EXISTS instead of IN, as Salim suggested, might work for you.
Some more explanation:
http://asktom.oracle.com/pls/asktom/f?p=100:11:2095243262787694::::P11_QUESTION_ID:953229842074
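A sketch of what the EXISTS rewrite could look like — table and column names are taken from the posted query, the fixture data is invented, and it is run here through Python's sqlite3 purely to illustrate the shape (the same SQL works in Oracle):

```python
import sqlite3

# Sketch of the IN -> EXISTS rewrite of the posted query, with invented
# fixture data: account 1 has an active asset with a matching config,
# account 2's asset is CEASED and should be filtered out.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account_reference (reference_value TEXT, account_id INT,
                                    reference_name TEXT);
    CREATE TABLE asset        (account_id INT, asset_id INT, status TEXT);
    CREATE TABLE asset_config (asset_id INT, config_name TEXT,
                               config_value TEXT);
    INSERT INTO account_reference VALUES ('BILL-1', 1, 'ACCOUNT_ID'),
                                         ('BILL-2', 2, 'ACCOUNT_ID');
    INSERT INTO asset VALUES (1, 10, 'ACTIVE'), (2, 20, 'CEASED');
    INSERT INTO asset_config VALUES (10, 'userName', 'abc'),
                                    (20, 'login', 'abc');
""")
rows = conn.execute("""
    SELECT ar.reference_value AS billing_system_account_id,
           ar.account_id      AS sim_account_id
    FROM   account_reference ar
    WHERE  ar.reference_name = 'ACCOUNT_ID'
    AND    EXISTS (SELECT 1
                   FROM   asset a
                   WHERE  a.account_id = ar.account_id
                   AND    a.status NOT IN ('CEASED', 'CANCELLED')
                   AND    EXISTS (SELECT 1
                                  FROM   asset_config ac
                                  WHERE  ac.asset_id = a.asset_id
                                  AND    ac.config_name IN ('userName', 'login')
                                  AND    ac.config_value IN ('abc' || '@abc', 'abc')))
""").fetchall()
print(rows)  # -> [('BILL-1', 1)]
```

The correlated EXISTS lets the optimizer stop at the first matching row per account instead of materializing the full DISTINCT lists.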
But anyway:
when posting a tuning request, you always need to mention:
- your database version
- an execution plan of the query (or tkprof output)
- information regarding indexes
- information regarding table statistics
in order to make it possible for us to help you better.
Please read these informative threads regarding posting tuning requests:
When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting