Slow Query on Virtual Tables
Hello!
I am using a proprietary IDE environment to compile some applications. I cannot change the SQL query it performs. During "Compile", the IDE checks the DB function calls in the app by executing the following SQL statement:
select c.owner, c.synonym_name, INSTR(a.object_type, 'PROC'), '1' from sys.dba_objects a, sys.dba_synonyms c where c.table_owner = a.owner and c.table_name = a.object_name and a.object_type IN ('FUNCTION','PROCEDURE') and a.status = 'VALID' and c.owner like 'USERNAME' escape '\'
*Of course, I put in USERNAME where the actual user/owner name was.
This query runs for every DB procedure step in the application. The app doesn't hold the list in memory and reuse it (that would help performance, but it is probably impossible to have the IDE updated in a reasonable timeframe).
There are 50,000 objects and 10,000 synonyms on this database server. This query takes 20 seconds to complete. When I remove the last WHERE condition - c.owner like 'USERNAME' escape '\' - the query takes 3 seconds.
Is there a way I can speed up this query? If I use a login with restricted permissions, will it limit what I see in the SYN$ table?
Is there a better way to state this query? I can make a recommendation to the IDE company (but I probably won't get a patch for months).
Thanks for your help,
Eric
Could you try the following?
select /*+ full ( a ) full ( c ) */
c.owner, c.synonym_name, INSTR(a.object_type, 'PROC'), '1'
from sys.dba_objects a
,sys.dba_synonyms c
where c.table_owner = a.owner
and c.table_name = a.object_name
and a.object_type IN ('FUNCTION','PROCEDURE')
and a.status = 'VALID'
and c.owner like 'X_OWNER' escape '\'
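If you can run ad-hoc SQL against the same database, it may be worth comparing the plan with and without the hints - a sketch, assuming access to EXPLAIN PLAN and DBMS_XPLAN (available from 9i Release 2 on):

```sql
EXPLAIN PLAN FOR
select /*+ full ( a ) full ( c ) */
       c.owner, c.synonym_name, INSTR(a.object_type, 'PROC'), '1'
from   sys.dba_objects a, sys.dba_synonyms c
where  c.table_owner = a.owner
and    c.table_name  = a.object_name
and    a.object_type IN ('FUNCTION','PROCEDURE')
and    a.status = 'VALID'
and    c.owner like 'X_OWNER' escape '\';

-- Show the plan the optimizer chose:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```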
Similar Messages
-
Hi all,
I have a table with SUBTYPE, DATAID, ORIGNALDATAID, and DATE columns.
select dataid, orignaldataid from table where subtype=1 and Date=01-Aug-2010.
I have the above query statement. I need the dataid for further processing, but before the processing is done I need to check the orignaldataid: if it is 0, dataid remains as-is; otherwise dataid = orignaldataid. The list is then passed to another SQL statement. I intend to use nested SQL for the further processing, so I need the database to crunch the data instead of using Java to change it.
I was thinking of creating another virtual table in the DB that will automatically do this for me: instead of having dataid and orignaldataid columns, it will just have a dataid column. Then I can just query this virtual table:
Select dataid from virtualTable where subtype=1 and Date=01-Aug-2010. Is it possible?
Hi,
If you want to have two tables
(1) your original table with all the data, and
(2) fubar, which perhaps has fewer rows, and some derived columns
and you want changes in the original table to automatically change fubar (when appropriate), then make fubar a Materialized View. Despite the name, a materialized view is a real table. It is automatically refreshed, or changed to show changes in the original table. Typically, this is done at fixed intervals; that is, you tell the system to refresh the materialized view every morning at 2:00 am, or every 30 minutes. You can also refresh it on demand, so that you know the data is up to date.
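A sketch of such a materialized view - the table name source_table, the column name date_col, and the 30-minute refresh interval are assumptions to replace with your real names:

```sql
CREATE MATERIALIZED VIEW fubar
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 30/1440   -- refresh every 30 minutes
AS
SELECT subtype,
       date_col,
       -- keep dataid when orignaldataid is 0, otherwise use orignaldataid
       CASE WHEN orignaldataid = 0 THEN dataid ELSE orignaldataid END AS dataid
FROM   source_table;
```

After that, "Select dataid from fubar where subtype=1 and date_col = ..." behaves just like the virtual table you describe.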
If you are using Oracle 10, then you can't use the virtual column feature, but you can create regular columns and write triggers to populate them, based on other values in the same row. I don't know if this would serve your needs better than a materialized view or not. -
Why Isn't xmlindex being used in slow query on binary xml table eval?
I am running a slow simple query on Oracle database server 11.2.0.1 that is not using an xmlindex. Instead, a full table scan against the eval binary xml table occurs. Here is the query:
select -- /*+ NO_XMLINDEX_REWRITE no_parallel(eval)*/
defid from eval,
XMLTable(XMLNAMESPACES(DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7"),
'$doc/eval/derivedFacts/ns7:derivedFact' passing eval.object_value as "doc" columns defid varchar2(100) path 'ns7:defId'
) eval_xml
where eval_xml.defid in ('59543','55208');
The predicate is not selective at all - the returned row count is the same as the table row count (325,550 XML documents in the eval table). When different values are used, bringing the row count down to ~33%, the xmlindex still isn't used - as would be expected in a purely relational, non-XML environment.
My question is why wouldn't the xmlindex be used in a fast full scan manner versus a full table scan traversing the XML in each eval table document record?
Would a FFS hint be applicable to an xmlindex domain-type index?
Here is the xmlindex definition:
CREATE INDEX "EVAL_XMLINDEX_IX" ON "EVAL" (OBJECT_VALUE)
INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
('XMLTable eval_idx_tab XMLNamespaces(DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03'',
''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7"),''/eval''
COLUMNS defId VARCHAR2(100) path ''/derivedFacts/ns7:derivedFact/ns7:defId''');
Here is the eval table definition:
CREATE
TABLE "N98991"."EVAL" OF XMLTYPE
CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" ENABLE
XMLTYPE STORE AS SECUREFILE BINARY XML
TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT)
ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
"EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
/eval/@eval_dt'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
WITH
TIME ZONE))),
"EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50)))
PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" ;
Sample cleansed xml snippet:
<?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?><eval createdById="xxxx" hhhhMemberId="37e6f05a-88dc-41e9-a8df-2a2ac6d822c9" category="eeeeeeee" eval_dt="2012-02-11T23:47:02.645Z" evalId="12e007f5-b7c3-4da2-b8b8-4bf066675d1a" xmlns="http://www.xxxxx.com/vvvv/domains/eval/2010/03" xmlns:ns2="http://www.cigna.com/nnnn/domains/derived/fact/2010/03" xmlns:ns3="http://www.xxxxx.com/vvvv/domains/common/2010/03">
<derivedFacts>
<ns2:derivedFact>
<ns2:defId>12345</ns2:defId>
<ns2:defUrn>urn:mmmmrunner:Medical:Definition:DerivedFact:52657:1</ns2:defUrn>
<ns2:factSource>tttt Member</ns2:factSource>
<ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
<ns2:factValue>
<ns2:type>boolean</ns2:type>
<ns2:value>true</ns2:value>
</ns2:factValue>
</ns2:derivedFact>
<ns2:derivedFact>
<ns2:defId>52600</ns2:defId>
<ns2:defUrn>urn:ddddrunner:Medical:Definition:DerivedFact:52600:2</ns2:defUrn>
<ns2:factSource>cccc Member</ns2:factSource>
<ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
<ns2:factValue>
<ns2:type>string</ns2:type>
<ns2:value>null</ns2:value>
</ns2:factValue>
</ns2:derivedFact>
<ns2:derivedFact>
<ns2:defId>59543</ns2:defId>
<ns2:defUrn>urn:ddddunner:Medical:Definition:DerivedFact:52599:1</ns2:defUrn>
<ns2:factSource>dddd Member</ns2:factSource>
<ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
<ns2:factValue>
<ns2:type>string</ns2:type>
<ns2:value>INT</ns2:value>
</ns2:factValue>
</ns2:derivedFact>
With the repeating <ns2:derivedFact> element continuing under <derivedFacts>.
The Oracle XML DB Developer's Guide 11g Release 2 isn't helping much...
Any assistance much appreciated.
Regards,
Rick Blanchard
odie 63, et. al.;
Attached is the reworked select query, xmlindex, and 2ndary indexes. Note: though namespaces are used; we're not registering any schema defns.
SELECT /*+ NO_USE_HASH(eval) */ --/*+ NO_QUERY_REWRITE no_parallel(eval)*/
eval_xml.eval_catt, df.defid FROM eval,
--df.defid FROM eval,
XMLTable(XMLNamespaces( DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7" ),
'/eval' passing eval.object_value
COLUMNS
eval_catt VARCHAR2(50) path '@category',
derivedFact XMLTYPE path '/derivedFacts/ns7:derivedFact')eval_xml,
XMLTable(XMLNamespaces('http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7",
DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03'),
'/ns7:derivedFact' passing eval_xml.derivedFact
COLUMNS
defid VARCHAR2(100) path 'ns7:defId') df
WHERE df.defid IN ('52657','52599') AND eval_xml.eval_catt LIKE 'external';
--where df.defid = '52657';
create index defid_2ndary_ix on eval_idx_tab_II (defID);
eval_catt VARCHAR2(50) path ''@CATEGORY''');
create index eval_catt_2ndary_ix on eval_idx_tab_I (eval_catt);
The xmlindex is getting picked up, but there are a couple of problems:
1. In the development environment, no XML source records for defid '52657' or '52599' are being displayed - just an empty output set occurs, in spite of these values being present in the stored source XML.
This really has me stumped, as I can query the eval table and see that the XML defid values '52657' and '52599' exist. Something appears off with the query - which didn't return records even before the corresponding xmlindex was created.
2. The query still performs slowly, in spite of using the xmlindex. The execution plan shows a full table scan of eval occurring with either a HASH JOIN or a MERGE JOIN (the MERGE JOIN gets used in place of the HASH JOIN when the NO_USE_HASH(eval) hint is used).
3. Single-column secondary indexes created respectively for eval_catt and defid are not used in the execution plan - which may be expected upon further consideration.
In the process of running stats at this moment, to see if performance improves....
At this point, I'm really after why item 1 is occurring.
Edited by: RickBlanchardSRSCigna on Apr 16, 2012 1:33 PM -
Slow Query Using index. Fast with full table Scan.
Hi;
(Thanks for the links)
Here's my question correctly formated.
The query:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
Runs in 32 seconds!
Same query, but with one extra where clause:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD') --- ADDED HERE
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 400 seconds.
It should return data from one table, given the conditions.
The version of the database is Oracle9i Release 9.2.0.7.0
These are the parameters relevant to the optimizer:
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 99
optimizer_index_cost_adj integer 10
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL>
Here is the output of EXPLAIN PLAN for the first, fast query:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
|* 2 | TABLE ACCESS FULL | EHCONS | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE"
IS NULL AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy
-mm-dd
hh24:mi:ss') AND "EC"."TYPE"='BAR')
Note: rule based optimization
Here is the output of EXPLAIN PLAN for the slow query:
PLAN_TABLE_OUTPUT
| |
| 1 | SORT AGGREGATE | | |
| |
|* 2 | TABLE ACCESS BY INDEX ROWID| ehgeoconstru | |
| |
|* 3 | INDEX RANGE SCAN | ehgeoconstru_VSN | |
| |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS
NULL AND "EC"."TYPE"='BAR')
PLAN_TABLE_OUTPUT
3 - access("EC"."CONTEXTVERSION"='REALWORLD' AND "EC"."BIRTHDATE"<=TO_DATE('2
009-10-06
11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
filter("EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:
mi:ss'))
Note: rule based optimization
The TKPROF output for this slow statement is:
TKPROF: Release 9.2.0.7.0 - Production on Tue Nov 17 14:46:32 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: gen_ora_3120.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD')
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 4 0.00 538.12 162221 1355323 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 153
Rows Row Source Operation
1 SORT AGGREGATE
27747 TABLE ACCESS BY INDEX ROWID OBJ#(73959)
2134955 INDEX RANGE SCAN OBJ#(73962) (object id 73962)
alter session set sql_trace=true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.02 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 153
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.02 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 5 0.00 538.15 162221 1355323 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
Trace file: gen_ora_3120.trc
Trace file compatibility: 9.02.00
Sort options: prsela exeela fchela
2 sessions in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
94 lines in trace file.
Edited by: PauloSMO on 17/Nov/2009 4:21 - Changed title to be more correct.
Although your optimizer_mode is choose, it appears that there are no statistics gathered on ehgeoconstru. The lack of cost estimates and estimated row counts from each step of the plan, and the "Note: rule based optimization" at the end of both plans, would tend to confirm this.
Optimizer_mode choose means that if statistics are gathered then it will use the CBO, but if no statistics are present in any of the tables in the query, then the Rule Based Optimizer will be used. The RBO tends to be index happy at the best of times. I'm guessing that the index ehgeoconstru_VSN has contextversion as the leading column and also includes birthdate.
You can either gather statistics on the table (if all of the other tables have statistics) using dbms_stats.gather_table_stats, or hint the query to use a full scan instead of the index. Another alternative would be to apply a function or operation against the contextversion to preclude the use of the index. something like this:
SELECT COUNT(*)
FROM ehgeoconstru ec
WHERE ec.type='BAR' and
ec.contextVersion||'' = 'REALWORLD' and
ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') and
deathdate is null and
SUBSTR(ec.strgfd, 1, LENGTH('[CIMText')) <> '[CIMText'
or perhaps UPPER(ec.contextVersion), if that would not change the rows returned.
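The dbms_stats option mentioned above might be sketched like this - the cascade choice is an assumption; adjust the owner and sampling options to your site:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,            -- or the schema that owns the table
    tabname => 'EHGEOCONSTRU',
    cascade => TRUE);           -- also gather statistics on its indexes
END;
/
```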
John -
Query on GL_BALANCES Table is very slow
Dear Members,
I have prepared a query on GL_BALANCES Table which is as follows:
SQL>
select (nvl(sum(gb.period_net_dr), 0) - nvl(sum(gb.period_net_cr), 0))
INTO V_AMT
from gl_balances gb,
gl_code_combinations gcc,
fnd_flex_value_hierarchies ffvh
where gb.set_of_books_id = 7
and gb.actual_flag = 'B'
and gb.period_year = P_YEAR --'2010'
and gb.budget_version_id = P_BUD_VER_ID--1234
and gb.code_combination_id = gcc.code_combination_id
and gcc.segment3 >= ffvh.child_flex_value_low
and gcc.segment3 <= ffvh.child_flex_value_high
and ffvh.flex_value_set_id = 1012576
and ffvh.parent_flex_value = P_FLEX_VAL;
The above query is taking long time to complete.
Plan of my query is as follows
SELECT STATEMENT, GOAL = CHOOSE Cost=49885 Cardinality=1 Bytes=64
SORT AGGREGATE Cardinality=1 Bytes=64
MERGE JOIN Cost=49885 Cardinality=12 Bytes=768
SORT JOIN Cost=49859 Cardinality=2192 Bytes=74528
HASH JOIN Cost=49822 Cardinality=2192 Bytes=74528
TABLE ACCESS FULL Object owner=GL Object name=GL_BALANCES Cost=47955 Cardinality=2192 Bytes=50416
TABLE ACCESS FULL Object owner=GL Object name=GL_CODE_COMBINATIONS Cost=1854 Cardinality=526850 Bytes=5795350
FILTER
SORT JOIN
INDEX RANGE SCAN Object owner=APPLSYS Object name=FND_FLEX_VALUE_HIERARCHIES_N1 Cost=2 Cardinality=2 Bytes=60
Can any one please help me in tuning this query.
Thanks in advance.
Best Regards.
As per the other replies, you've not really given enough information to go on - what are you trying to achieve, versions, etc.
On my old apps 11.5.8 system, the explain plan for your query uses GL_CODE_COMBINATIONS_U1 rather than a full scan of gl_code_combinations.
Check your stats are up to date (select table_name, num_rows, last_analyzed from dba_tables where ...)
See if you can also use period_name and/or period_set_name (or period_num) from GL_Periods rather than period_year (i.e. use P_YEAR to lookup the period_name/period_set_name/period_num from gl_periods). It might be faster to do per period and then consolidate for the whole year, as there are indexes on gl_balances for columns period_name, period_set_name, period_num.
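That suggestion might be sketched as follows, with the joins to gl_code_combinations and fnd_flex_value_hierarchies omitted for brevity; the gl_periods column names are the standard ones, but the period_set_name bind is an assumption:

```sql
select nvl(sum(gb.period_net_dr), 0) - nvl(sum(gb.period_net_cr), 0)
from   gl_balances gb
where  gb.set_of_books_id   = 7
and    gb.actual_flag       = 'B'
and    gb.budget_version_id = :p_bud_ver_id
and    gb.period_name in (select gp.period_name
                          from   gl_periods gp
                          where  gp.period_year     = :p_year
                          and    gp.period_set_name = :p_period_set);
```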
regards, Ivan -
Slow queries and full table scans in spite of context index
I have defined a USER_DATASTORE, which uses a PL/SQL procedure to compile data from several tables. The master table has 1.3 million rows, and one of the fields being joined is a CLOB field.
The resulting token table has 65,000 rows, which seems about right.
If I query the token table for a word, such as "ORACLE" in the token_text field, I see that the token_count is 139. This query returns instantly.
The query against the master table is very slow, taking about 15 minutes to return the 139 rows.
Example query:
select hnd from master_table where contains(myindex,'ORACLE',1) > 0;
I've run a sql_trace on this query, and it shows full table scans on both the master table and the DR$MYINDEX$I table. Why is it doing this, and how can I fix it?
After looking at the tuning FAQ, I can see that this is doing a functional lookup instead of an indexed lookup. But why, when the rows are not constrained by any structural query, and how can I get it to do an indexed lookup instead?
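One quick experiment is to force the domain index with an INDEX hint and compare the trace - the index name master_idx here is an assumption; use the actual name of the context index:

```sql
select /*+ INDEX(mt master_idx) */ hnd
from   master_table mt
where  contains(mt.myindex, 'ORACLE', 1) > 0;
```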
Thanks in advance,
Annie -
Slow query performance in Oracle 10.2.0.3
Hi,
We have Oracle 10.2.0.3 installed on RHEL 5 (64-bit). We have two queries, one a SELECT and the other an INSERT. First we executed the INSERT query, which inserts 10,000 rows in a table, and then the SELECT query on this table. This works fine in one thread. But when we do the same thing in 10 threads, the INSERT is fine but the SELECT takes a very long time across the 10 threads. Any bug related to parallel execution of SELECT queries in 10.2.0.3? Any suggestions?
Thanks in advance.
Regards,
RJ.
Justin,
We have the same queries for INSERT and SELECT in 10 manual sessions, out of which the SELECT query is taking more time to execute. Please refer to the waits given below. No, there is no bottleneck as far as hardware is concerned, because we tested it on different server configurations.
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 52 93.2
latch: cache buffers chains 45,542 6 0 10.7 Concurrency
log file parallel write 2,107 3 1 5.2 System I/O
log file sync 805 2 2 3.5 Commit
latch: session allocation 5,116 1 0 2.6 Other Wait Events
• s - second
• cs - centisecond - 100th of a second
• ms - millisecond - 1000th of a second
• us - microsecond - 1000000th of a second
• ordered by wait time desc, waits desc (idle events last)
Event Waits %Time-outs Total Wait Time (s) Avg wait (ms) Waits /txn
latch: cache buffers chains 45,542 0.00 6 0 22.99
log file parallel write 2,107 0.00 3 1 1.06
log file sync 805 0.00 2 2 0.41
latch: session allocation 5,116 0.00 1 0 2.58
buffer busy waits 20,482 0.00 1 0 10.34
db file sequential read 157 0.00 1 4 0.08
control file parallel write 1,330 0.00 0 0 0.67
wait list latch free 39 0.00 0 10 0.02
enq: TX - index contention 632 0.00 0 0 0.32
latch free 996 0.00 0 0 0.50
SQL*Net break/reset to client 1,738 0.00 0 0 0.88
SQL*Net message to client 108,947 0.00 0 0 55.00
os thread startup 2 0.00 0 19 0.00
cursor: pin S wait on X 3 100.00 0 11 0.00
latch: In memory undo latch 136 0.00 0 0 0.07
log file switch completion 4 0.00 0 7 0.00
latch: shared pool 119 0.00 0 0 0.06
latch: undo global data 121 0.00 0 0 0.06
buffer deadlock 238 99.58 0 0 0.12
control file sequential read 1,735 0.00 0 0 0.88
SQL*Net more data to client 506 0.00 0 0 0.26
log file single write 2 0.00 0 2 0.00
SQL*Net more data from client 269 0.00 0 0 0.14
reliable message 12 0.00 0 0 0.01
LGWR wait for redo copy 26 0.00 0 0 0.01
rdbms ipc reply 6 0.00 0 0 0.00
latch: library cache 7 0.00 0 0 0.00
latch: redo allocation 2 0.00 0 0 0.00
enq: RO - fast object reuse 2 0.00 0 0 0.00
direct path write 21 0.00 0 0 0.01
cursor: pin S 1 0.00 0 0 0.00
log file sequential read 2 0.00 0 0 0.00
direct path read 8 0.00 0 0 0.00
SQL*Net message from client 108,949 0.00 43,397 398 55.00
jobq slave wait 14,527 49.56 35,159 2420 7.33
Streams AQ: qmn slave idle wait 246 0.00 3,524 14326 0.12
Streams AQ: qmn coordinator idle wait 451 45.45 3,524 7814 0.23
wait for unread message on broadcast channel 3,597 100.00 3,516 978 1.82
virtual circuit status 120 100.00 3,516 29298 0.06
class slave wait 2 0.00 0 0 0.00
Message was edited by: RJiv -
Very Slow Query with CTE inner join
I have 2 tables (heavily simplified here to show relevant columns):
CREATE TABLE tblCharge
(ChargeID int NOT NULL,
ParentChargeID int NULL,
ChargeName varchar(200) NULL)
CREATE TABLE tblChargeShare
(ChargeShareID int NOT NULL,
ChargeID int NOT NULL,
TotalAmount money NOT NULL,
TaxAmount money NULL,
DiscountAmount money NULL,
CustomerID int NOT NULL,
ChargeShareStatusID int NOT NULL)
I have a very basic View to Join them:
CREATE VIEW vwBASEChargeShareRelation as
Select c.ChargeID, ParentChargeID, s.CustomerID, s.TotalAmount, isnull(s.TaxAmount, 0) as TaxAmount, isnull(s.DiscountAmount, 0) as DiscountAmount
from tblCharge c inner join tblChargeShare s
on c.ChargeID = s.ChargeID Where s.ChargeShareStatusID < 3
GO
I then have a view containing a CTE to get the children of the Parent Charge:
ALTER VIEW [vwChargeShareSubCharges] AS
WITH RCTE AS
SELECT ParentChargeId, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(TaxAmount, 0) as TaxAmount,
ISNULL(DiscountAmount, 0) as DiscountAmount, CustomerID, ChargeID as MasterChargeID
FROM vwBASEChargeShareRelation Where ParentChargeID is NULL
UNION ALL
SELECT rh.ParentChargeID, rh.ChargeID, Lvl+1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0) , rh.CustomerID
, rc.MasterChargeID
FROM vwBASEChargeShareRelation rh
INNER JOIN RCTE rc ON rh.PArentChargeID = rc.ChargeID and rh.CustomerID = rc.CustomerID
Select MasterChargeID as ChargeID, CustomerID, Sum(TotalAmount) as TotalCharged, Sum(TaxAmount) as TotalTax, Sum(DiscountAmount) as TotalDiscount
from RCTE
Group by MasterChargeID, CustomerID
GO
So far so good, I can query this view and get the total cost for a line item including all children.
The problem occurs when I join this table. The query:
Select t.* from vwChargeShareSubCharges t
inner join
tblChargeShare s
on t.CustomerID = s.CustomerID
and t.MasterChargeID = s.ChargeID
Where s.ChargeID = 1291094
Takes around 30 ms to return a result (tblCharge and tblChargeShare have around 3.5 million records).
But the query:
Select t.* from vwChargeShareSubCharges t
inner join
tblChargeShare s
on t.CustomerID = s.CustomerID
and t.MasterChargeID = s.ChargeID
Where InvoiceID = 1045854
Takes around 2 minutes to return a result - even though the only charge with that InvoiceID is the same charge as the one used in the previous query.
The same thing occurs if I do the join in the same query that the CTE is defined in.
I ran the execution plan for each query. The first (fast) query looks like this:
The second(slow) query looks like this:
I am at a loss, and my skills at decoding execution plans to resolve this are lacking.
I have separate indexes on tblCharge.ChargeID, tblCharge.ParentChargeID, tblChargeShare.ChargeID, tblChargeShare.InvoiceID, tblChargeShare.ChargeShareStatusID
Any ideas? Tested on SQL 2008R2 and SQL 2012>> The database is linked [sic] to an established app and the column and table names can't be changed. <<
Link? That is a term from pointer chains and network databases, not SQL. I will guess that means the app came back in the old pre-RDBMS days and you are screwed.
>> I am not too worried about the money field [sic], this is used for money and money based calculations so the precision and rounding are acceptable at this level. <<
Field is a COBOL concept; columns are totally different. MONEY is how Sybase mimics the PICTURE clause that puts currency signs, commas, period, etc in a COBOL money field.
Using more than one operation (multiplication or division) on money columns will produce severe rounding errors. A simple way to visualize money arithmetic is to place a ROUND() function call after every operation. For example,
Amount = (Portion / total_amt) * gross_amt
can be rewritten using money arithmetic as:
Amount = ROUND(ROUND(Portion / total_amt, 4) * gross_amt, 4)
Rounding to four decimal places might not seem an issue, until the numbers you are using are greater than 10,000.
BEGIN
DECLARE @gross_amt MONEY,
@total_amt MONEY,
@my_part MONEY,
@money_result MONEY,
@float_result FLOAT,
@all_floats FLOAT;
SET @gross_amt = 55294.72;
SET @total_amt = 7328.75;
SET @my_part = 1793.33;
SET @money_result = (@my_part / @total_amt) *
@gross_amt;
SET @float_result = (@my_part / @total_amt) *
@gross_amt;
SET @all_floats = (CAST(@my_part AS FLOAT)
/ CAST(@total_amt AS FLOAT))
* CAST(@gross_amt AS FLOAT);
SELECT @money_result, @float_result, @all_floats;
END;
@money_result = 13525.09 -- incorrect
@float_result = 13525.0885 -- incorrect
@all_floats = 13530.5038673171 -- correct; the money versions are off by about 5.42
>> The keys are ChargeID(int, identity) and ChargeShareID(int, identity). <<
Sorry, but IDENTITY is not relational and cannot be a key by definition. But it sure works just like a record number in your old COBOL file system.
>> .. these need to be int so that they are assigned by the database and unique. <<
No, the data type of a key is not determined by physical storage, but by logical design. IDENTITY is the number of a parking space in a garage; a VIN is how you identify the automobile.
>> What would you recommend I use as keys? <<
I do not know. I have no specs and without that, I cannot pull a Kabbalah number from the hardware. Your magic numbers can identify Squids, Automobile or Lady Gaga! I would ask the accounting department how they identify a charge.
>> Charge_Share_Status_ID links [sic] to another table which contains the name, formatting [sic] and other information [sic] or a charge share's status, so it is both an Id and a status. <<
More pointer chains! Formatting? Unh? In RDBMS, we use a tiered architecture. That means display formatting is in a presentation layer. A properly created table has cohesion - it models one and only one kind of thing. A status is a state of being that applies to an entity over a period of time (think employment, marriage, etc. status if that is too abstract).
An identifier is based on the Law of Identity from formal logic - "To be is to be something in particular" or "A is A" informally. There is no entity here! The Charge_Share_Status table should have the encoded values for a status, and perhaps a description if they are unclear. If the list of values is clear, short and static, then use a CHECK() constraint.
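For instance, a sketch with invented status values - the real list would come from the business rules:

```sql
CREATE TABLE ChargeShares
(charge_share_nbr INTEGER NOT NULL PRIMARY KEY,
 charge_share_status VARCHAR(10) DEFAULT 'pending' NOT NULL
   CHECK (charge_share_status IN ('pending', 'billed', 'paid')));
```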
On a scale from 1 to 10, what color is your favorite letter of the alphabet? Yes, this is literally that silly and wrong.
>> I understand what a CTE is; is there a better way to sum all children for a parent hierarchy? <<
There are many ways to represent a tree or hierarchy in SQL. This is called an adjacency list model and it looks like this:
CREATE TABLE OrgChart
(emp_name CHAR(10) NOT NULL PRIMARY KEY,
boss_emp_name CHAR(10) REFERENCES OrgChart(emp_name),
salary_amt DECIMAL(6,2) DEFAULT 100.00 NOT NULL,
<< horrible cycle constraints >>);
OrgChart
emp_name boss_emp_name salary_amt
==============================
'Albert' NULL 1000.00
'Bert' 'Albert' 900.00
'Chuck' 'Albert' 900.00
'Donna' 'Chuck' 800.00
'Eddie' 'Chuck' 700.00
'Fred' 'Chuck' 600.00
This approach will wind up with really ugly code -- CTEs hiding recursive procedures, horrible cycle prevention code, etc. The root of your problem is not knowing that rows are not records and that SQL uses sets; you are trying to fake pointer chains with some vague, magical non-relational "id".
This matches the way we did it in old file systems with pointer chains. Non-RDBMS programmers are comfortable with it because it looks familiar -- it looks like records and not rows.
Another way of representing trees is to show them as nested sets.
Since SQL is a set oriented language, this is a better model than the usual adjacency list approach you see in most text books. Let us define a simple OrgChart table like this.
CREATE TABLE OrgChart
(emp_name CHAR(10) NOT NULL PRIMARY KEY,
lft INTEGER NOT NULL UNIQUE CHECK (lft > 0),
rgt INTEGER NOT NULL UNIQUE CHECK (rgt > 1),
CONSTRAINT order_okay CHECK (lft < rgt));
OrgChart
emp_name lft rgt
======================
'Albert' 1 12
'Bert' 2 3
'Chuck' 4 11
'Donna' 5 6
'Eddie' 7 8
'Fred' 9 10
The (lft, rgt) pairs are like tags in a mark-up language, or parens in algebra, BEGIN-END blocks in Algol-family programming languages, etc. -- they bracket a sub-set. This is a set-oriented approach to trees in a set-oriented language.
The organizational chart would look like this as a directed graph:
                 Albert (1, 12)
                /              \
        Bert (2, 3)        Chuck (4, 11)
                          /      |      \
              Donna (5, 6) Eddie (7, 8) Fred (9, 10)
The adjacency list table is denormalized in several ways. We are modeling both the Personnel and the Organizational chart in one table. But for the sake of saving space, pretend that the names are job titles and that we have another table which describes the
Personnel that hold those positions.
Another problem with the adjacency list model is that the boss_emp_name and employee columns are the same kind of thing (i.e. identifiers of personnel), and therefore should be shown in only one column in a normalized table. To prove that this is not
normalized, assume that "Chuck" changes his name to "Charles"; you have to change his name in both columns and several places. The defining characteristic of a normalized table is that you have one fact, one place, one time.
The final problem is that the adjacency list model does not model subordination. Authority flows downhill in a hierarchy, but if I fire Chuck, I disconnect all of his subordinates from Albert. There are situations (i.e. water pipes) where this is true, but
that is not the expected situation in this case.
To show a tree as nested sets, replace the nodes with ovals, and then nest subordinate ovals inside each other. The root will be the largest oval and will contain every other node. The leaf nodes will be the innermost ovals with nothing else inside them
and the nesting will show the hierarchical relationship. The (lft, rgt) columns (I cannot use the reserved words LEFT and RIGHT in SQL) are what show the nesting. This is like XML, HTML or parentheses.
At this point, the boss_emp_name column is both redundant and denormalized, so it can be dropped. Also, note that the tree structure can be kept in one table and all the information about a node can be put in a second table and they can be joined on employee
number for queries.
To convert the graph into a nested sets model, think of a little worm crawling along the tree. The worm starts at the top, the root, and makes a complete trip around the tree. When he comes to a node, he puts a number in the cell on the side that he is visiting
and increments his counter. Each node will get two numbers, one for the right side and one for the left. Computer Science majors will recognize this as a modified preorder tree traversal algorithm. Finally, drop the unneeded OrgChart.boss_emp_name column
which used to represent the edges of a graph.
This has some predictable results that we can use for building queries. The root is always (left = 1, right = 2 * (SELECT COUNT(*) FROM TreeTable)); leaf nodes always have (left + 1 = right); subtrees are defined by the BETWEEN predicate; etc. Here are
two common queries which can be used to build others:
1. An employee and all their Supervisors, no matter how deep the tree.
SELECT O2.*
FROM OrgChart AS O1, OrgChart AS O2
WHERE O1.lft BETWEEN O2.lft AND O2.rgt
AND O1.emp_name = :in_emp_name;
2. The employee and all their subordinates. There is a nice symmetry here.
SELECT O1.*
FROM OrgChart AS O1, OrgChart AS O2
WHERE O1.lft BETWEEN O2.lft AND O2.rgt
AND O2.emp_name = :in_emp_name;
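To see the bracketing at work, both queries can be exercised against the sample data in any SQL engine; here is a minimal sketch using Python's standard-library sqlite3 as a stand-in (the host-variable name :in_emp_name comes from the queries above; everything else matches the sample table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OrgChart
(emp_name CHAR(10) NOT NULL PRIMARY KEY,
 lft INTEGER NOT NULL UNIQUE CHECK (lft > 0),
 rgt INTEGER NOT NULL UNIQUE CHECK (rgt > 1),
 CONSTRAINT order_okay CHECK (lft < rgt));
INSERT INTO OrgChart VALUES
 ('Albert', 1, 12), ('Bert', 2, 3), ('Chuck', 4, 11),
 ('Donna', 5, 6), ('Eddie', 7, 8), ('Fred', 9, 10);
""")

# 1. An employee and all their supervisors: ancestors are exactly the rows
#    whose (lft, rgt) interval contains the employee's lft.
sups = conn.execute("""
    SELECT O2.emp_name
      FROM OrgChart AS O1, OrgChart AS O2
     WHERE O1.lft BETWEEN O2.lft AND O2.rgt
       AND O1.emp_name = :in_emp_name
     ORDER BY O2.lft""", {"in_emp_name": "Eddie"}).fetchall()
print([r[0] for r in sups])   # ['Albert', 'Chuck', 'Eddie']

# 2. The employee and all their subordinates: the symmetric containment test.
subs = conn.execute("""
    SELECT O1.emp_name
      FROM OrgChart AS O1, OrgChart AS O2
     WHERE O1.lft BETWEEN O2.lft AND O2.rgt
       AND O2.emp_name = :in_emp_name
     ORDER BY O1.lft""", {"in_emp_name": "Chuck"}).fetchall()
print([r[0] for r in subs])   # ['Chuck', 'Donna', 'Eddie', 'Fred']
```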
3. Add a GROUP BY and aggregate functions to these basic queries and you have hierarchical reports. For example, the total salaries which each employee controls:
SELECT O2.emp_name, SUM(S1.salary_amt)
FROM OrgChart AS O1, OrgChart AS O2,
Salaries AS S1
WHERE O1.lft BETWEEN O2.lft AND O2.rgt
AND S1.emp_name = O2.emp_name
GROUP BY O2.emp_name;
4. To find the level and the size of the subtree rooted at each emp_name, so you can print the tree as an indented listing:
SELECT O1.emp_name,
SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt
THEN 1 ELSE 0 END) AS subtree_size,
SUM(CASE WHEN O1.lft BETWEEN O2.lft AND O2.rgt
THEN 1 ELSE 0 END) AS lvl
FROM OrgChart AS O1, OrgChart AS O2
GROUP BY O1.emp_name;
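Run against the sample data, the counters give one (subtree_size, lvl) pair per node, which is enough to drive an indented listing. A quick sqlite3 sketch (the ORDER BY MIN(O1.lft) is added here only to print the nodes in traversal order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OrgChart (emp_name TEXT PRIMARY KEY, lft INTEGER, rgt INTEGER);
INSERT INTO OrgChart VALUES ('Albert',1,12),('Bert',2,3),('Chuck',4,11),
                            ('Donna',5,6),('Eddie',7,8),('Fred',9,10);
""")

rows = conn.execute("""
    SELECT O1.emp_name,
           SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt THEN 1 ELSE 0 END)
               AS subtree_size,   -- nodes contained in O1's interval
           SUM(CASE WHEN O1.lft BETWEEN O2.lft AND O2.rgt THEN 1 ELSE 0 END)
               AS lvl             -- intervals containing O1 = depth
      FROM OrgChart AS O1, OrgChart AS O2
     GROUP BY O1.emp_name
     ORDER BY MIN(O1.lft)""").fetchall()

for name, size, lvl in rows:
    print("  " * (lvl - 1) + name, size)   # indented listing
```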
5. The nested set model has an implied ordering of siblings which the adjacency list model does not. To insert a new node, G1, under part G, we can insert one node at a time like this:
BEGIN ATOMIC
DECLARE rightmost_spread INTEGER;
SET rightmost_spread
= (SELECT rgt
FROM Frammis
WHERE part = 'G');
UPDATE Frammis
SET lft = CASE WHEN lft > rightmost_spread
THEN lft + 2
ELSE lft END,
rgt = CASE WHEN rgt >= rightmost_spread
THEN rgt + 2
ELSE rgt END
WHERE rgt >= rightmost_spread;
INSERT INTO Frammis (part, lft, rgt)
VALUES ('G1', rightmost_spread, (rightmost_spread + 1));
COMMIT WORK;
END;
The idea is to spread the (lft, rgt) numbers after the youngest child of the parent, G in this case, over by two to make room for the new addition, G1. This procedure will add the new node to the rightmost child position, which helps to preserve the idea
of an age order among the siblings.
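The same gap-opening logic can be scripted outside the database as well. Below is a sketch in Python with sqlite3, using a tiny invented Frammis tree (the parts 'A' and 'G' and their numbers are assumptions, since the original post never shows the Frammis data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Frammis (part TEXT PRIMARY KEY, lft INTEGER, rgt INTEGER)")
# Invented tree: A contains G; G currently has no children.
conn.executemany("INSERT INTO Frammis VALUES (?, ?, ?)",
                 [('A', 1, 4), ('G', 2, 3)])

def insert_rightmost_child(conn, parent, child):
    """Open a 2-wide gap just inside the parent's rgt and drop the child in."""
    (spread,) = conn.execute(
        "SELECT rgt FROM Frammis WHERE part = ?", (parent,)).fetchone()
    conn.execute("UPDATE Frammis SET lft = lft + 2 WHERE lft > ?", (spread,))
    conn.execute("UPDATE Frammis SET rgt = rgt + 2 WHERE rgt >= ?", (spread,))
    conn.execute("INSERT INTO Frammis VALUES (?, ?, ?)",
                 (child, spread, spread + 1))
    conn.commit()

insert_rightmost_child(conn, 'G', 'G1')
tree = conn.execute("SELECT part, lft, rgt FROM Frammis ORDER BY lft").fetchall()
print(tree)   # [('A', 1, 6), ('G', 2, 5), ('G1', 3, 4)]
```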
6. To convert a nested sets model into an adjacency list model:
SELECT B.emp_name AS boss_emp_name, E.emp_name
FROM OrgChart AS E
LEFT OUTER JOIN
OrgChart AS B
ON B.lft
= (SELECT MAX(lft)
FROM OrgChart AS S
WHERE E.lft > S.lft
AND E.lft < S.rgt);
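The conversion is easy to verify: the MAX(lft) subquery picks the innermost enclosing interval, which is the immediate boss. A sqlite3 sketch over the sample OrgChart:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OrgChart (emp_name TEXT PRIMARY KEY, lft INTEGER, rgt INTEGER);
INSERT INTO OrgChart VALUES ('Albert',1,12),('Bert',2,3),('Chuck',4,11),
                            ('Donna',5,6),('Eddie',7,8),('Fred',9,10);
""")

# For each employee E, the boss B is the row with the largest lft among
# the intervals strictly containing E.lft; the root matches nothing (NULL).
pairs = conn.execute("""
    SELECT B.emp_name AS boss_emp_name, E.emp_name
      FROM OrgChart AS E
      LEFT OUTER JOIN OrgChart AS B
        ON B.lft = (SELECT MAX(lft) FROM OrgChart AS S
                     WHERE E.lft > S.lft AND E.lft < S.rgt)
     ORDER BY E.lft""").fetchall()
print(pairs)
# [(None, 'Albert'), ('Albert', 'Bert'), ('Albert', 'Chuck'),
#  ('Chuck', 'Donna'), ('Chuck', 'Eddie'), ('Chuck', 'Fred')]
```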
7. To find the immediate parent of a node (the strict inequalities exclude the node itself, so the innermost proper ancestor is returned):
SELECT MAX(P2.lft), MIN(P2.rgt)
FROM Personnel AS P1, Personnel AS P2
WHERE P1.lft > P2.lft
AND P1.lft < P2.rgt
AND P1.emp_name = @my_emp_name;
I have a book on TREES & HIERARCHIES IN SQL which you can get at Amazon.com right now. It has a lot of other programming idioms for nested sets, like levels, structural comparisons, re-arrangement procedures, etc.
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
in Sets / Trees and Hierarchies in SQL -
Select query on a table with 13 million of rows
Hi guys,
I have been trying to run a SELECT query on a table which has 13 million rows; it took around 58 minutes to complete.
The table has 8 columns, 4 of which form the composite primary key, as shown below:
(PK) SegmentID > INT
(PK) IPAddress > VARCHAR (45)
MAC Address > VARCHAR (45)
(PK) Application Name > VARCHAR (45)
Total Bytes > INT
Dates > VARCHAR (45)
Times > VARCHAR (45)
(PK) DateTime > DATETIME
The sql query format is :
select ipaddress, macaddress, sum(totalbytes), applicationname , dates,
times from appstat where segmentid = 1 and datetime between '2011-01-03
15:00:00.0' and '2011-01-04 15:00:00.0' group by ipaddress,
applicationname order by applicationname, sum(totalbytes) desc
Is there a way I can improve this query to be faster (through my.conf or any other method)?
Any feedback is welcomed.
Thank you.
MusTolls wrote:
What db is this?
You never said.
Anyway, it looks like it's using the Primary Key to find the correct rows.
Is that the correct number of rows returned?
5 million?
Sorted?
I am using MySQL. By the way, the query time has become much faster (22 seconds) after I changed the configuration file (based on my-huge.cnf).
The number of rows returned is 7,999 rows.
This is some portion of the my.cnf
# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 800M
max_allowed_packet = 1M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
log = /var/log/mysql.log
log-slow-queries = /var/log/mysqld.slow.log
long_query_time=10
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 6
Is there anything else I need to tune so it can be faster?
Thanks a bunch.
Edited by: user578505 on Jan 17, 2011 6:47 PM -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
The query executes within a second from RapidSQL. The problem I'm facing is that it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions; it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate= date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb
I have an existing clustered index on the SomeDate and SomeString fields.
To check I replaced the where clause with
WHERE SomeDate= date_entered_by_user AND SomeString = "aaa"
No improvements.
What could be the problem?
Thank you,
Lobo
It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up execution inside the RDBMS is to streamline the internal operations of the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement, and these things take time. First, it checks the SQL statement for syntax errors. Second, it checks that all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time of the three, but they all take time, and the speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan over and over again.
The stored execution plan will enable the engine to execute the query faster.
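Much of the same parse-once-execute-many effect is available from client code through parameterized statements, which is worth trying before reaching for a stored procedure. A minimal illustration with Python's sqlite3, which caches the compiled statement when the identical SQL text is re-executed (the table and column names are stand-ins echoing the question, not the poster's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (aaa TEXT, ccc INTEGER, SomeDate TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)",
                 [("x", 1, "2011-01-03"), ("x", 2, "2011-01-03"),
                  ("y", 5, "2011-01-04")])

# One SQL string, parameterized: the statement is parsed and planned once,
# then reused on every call instead of being recompiled per literal value.
SQL = "SELECT aaa, SUM(ccc) FROM MyTable WHERE SomeDate = ? GROUP BY aaa"
results = {d: conn.execute(SQL, (d,)).fetchall()
           for d in ("2011-01-03", "2011-01-04")}
print(results)   # {'2011-01-03': [('x', 3)], '2011-01-04': [('y', 5)]}
```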
-
CV04N takes long time to process select query on DRAT table
Hello Team,
While using CV04N to display DIRs, it takes a long time to process the select query on the DRAT table. This query includes all the key fields. Any idea as to how to analyse this?
Thanks and best regards,
Bobby
Moderator message: please read the sticky threads of this forum, there is a lot of information on what you can do.
Edited by: Thomas Zloch on Feb 24, 2012
Be aware that XP takes approximately 1 GB of your RAM, leaving you with 1 GB for whatever else is running. MS Outlook is also a memory hog.
To check Virtual Memory Settings:
Control Panel -> System
System Properties -> Advanced Tab -> Performance Settings
Performance Options -> Advanced Tab -> Virtual Memory section
Virtual Memory -
what are
* Initial Size
* Maximum Size
In a presentation at one of the Hyperion conferences years ago, Mark Ostroff suggested that the initial be set to the same as Max. (Max is typically 2x physical RAM)
These changes may provide some improvement. -
Are Selects from ( two or more ) Virtual Tables Possible?
Hello.
This is what I mean by a virtual table - and this works in OBIEE -
select
saw_0, saw_2, saw_3
from
SELECT
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number saw_0,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Creation_Date saw_2,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Closed_Date saw_3
FROM
"[Noetix-NoetixGlobalRepository] NoetixViews for Oracle Service"
WHERE
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number BETWEEN '100000' AND '100009'
) t1
However, this does not work -
[ Note that each of the nested queries works ok on its own. ]
select
saw_0, saw_2, saw_3, saw_4
from
(SELECT
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number saw_0,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Creation_Date saw_2,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Closed_Date saw_3
FROM
"[Noetix-NoetixGlobalRepository] NoetixViews for Oracle Service"
WHERE
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number BETWEEN '100000' AND '100009' ) t1,
(SELECT
TOPN("- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number, 3) saw_4,
FROM
"[Noetix-NoetixGlobalRepository] NoetixViews for Oracle Service"
WHERE
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number BETWEEN '100000' AND '100009' ) t2
) t3
My question is this - is there a syntactic variation on this that does work? Or is it impossible?
Thank you!
Charlie
Hello, User.
I think what I'm trying to achieve is unclear. Consider first this query -
SELECT
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number saw_0,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Creation_Date saw_2,
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Order_Closed_Date saw_3
FROM
"[Noetix-NoetixGlobalRepository] NoetixViews for Oracle Service"
WHERE
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number BETWEEN '100000' AND '100009'
Let's call that the "business result".
What I'm trying to do is get a result set that has 3 rows for every row in the "business result" set. I'm trying to get this by coercing OBIEE to join the business result to an arbitrary 3-row set whose only purpose is to create that cartesian product result.
So I create a second SQL expression -
SELECT
TOPN("- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number, 3) saw_4,
FROM
"[Noetix-NoetixGlobalRepository] NoetixViews for Oracle Service"
WHERE
"- Nx_CSDG0_Repair_Orders (Depot Repair Views)".Repair_Number BETWEEN '100000' AND '100009'
- which returns 3 rows of anything.
I want to turn this -
1 ABC 20.95
2 DEF 19.95
3 XYZ 24.95
Into this -
A 1 ABC 20.95
B 1 ABC 20.95
C 1 ABC 20.95
A 2 DEF 19.95
B 2 DEF 19.95
C 2 DEF 19.95
A 3 XYZ 24.95
B 3 XYZ 24.95
C 3 XYZ 24.95
My attempt to force a join fails - and I'm asking if there is a syntax tweak (or another approach entirely) that will do the trick.
Thank you.
Charlie -
Error in Creating virtual tables
Hi
I get back a SQL error (SQL statement not properly ended) in the case of Oracle for the following query. Is it not possible to create virtual tables in Oracle? I work on Oracle 9i.
CREATE GLOBAL TEMPORARY TABLE derTbl2143679361 (DerCol0, DerCol1, DerCol2, MemberCol0, OrdCol1) ON COMMIT PRESERVE ROWS AS
SELECT TBC.SUPPLIER.COUNTYID, 'AMOUNT', SUM(AMOUNT)
, TBC.SUPPLIER.COUNTRY
, TBC.SUPPLIER.COUNTYID
FROM (TBC.SUPPLIER join TBC.SALES on (TBC.SALES.SUPPLIERID = TBC.SUPPLIER.SUPPLIERID))
GROUP BY TBC.SUPPLIER.COUNTYID,TBC.SUPPLIER.COUNTRY
/*The above query works fine*/
SELECT NETEMP5.Member0, 'AMOUNT', derTbl214367936.DerCol2
, NETEMP5.Ord1
FROM (SELECT DISTINCT COALESCE(derTbl2143679361.DerCol0, -2147483647), COALESCE(derTbl2143679361.MemberCol0, '$U$6$N$9$'), COALESCE(derTbl2143679361.OrdCol1, -2147483647) FROM derTbl2143679361) NETEMP5(KeyCol0,Member0,Ord1)
left outer join derTbl2143679361 on ((COALESCE(NETEMP5.KeyCol0, -2147483647) = COALESCE(derTbl2143679361.DerCol0, -2147483647)))
GROUP BY NETEMP5.Member0,derTbl2143679361.DerCol2, NETEMP5.Ord1
ORDER BY 4
/*This query throws the error*/
Can I not assign Aliases to the virtual table in this form - NETEMP5(KeyCol0,Member0,Ord1)??
Thanks
Hi,
This is the error that I get back
ORA-00933- SQL command not properly ended
It seems that the problem is with the way I use the Alias. The following query works fine
SELECT NETEMP5.Member0, 'AMOUNT', derTbl214367936.DerCol2
, NETEMP5.Ord1
FROM (SELECT DISTINCT COALESCE(derTbl2143679361.DerCol0, -2147483647) KeyCol0, COALESCE(derTbl2143679361.MemberCol0, '$U$6$N$9$') Member0, COALESCE(derTbl2143679361.OrdCol1, -2147483647) Ord1 FROM derTbl2143679361) NETEMP5
left outer join derTbl2143679361 on ((COALESCE(NETEMP5.KeyCol0, -2147483647) = COALESCE(derTbl2143679361.DerCol0, -2147483647)))
GROUP BY NETEMP5.Member0,derTbl2143679361.DerCol2, NETEMP5.Ord1
ORDER BY 4
Here I use the alias following the column name itself. But is it not possible to have the aliases with the virtual table name (i.e. NETEMP5(KeyCol0, Member0, Ord1))? This works fine on MSSQL and DB2.
Thanks -
Stumbled on a slow query - can anyone look into why it is so slow?
I just stumbled on a slow query. Can anyone please guess why this query is so slow? Do I need to change anything in it?
Pid=32521 Tid=2884070320 03/26/2011 07:54:19.176 - Cursor wm09_2_49107 took 27996 ms elapsed time and 27995 ms db time for 1 fetches. sql string:
SELECT ALLOC_INVN_DTL.ALLOC_INVN_DTL_ID, ALLOC_INVN_DTL.WHSE, ALLOC_INVN_DTL.SKU_ID, ALLOC_INVN_DTL.INVN_TYPE, ALLOC_INVN_DTL.PROD_STAT, ALLOC_INVN_DTL.BATCH_NBR, ALLOC_INVN_DTL.SKU_ATTR_1, ALLOC_INVN_DTL.SKU_ATTR_2, ALLOC_INVN_DTL.SKU_ATTR_3, ALLOC_INVN_DTL.SKU_ATTR_4, ALLOC_INVN_DTL.SKU_ATTR_5, ALLOC_INVN_DTL.CNTRY_OF_ORGN, ALLOC_INVN_DTL.ALLOC_INVN_CODE, ALLOC_INVN_DTL.CNTR_NBR, ALLOC_INVN_DTL.TRANS_INVN_TYPE, ALLOC_INVN_DTL.PULL_LOCN_ID, ALLOC_INVN_DTL.INVN_NEED_TYPE, ALLOC_INVN_DTL.TASK_TYPE, ALLOC_INVN_DTL.TASK_PRTY, ALLOC_INVN_DTL.TASK_BATCH, ALLOC_INVN_DTL.ALLOC_UOM, ALLOC_INVN_DTL.ALLOC_UOM_QTY, ALLOC_INVN_DTL.QTY_PULLD, ALLOC_INVN_DTL.FULL_CNTR_ALLOCD, ALLOC_INVN_DTL.ORIG_REQMT, ALLOC_INVN_DTL.QTY_ALLOC, ALLOC_INVN_DTL.DEST_LOCN_ID, ALLOC_INVN_DTL.TASK_GENRTN_REF_CODE, ALLOC_INVN_DTL.TASK_GENRTN_REF_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_CODE, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR, ALLOC_INVN_DTL.ERLST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_START_DATE_TIME, ALLOC_INVN_DTL.LTST_CMPL_DATE_TIME, ALLOC_INVN_DTL.NEED_ID, ALLOC_INVN_DTL.STAT_CODE, ALLOC_INVN_DTL.CREATE_DATE_TIME, ALLOC_INVN_DTL.MOD_DATE_TIME, ALLOC_INVN_DTL.USER_ID, ALLOC_INVN_DTL.PKT_CTRL_NBR, ALLOC_INVN_DTL.REQD_INVN_TYPE, ALLOC_INVN_DTL.REQD_PROD_STAT, ALLOC_INVN_DTL.REQD_BATCH_NBR, ALLOC_INVN_DTL.REQD_SKU_ATTR_1, ALLOC_INVN_DTL.REQD_SKU_ATTR_2, ALLOC_INVN_DTL.REQD_SKU_ATTR_3, ALLOC_INVN_DTL.REQD_SKU_ATTR_4, ALLOC_INVN_DTL.REQD_SKU_ATTR_5, ALLOC_INVN_DTL.REQD_CNTRY_OF_ORGN, ALLOC_INVN_DTL.PKT_SEQ_NBR, ALLOC_INVN_DTL.CARTON_NBR, ALLOC_INVN_DTL.CARTON_SEQ_NBR, ALLOC_INVN_DTL.PIKR_NBR, ALLOC_INVN_DTL.PULL_LOCN_SEQ_NBR, ALLOC_INVN_DTL.DEST_LOCN_SEQ_NBR, ALLOC_INVN_DTL.TASK_CMPL_REF_NBR_SEQ, ALLOC_INVN_DTL.SUBSTITUTION_FLAG, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_1, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_2, ALLOC_INVN_DTL.MISC_ALPHA_FIELD_3, ALLOC_INVN_DTL.CD_MASTER_ID FROM ALLOC_INVN_DTL WHERE ( ( ( ( ( ( ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1 ) AND ( ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2 ) ) AND ( ALLOC_INVN_DTL.SKU_ID 
= :3 ) ) AND ( ALLOC_INVN_DTL.CNTR_NBR = :4 ) ) AND ( ALLOC_INVN_DTL.STAT_CODE < 1 ) ) AND ( ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL ) )
input variables
1: Address(0xabe74300) Length(0) Type(8) "2" - No Indicator
2: Address(0x8995474) Length(0) Type(8) "PERP014119" - No Indicator
3: Address(0xab331f1c) Length(0) Type(8) "MB57545217" - No Indicator
4: Address(0xab31e32c) Length(0) Type(8) "T0000000000000078257" - No Indicator
784786 wrote:
>> I just stumbled on a slow query. Can anyone please guess why this query is so slow? <<
Without more information I cannot tell you why it is slow, but I can certainly tell you why it is impossible to read.
Just because SQL allows unformatted query text does not mean it is a good idea. Why not bring some sanity to this for your own sake, not to mention for the people whom you expect to actually read and analyze this mess? I wish someone could explain to me why people write these long stream-of-consciousness queries.
When posting to this forum you should use the code tags to bracket your code and preserve the formatting. Then the code should actually be formatted:
SELECT
ALLOC_INVN_DTL.ALLOC_INVN_DTL_ID,
ALLOC_INVN_DTL.WHSE,
ALLOC_INVN_DTL.SKU_ID,
ALLOC_INVN_DTL.INVN_TYPE,
ALLOC_INVN_DTL.PROD_STAT,
ALLOC_INVN_DTL.BATCH_NBR,
ALLOC_INVN_DTL.SKU_ATTR_1,
ALLOC_INVN_DTL.SKU_ATTR_2,
ALLOC_INVN_DTL.SKU_ATTR_3,
ALLOC_INVN_DTL.SKU_ATTR_4,
ALLOC_INVN_DTL.SKU_ATTR_5,
ALLOC_INVN_DTL.CNTRY_OF_ORGN,
ALLOC_INVN_DTL.ALLOC_INVN_CODE,
ALLOC_INVN_DTL.CNTR_NBR,
ALLOC_INVN_DTL.TRANS_INVN_TYPE,
ALLOC_INVN_DTL.PULL_LOCN_ID,
ALLOC_INVN_DTL.INVN_NEED_TYPE,
ALLOC_INVN_DTL.TASK_TYPE,
ALLOC_INVN_DTL.TASK_PRTY,
ALLOC_INVN_DTL.TASK_BATCH,
ALLOC_INVN_DTL.ALLOC_UOM,
ALLOC_INVN_DTL.ALLOC_UOM_QTY,
ALLOC_INVN_DTL.QTY_PULLD,
ALLOC_INVN_DTL.FULL_CNTR_ALLOCD,
ALLOC_INVN_DTL.ORIG_REQMT,
ALLOC_INVN_DTL.QTY_ALLOC,
ALLOC_INVN_DTL.DEST_LOCN_ID,
ALLOC_INVN_DTL.TASK_GENRTN_REF_CODE,
ALLOC_INVN_DTL.TASK_GENRTN_REF_NBR,
ALLOC_INVN_DTL.TASK_CMPL_REF_CODE,
ALLOC_INVN_DTL.TASK_CMPL_REF_NBR,
ALLOC_INVN_DTL.ERLST_START_DATE_TIME,
ALLOC_INVN_DTL.LTST_START_DATE_TIME,
ALLOC_INVN_DTL.LTST_CMPL_DATE_TIME,
ALLOC_INVN_DTL.NEED_ID,
ALLOC_INVN_DTL.STAT_CODE,
ALLOC_INVN_DTL.CREATE_DATE_TIME,
ALLOC_INVN_DTL.MOD_DATE_TIME,
ALLOC_INVN_DTL.USER_ID,
ALLOC_INVN_DTL.PKT_CTRL_NBR,
ALLOC_INVN_DTL.REQD_INVN_TYPE,
ALLOC_INVN_DTL.REQD_PROD_STAT,
ALLOC_INVN_DTL.REQD_BATCH_NBR,
ALLOC_INVN_DTL.REQD_SKU_ATTR_1,
ALLOC_INVN_DTL.REQD_SKU_ATTR_2,
ALLOC_INVN_DTL.REQD_SKU_ATTR_3,
ALLOC_INVN_DTL.REQD_SKU_ATTR_4,
ALLOC_INVN_DTL.REQD_SKU_ATTR_5,
ALLOC_INVN_DTL.REQD_CNTRY_OF_ORGN,
ALLOC_INVN_DTL.PKT_SEQ_NBR,
ALLOC_INVN_DTL.CARTON_NBR,
ALLOC_INVN_DTL.CARTON_SEQ_NBR,
ALLOC_INVN_DTL.PIKR_NBR,
ALLOC_INVN_DTL.PULL_LOCN_SEQ_NBR,
ALLOC_INVN_DTL.DEST_LOCN_SEQ_NBR,
ALLOC_INVN_DTL.TASK_CMPL_REF_NBR_SEQ,
ALLOC_INVN_DTL.SUBSTITUTION_FLAG,
ALLOC_INVN_DTL.MISC_ALPHA_FIELD_1,
ALLOC_INVN_DTL.MISC_ALPHA_FIELD_2,
ALLOC_INVN_DTL.MISC_ALPHA_FIELD_3,
ALLOC_INVN_DTL.CD_MASTER_ID
FROM ALLOC_INVN_DTL
WHERE ( ( ( ( ( ( ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1 )
AND ( ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2 ) )
AND ( ALLOC_INVN_DTL.SKU_ID = :3 ) )
AND ( ALLOC_INVN_DTL.CNTR_NBR = :4 ) )
AND ( ALLOC_INVN_DTL.STAT_CODE < 1 ) )
AND ( ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL ) )
Now, since you are only selecting from one table, there is no need to clutter up the query by qualifying every column name with the table name. Let's simplify with this:
SELECT
ALLOC_INVN_DTL_ID,
WHSE,
SKU_ID,
INVN_TYPE,
PROD_STAT,
BATCH_NBR,
SKU_ATTR_1,
SKU_ATTR_2,
SKU_ATTR_3,
SKU_ATTR_4,
SKU_ATTR_5,
CNTRY_OF_ORGN,
ALLOC_INVN_CODE,
CNTR_NBR,
TRANS_INVN_TYPE,
PULL_LOCN_ID,
INVN_NEED_TYPE,
TASK_TYPE,
TASK_PRTY,
TASK_BATCH,
ALLOC_UOM,
ALLOC_UOM_QTY,
QTY_PULLD,
FULL_CNTR_ALLOCD,
ORIG_REQMT,
QTY_ALLOC,
DEST_LOCN_ID,
TASK_GENRTN_REF_CODE,
TASK_GENRTN_REF_NBR,
TASK_CMPL_REF_CODE,
TASK_CMPL_REF_NBR,
ERLST_START_DATE_TIME,
LTST_START_DATE_TIME,
LTST_CMPL_DATE_TIME,
NEED_ID,
STAT_CODE,
CREATE_DATE_TIME,
MOD_DATE_TIME,
USER_ID,
PKT_CTRL_NBR,
REQD_INVN_TYPE,
REQD_PROD_STAT,
REQD_BATCH_NBR,
REQD_SKU_ATTR_1,
REQD_SKU_ATTR_2,
REQD_SKU_ATTR_3,
REQD_SKU_ATTR_4,
REQD_SKU_ATTR_5,
REQD_CNTRY_OF_ORGN,
PKT_SEQ_NBR,
CARTON_NBR,
CARTON_SEQ_NBR,
PIKR_NBR,
PULL_LOCN_SEQ_NBR,
DEST_LOCN_SEQ_NBR,
TASK_CMPL_REF_NBR_SEQ,
SUBSTITUTION_FLAG,
MISC_ALPHA_FIELD_1,
MISC_ALPHA_FIELD_2,
MISC_ALPHA_FIELD_3,
CD_MASTER_ID
FROM ALLOC_INVN_DTL
WHERE ( ( ( ( ( ( TASK_CMPL_REF_CODE = :1 )
AND ( TASK_CMPL_REF_NBR = :2 ) )
AND ( SKU_ID = :3 ) )
AND ( CNTR_NBR = :4 ) )
AND ( STAT_CODE < 1 ) )
AND ( PULL_LOCN_ID IS NULL ) )
And finally, your WHERE clause is a simple string of AND conditions; there was no need to complicate it with all of the nested parentheses. Much simpler:
WHERE ALLOC_INVN_DTL.TASK_CMPL_REF_CODE = :1
AND ALLOC_INVN_DTL.TASK_CMPL_REF_NBR = :2
AND ALLOC_INVN_DTL.SKU_ID = :3
AND ALLOC_INVN_DTL.CNTR_NBR = :4
AND ALLOC_INVN_DTL.STAT_CODE < 1
AND ALLOC_INVN_DTL.PULL_LOCN_ID IS NULL
None of the above makes a whit of difference in your query performance, but if you worked in my office, I would make you clean it up before I even attempted to do a performance analysis.
Edited by: EdStevens on Mar 26, 2011 2:14 PM -
How do I avoid ORA-01473 when querying hierarchically on tables with VPD predicates
My question is how to circumvent what seems to be a limitation in ORACLE, if at all possible. Please read on.
When using VPD (Virtual Private Database) predicates on a table and performing a hierarchical query on that table, I get the following error message:
ORA-01473: cannot have subqueries in CONNECT BY CLAUSE
My query may look like the following:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
As my predicate contains a query in itself, I suspect that the implicit augmentation of the predicate results in a query that looks like:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
AND OWNER IN (SELECT OWNER FROM TABLE2 WHERE ...)
at least, when executing a query like the one above (with the explicit predicate) I get the identical error message.
So my question is:
Do you know of any way to force the predicate to augment itself onto the WHERE clause? I would be perfectly happy with a query that looks like:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
WHERE OWNER IN (SELECT OWNER FROM TABLE2 WHERE ...)
or do you know of any fix/patch/release to ORACLE that allows you to include subqueries in the CONNECT BY clause and eliminates the error message?
The WHERE clause or AND clause applies to the line directly above it. Please see the examples of valid and invalid queries below, which differ only in the placement of the WHERE or AND clause. If this is not sufficient, please provide some sample data and desired output to clarify what you need.
-- valid:
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 WHERE deptno IN
6 (SELECT deptno
7 FROM dept
8 WHERE dname = 'RESEARCH')
9 START WITH mgr = 7566
10 CONNECT BY PRIOR empno = mgr
11 /
EMPNO MGR DEPTNO
7788 7566 20
7876 7788 20
7902 7566 20
7369 7902 20
-- invalid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 CONNECT BY PRIOR empno = mgr
7 WHERE deptno IN
8 (SELECT deptno
9 FROM dept
10 WHERE dname = 'RESEARCH')
11 /
WHERE deptno IN
ERROR at line 7:
ORA-00933: SQL command not properly ended
-- valid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 AND deptno IN
7 (SELECT deptno
8 FROM dept
9 WHERE dname = 'RESEARCH')
10 CONNECT BY PRIOR empno = mgr
11 /
EMPNO MGR DEPTNO
7788 7566 20
7876 7788 20
7902 7566 20
7369 7902 20
-- invalid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 CONNECT BY PRIOR empno = mgr
7 AND deptno IN
8 (SELECT deptno
9 FROM dept
10 WHERE dname = 'RESEARCH')
11 /
FROM emp
ERROR at line 4:
ORA-01473: cannot have subqueries in CONNECT BY clause
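If the database version allows it, a recursive CTE sidesteps the problem entirely: the hierarchy is walked first and the subquery predicate is applied in an ordinary WHERE clause outside the recursion, so no subquery ends up inside CONNECT BY (Oracle has supported recursive subquery factoring since 11g Release 2). This is a different technique from CONNECT BY, sketched here in Python's sqlite3 with invented emp rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, mgr INTEGER, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(7566, 7839, 20), (7788, 7566, 20), (7876, 7788, 20),
                  (7902, 7566, 20), (7369, 7902, 20), (7499, 7698, 30)])

# Walk the hierarchy first, then filter: the IN-subquery sits in a plain
# WHERE clause outside the recursion. (The subquery below is a contrived
# stand-in for the TABLE2/dept lookup in the question.)
rows = conn.execute("""
    WITH RECURSIVE subtree(empno, mgr, deptno) AS (
        SELECT empno, mgr, deptno FROM emp WHERE mgr = 7566
        UNION ALL
        SELECT e.empno, e.mgr, e.deptno
          FROM emp AS e JOIN subtree AS s ON e.mgr = s.empno)
    SELECT empno, mgr, deptno FROM subtree
     WHERE deptno IN (SELECT DISTINCT deptno FROM emp WHERE deptno = 20)
     ORDER BY empno""").fetchall()
print(rows)
# [(7369, 7902, 20), (7788, 7566, 20), (7876, 7788, 20), (7902, 7566, 20)]
```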