Problem with 'Connect By Level'
If I issue a query like:
select level x from dual connect by level < 30;
Then I only get a maximum of ten rows back.
If I wrap the query like this:
select * from (select level x from dual connect by level < 30);
Then I get the full rowset back.
Version Details:
select * from v$version;
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
PL/SQL Release 9.2.0.4.0 - Production
CORE 9.2.0.3.0 Production
TNS for Linux: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production
Sql Developer Version 1.0.0.15.27
Build MAIN-15.27
This was raised on AskTom at some point:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:40476301944675
The behaviour changed depending on the client, and there was a lot of debate where the fault lay (SQL*Plus, OCI, DB server)
Don't know whether it ever got resolved in a patchset.
Maybe Note 185438.1 is relevant (don't have metalink access so can't check).
"The first statement only works correctly in 10g. "
There's nothing in the documentation supporting this construct, and the documentation indicates that any hierarchical query (i.e. one with a CONNECT BY) should have a PRIOR operator.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/queries003.htm#i2053935
As such, I'd argue that the 'correct' (or at least documented) behaviour would be an error.
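If you are stuck on a client that exhibits the truncation, the inline-view form shown above is the usual workaround. Two equivalent sketches (the row limit of 30 is just illustrative):

```sql
-- Wrap the generator in an inline view, as above:
select * from (select level x from dual connect by level < 30);

-- Or sidestep CONNECT BY LEVEL entirely and lean on ROWNUM
-- against a sufficiently large dictionary view:
select rownum x from all_objects where rownum < 30;
```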
Message was edited by:
g.myers
Similar Messages
-
Performance issue with connect by level query
Hi, I have a problem with connect by level in Oracle.
My table is :
J_USER_CALENDAR
USER_NAME FROM_DATE TO_DATE COMMENTS
Uma Shankar 2-Nov-09 5-Nov-09 Comment1
Veera 11-Nov-09 13-Nov-09 Comment2
Uma Shankar 15-Dec-09 17-Dec-09 Commnet3
Vinod 20-Oct-09 21-Oct-09 Comments4
The above table is the user leave calendar.
Now I need to display the users who are on leave between 01-Nov-2009 and 30-Nov-2009.
The output should look like:
USER_NAME FROM_DATE COMMENTS
Uma Shankar 2-Nov-09 Comment1
Uma Shankar 3-Nov-09 Comment1
Uma Shankar 4-Nov-09 Comment1
Uma Shankar 5-Nov-09 Comment1
Veera 11-Nov-09 Comment2
Veera 12-Nov-09 Comment2
Veera 13-Nov-09 Comment2
For this I have tried the following query, but it is taking too long to execute.
select FROM_DATE, user_name, comments
from (SELECT distinct FROM_DATE, user_name, comments
      FROM (SELECT (LEVEL) + FROM_DATE - 1 FROM_DATE, TO_DATE,
                   FIRST_NAME || ' ' || LAST_NAME user_name, COMMENTS
            FROM J_USER_CALENDAR
            where J_USER_CALENDAR.IS_DELETED = 0
            CONNECT BY LEVEL <= TO_DATE - FROM_DATE + 1) a)
where (FROM_DATE = '01-Nov-2009' or FROM_DATE = '30-Nov-2009'
       or FROM_DATE between '01-Nov-2009' and '30-Nov-2009')
order by from_Date, lower(user_name)
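As an aside, the filter above compares the DATE column to character literals, so it depends on implicit conversion and the session's NLS date format. A sketch of the same filter with explicit conversion (format mask inferred from the literals):

```sql
where from_date between to_date('01-Nov-2009', 'DD-Mon-YYYY')
                    and to_date('30-Nov-2009', 'DD-Mon-YYYY')
```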
Please help me.
Thanks in advance.
Regards,
Phanikanth
I have not attempted to analyze your SQL statement.
Here is a test set up:
CREATE TABLE T1(
USERNAME VARCHAR2(30),
FROM_DATE DATE,
TO_DATE DATE,
COMMENTS VARCHAR2(100));
INSERT INTO T1 VALUES ('Uma Shankar', '02-Nov-09','05-Nov-09','Comment1');
INSERT INTO T1 VALUES ('Veera','11-Nov-09','13-Nov-09','Comment2');
INSERT INTO T1 VALUES ('Uma Shankar','15-Dec-09','17-Dec-09','Commnet3');
INSERT INTO T1 VALUES ('Vinod','20-Oct-09','21-Oct-09','Comments4');
INSERT INTO T1 VALUES ('Mo','20-Oct-09','05-NOV-09','Comments4');
COMMIT;
Note that I included one additional row, where the person starts their vacation in the previous month and ends in the month of November.
You could approach the problem like this:
Assume that you would like to list all of the days of a particular month:
SELECT
TO_DATE('01-NOV-2009','DD-MON-YYYY')+(ROWNUM-1) MONTH_DAY
FROM
DUAL
CONNECT BY
LEVEL<=ADD_MONTHS(TO_DATE('01-NOV-2009','DD-MON-YYYY'),1)-TO_DATE('01-NOV-2009','DD-MON-YYYY');
Note that the above attempts to calculate the number of days in the month of November - if it is known that the month has a particular number of days, 30 for instance, you could rewrite the CONNECT BY clause like this:
CONNECT BY
LEVEL<=30
Now, we need to pick up those rows of interest from the table:
SELECT
*
FROM
T1 T
WHERE
(T.FROM_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY')
OR T.TO_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY'));
USERNAME FROM_DATE TO_DATE COMMENTS
Uma Shankar 02-NOV-09 05-NOV-09 Comment1
Veera 11-NOV-09 13-NOV-09 Comment2
Mo 20-OCT-09 05-NOV-09 Comments4
If we then join the two resultsets, we have the following query:
SELECT
T.*,
V.MONTH_DAY
FROM
T1 T,
(SELECT
TO_DATE('01-NOV-2009','DD-MON-YYYY')+(ROWNUM-1) MONTH_DAY
FROM
DUAL
CONNECT BY
LEVEL<=ADD_MONTHS(TO_DATE('01-NOV-2009','DD-MON-YYYY'),1)-TO_DATE('01-NOV-2009','DD-MON-YYYY')) V
WHERE
(T.FROM_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY')
OR T.TO_DATE BETWEEN TO_DATE('01-NOV-2009','DD-MON-YYYY') AND TO_DATE('30-NOV-2009','DD-MON-YYYY'))
AND V.MONTH_DAY BETWEEN T.FROM_DATE AND T.TO_DATE
ORDER BY
USERNAME,
MONTH_DAY;
USERNAME FROM_DATE TO_DATE COMMENTS MONTH_DAY
Mo 20-OCT-09 05-NOV-09 Comments4 01-NOV-09
Mo 20-OCT-09 05-NOV-09 Comments4 02-NOV-09
Mo 20-OCT-09 05-NOV-09 Comments4 03-NOV-09
Mo 20-OCT-09 05-NOV-09 Comments4 04-NOV-09
Mo 20-OCT-09 05-NOV-09 Comments4 05-NOV-09
Uma Shankar 02-NOV-09 05-NOV-09 Comment1 02-NOV-09
Uma Shankar 02-NOV-09 05-NOV-09 Comment1 03-NOV-09
Uma Shankar 02-NOV-09 05-NOV-09 Comment1 04-NOV-09
Uma Shankar 02-NOV-09 05-NOV-09 Comment1 05-NOV-09
Veera 11-NOV-09 13-NOV-09 Comment2 11-NOV-09
Veera 11-NOV-09 13-NOV-09 Comment2 12-NOV-09
Veera 11-NOV-09 13-NOV-09 Comment2 13-NOV-09
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
-
ORA-01446 Tabular Form with CONNECT BY LEVEL <= 2
I created a tabular form where I would like the last 2 lines to be blank so the user can enter new rows immediately (e.g., the first time when no data is found).
The following is the sql for the tabular form:
select
"ID",
"REQ_ID",
"QUANTITY",
"FRAME_SIZE",
"FRAME_TYPE",
"PROTECTIVE_COVERING",
"MATT",
"MATT_COLOR"
from "#OWNER#"."CREATIVE_SVC_DESIGN_FRAMING"
union all
select
null id,
null req_id,
null quantity,
null frame_size,
null frame_type,
null protective_covering,
null matt,
null matt_color
from dual
connect by level <= 2
I get the following error:
failed to parse SQL query:
ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT, GROUP BY, etc.
The query is not accessing ROWID from a view at all. And, as you can see, it is not using a DISTINCT, GROUP BY, etc..
My APEX is Application Express 4.0.2.00.07
My Database is 10g (10.2.0.5.0)
Please help!!
Robert
http://apexjscss.blogspot.com
Robert,
I wish I had a better explanation for you, but I know something new with 4.1 broke this union trick for tabular forms. I had to make a dynamic action that runs on page load and calls the addRow() javascript function instead. I think it has something to do with tabular form validation but I am not sure.
Cheers,
Tyson Jouglet
-
Hi,
My requirement goes something like this: I would like to generate the dates from hiredate to sysdate with a gap of one month. The code which I have written works fine for a single empno. I want to generate the dates from hiredate to sysdate with a gap
of one month for all the employees (empno). Commenting out the code for where empno = 7369 does not work, and the system shows a lot of CPU usage.
Assume hiredate to be to_date('01-jan-10') for empno = 7369
sample codes goes something like this:
select ADD_MONTHS(trunc(to_date('01-jan-10')),LEVEL-1)
from dual
connect by level <= trunc((sysdate-to_Date('01-jan-10'))/30 )
Code for the emp table:
select ADD_MONTHS(trunc(hiredate),LEVEL-1)
from employee
where empno = 7369
connect by level <= trunc((sysdate-trunc(hiredate))/30 )
Please advise.
Based on the emp table, something like this...
SQL> ed
Wrote file afiedt.buf
1 with emps as (select * from emp where deptno = 10)
2 ,uppr as (select to_date('31/12/1982','DD/MM/YYYY') as dt from dual)
3 -- END OF TEST DATA
4 select empno, ename, add_months(hiredate,rn-1) as dt
5 from emps, uppr
6 ,(select rownum rn from dual connect by rownum <= (select max(dt-hiredate) from emps,uppr))
7 where add_months(hiredate,rn-1) <= uppr.dt
8* order by empno, rn
SQL> /
EMPNO ENAME DT
7782 CLARK 09/06/1981 00:00:00
7782 CLARK 09/07/1981 00:00:00
7782 CLARK 09/08/1981 00:00:00
7782 CLARK 09/09/1981 00:00:00
7782 CLARK 09/10/1981 00:00:00
7782 CLARK 09/11/1981 00:00:00
7782 CLARK 09/12/1981 00:00:00
7782 CLARK 09/01/1982 00:00:00
7782 CLARK 09/02/1982 00:00:00
7782 CLARK 09/03/1982 00:00:00
7782 CLARK 09/04/1982 00:00:00
7782 CLARK 09/05/1982 00:00:00
7782 CLARK 09/06/1982 00:00:00
7782 CLARK 09/07/1982 00:00:00
7782 CLARK 09/08/1982 00:00:00
7782 CLARK 09/09/1982 00:00:00
7782 CLARK 09/10/1982 00:00:00
7782 CLARK 09/11/1982 00:00:00
7782 CLARK 09/12/1982 00:00:00
7839 KING 17/11/1981 00:00:00
7839 KING 17/12/1981 00:00:00
7839 KING 17/01/1982 00:00:00
7839 KING 17/02/1982 00:00:00
7839 KING 17/03/1982 00:00:00
7839 KING 17/04/1982 00:00:00
7839 KING 17/05/1982 00:00:00
7839 KING 17/06/1982 00:00:00
7839 KING 17/07/1982 00:00:00
7839 KING 17/08/1982 00:00:00
7839 KING 17/09/1982 00:00:00
7839 KING 17/10/1982 00:00:00
7839 KING 17/11/1982 00:00:00
7839 KING 17/12/1982 00:00:00
7934 MILLER 23/01/1982 00:00:00
7934 MILLER 23/02/1982 00:00:00
7934 MILLER 23/03/1982 00:00:00
7934 MILLER 23/04/1982 00:00:00
7934 MILLER 23/05/1982 00:00:00
7934 MILLER 23/06/1982 00:00:00
7934 MILLER 23/07/1982 00:00:00
7934 MILLER 23/08/1982 00:00:00
7934 MILLER 23/09/1982 00:00:00
7934 MILLER 23/10/1982 00:00:00
7934 MILLER 23/11/1982 00:00:00
7934 MILLER 23/12/1982 00:00:00
45 rows selected.
SQL>
(I limited the upper date to 31/12/1982 rather than sysdate as I didn't want too much data... :D)
-
I have recently bought the Linksys Wireless-G Range Expander (v.3), but I am having problems with it. I can't get it to stay connected with my wireless modem. The range expander 'link' LED doesn't stay blue; it changes every 5 minutes to red and then blue, and so on. I have tried several times to reset it and configure it again, but the same thing happens. I followed all the instructions in the booklet but nothing seems to work. I checked again and again that the range expander is in the wireless range, and checked that it had the same SSID and channel, but it's still not working.
This link will give you the detailed procedure.
-
Could someone help with connect by level? I am trying to get the week ending date for all the projects in the table.
select pid , enddate + 7*level as end_date , level
from
(select enddate /* some date logic from diff tables */, t1.pid
from
table1 t1 , table2 t2
where
conditions...
t1.id = t2.id and t1.pid = '123'
) connect by level <= 4
This logic works for a single project.
If I do the same thing for multiple projects, it doesn't seem to work.
for Single Project result is like is
ProjectId Date
123 17-AUG-08
123 24-AUG-08
123 31-AUG-08
123 07-SEP-08
Samething for multiple projects, it results like this
(I use the same query but don't send the id there)
ProjectId Date level
123 17-AUG-08 1
123 24-AUG-08 2
123 31-AUG-08 3
123 07-SEP-08 4
125 07-sep-08 4
126 09-sep-08 4
Could someone help me with how to achieve this for multiple projects?
I am not sure what you are trying to do, but the week ending date for a given date can be calculated using the NEXT_DAY or TRUNC functions. The issue is "week ending" itself. It is NLS dependent. If you want to get the client NLS specific "week ending" date, use:
next_day(given_date-1,7)
SQL> select next_day(sysdate-1,7) week_ending_date,
2 to_char(next_day(sysdate-1,7),'Day') week_ending_day
3 from dual
4 /
WEEK_ENDI WEEK_ENDI
20-SEP-08 Saturday
SQL> alter session set nls_territory=france;
Session altered.
SQL> select next_day(sysdate-1,7) week_ending_date,
2 to_char(next_day(sysdate-1,7),'Day') week_ending_day
3 from dual
4 /
WEEK_END WEEK_ENDI
21/09/08 Sunday
SQL>
As you can see, next_day(given_date-1,7) will return the nearest Saturday for a client in the US and Sunday for a client in France. If you want an NLS independent solution you could use the function TRUNC with the IW format. An ISO week always starts on Monday. So if you consider Saturday as the week ending day use:
TRUNC(given_date,'IW') + 5
SQL> alter session set nls_territory=america;
Session altered.
SQL> select trunc(sysdate,'IW') + 5 week_ending_date,
2 to_char(trunc(sysdate,'IW') + 5,'Day') week_ending_day
3 from dual
4 /
WEEK_ENDI WEEK_ENDI
20-SEP-08 Saturday
SQL> alter session set nls_territory=france;
Session altered.
SQL> select trunc(sysdate,'IW') + 5 week_ending_date,
2 to_char(trunc(sysdate,'IW') + 5,'Day') week_ending_day
3 from dual
4 /
WEEK_END WEEK_ENDI
20/09/08 Saturday
SQL>
If you consider Sunday as the week ending day use:
SQL> alter session set nls_territory=america;
Session altered.
SQL> select trunc(sysdate,'IW') + 6 week_ending_date,
2 to_char(trunc(sysdate,'IW') + 6,'Day') week_ending_day
3 from dual
4 /
WEEK_ENDI WEEK_ENDI
21-SEP-08 Sunday
SQL> alter session set nls_territory=france;
Session altered.
SQL> select trunc(sysdate,'IW') + 6 week_ending_date,
2 to_char(trunc(sysdate,'IW') + 6,'Day') week_ending_day
3 from dual
4 /
WEEK_END WEEK_ENDI
21/09/08 Sunday
SQL>
As you can see, TRUNC(given_date,'IW') + N is NLS independent.
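Coming back to the original multi-project question: a bare CONNECT BY LEVEL against a multi-row source multiplies rows across projects, which is why the single-project query behaved and the multi-project one did not. One common fix is to generate the four week offsets once and cross join them - a sketch only, with the inner query abbreviated to the names used in the question:

```sql
select t.pid, t.enddate + 7*g.n as end_date
from   (select t1.pid, enddate  -- original joins/conditions go here
        from table1 t1, table2 t2
        where t1.id = t2.id) t
cross join (select level n from dual connect by level <= 4) g
order by t.pid, g.n;
```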
SY.
-
Connect by level - with multiple input rows
Very simplified, I have this table - it has a "table" like structure of varying length.
I need x output rows for each input row (not using a pipelined function or PL/SQL).
With only one row as input to the "connect by" - it works, of course.
drop table test3;
create table test3 (id number, tekst varchar2(20));
insert into test3 values (1, 'acbdef');
insert into test3 values (2, '123');
insert into test3 values (3, 'HUUGHFTT');
insert into test3 values (4, 'A');
insert into test3 values (5, 'AKAJKSHKJASHKAJSHJKJ');
commit;
with tal as
(select * from
(select a.*, rownum rn
from test3 a)
where rn < 2)
select tekst, level ,
substr(tekst,(level-1)*1+1, 1) content
from tal
connect by level < length(tekst)
;
How do I achieve the same thing for multiple input rows?
I know I can make it in PL/SQL using either plain PL/SQL or a pipelined function, but I prefer clean SQL if possible.
I have tried to do it with a cross join of test3 and (select various values from dual from the test3 table), and other versions, but all with syntax errors:
with tal as
select * from
(select a.*, rownum rn
from test3 a)
where rn < 3)
select * from test3 cross join table
select tekst, level ,
substr(tekst,(level-1)*1+1, 1) content
from dual
connect by level < length(tekst)
;
Oracle version will be 10.2 and 11+
I think this is kind of what you're looking for:
with tal as
( select 1 id, 'acbdef' tekst from dual union
select 2 , '123' from dual union
select 3 , 'HUUGHFTT' from dual union
select 4 , 'A' from dual )
select id, tekst, level, substr(tekst,(level-1)*1+1, 1) content
from tal
connect by ( level <= length(tekst)
and prior id = id
and prior dbms_random.value is not null )
ID TEKST LEVEL CONTENT
1 acbdef 1 a
1 acbdef 2 c
1 acbdef 3 b
1 acbdef 4 d
1 acbdef 5 e
1 acbdef 6 f
2 123 1 1
2 123 2 2
2 123 3 3
3 HUUGHFTT 1 H
3 HUUGHFTT 2 U
3 HUUGHFTT 3 U
3 HUUGHFTT 4 G
3 HUUGHFTT 5 H
3 HUUGHFTT 6 F
3 HUUGHFTT 7 T
3 HUUGHFTT 8 T
4 A 1 A
-
Connect by level with regular expression is consuming more time,
Oracle 11g R2,
Dear EXPERTS/GURUS,
i have a table with 4 columns, say
ID number,OBJECT_NAME varchar2,OBJECT_MANUFACTURER varchar2,REGIONS varchar2.In the column REGIONS i have information like EMEA,AMERICA,CCC, etc..
The problem is that this column holds redundant copies of the same data, like EMEA,AMERICA,CCC,EMEA,AMERICA,CCC,EMEA,AMERICA,CCC,EMEA,AMERICA,CCC
All I want to do is remove that redundancy and make it one list, like EMEA,AMERICA,CCC.
If I do a query like
select distinct regexp_substr(REGIONS,'[[:alpha:]]+',1,level), ID, OBJECT_NAME, OBJECT_MANUFACTURER
from table_name
connect by level <= regexp_count(REGIONS,'[[:alpha:]]+');
then I can get the data as I expected, with distinct REGION information. But the heck is, this column REGION holds the same copy of the data 300 times, and moreover the table has 10000 records, so the query is not completing at all; even when I tried to limit the query to 1000 rows with where rownum<1001, the query was still running for more than 30 minutes.
I need some query which does the same as above, but with an alternative, faster approach.
I need some query, which do same like above, but with alternative, faster approach.Sounds like a great time to revisit the data model and fix the design.
With a sub-optimal design, there's only so much performance you can coax out of anything; at some point it becomes necessary to end the madness and address the source of the problem. Perhaps you've hit that point in time?
-
Using CONNECT BY LEVEL with For Loop Doesn't Work
The procedure listed below inserts only one record into the table, whereas I need 10 records inserted.
This is just a test procedure.
CREATE OR REPLACE PROCEDURE P_TEST
AS
BEGIN
FOR I IN (SELECT LEVEL num FROM dual CONNECT BY LEVEL <= 10)
LOOP
INSERT INTO TEMP_VMS VALUES(I.num);
END LOOP;
END;
/
Salim Chelabi wrote:
Or with 9ir2
INSERT INTO TEMP_VMS
SELECT COLUMN_VALUE
FROM TABLE (SYS.dbms_debug_vc2coll (24, 34, 25));
SELECT *
FROM TABLE (SYS.dbms_debug_vc2coll (24, 34, 25));
COLUMN_VALUE
24
34
25
3 rows selected.
http://laurentschneider.com/wordpress/2007/12/predefined-collections.html
That doesn't split strings...
SQL> ed
Wrote file afiedt.buf
1 with t as (select 'a,b,c,d,e' str from dual)
2 --
3 SELECT *
4* FROM t, TABLE (SYS.dbms_debug_vc2coll (t.str))
SQL> /
STR
COLUMN_VALUE
a,b,c,d,e
a,b,c,d,e
... it only defines a table of values
-
SQL Query help ( On connect By level clause)
Hi all,
I have developed this query with data in a WITH clause.
With dat As
(select '@AAA @SSS @DDD' col1 from dual union all
select '@ZZZ @XXX @TTT @RRR @ZZA' col1 from dual)
Select regexp_substr( col1 , '[^@][A-Z]+',1,level) Show from dat
connect by level <= regexp_count(col1, '@');
Current output :-
SHOW
AAA
SSS
DDD
RRR
ZZA
TTT
RRR
ZZA
XXX
DDD
RRR
SHOW
ZZA
TTT
RRR
ZZA
. . .
The 1st row comes out fine, but the next row's data is getting duplicated, and the total record count = 30. I tried some things but they didn't work.
Expected output :-
SHOW
AAA
SSS
DDD
ZZZ
XXX
TTT
RRR
ZZA
I need some change in my query and I am not able to find it. Can anybody add on to that, or provide some different solution?
Thanks!
Ashutosh
Hi,
When you use something like "CONNECT BY LEVEL <= x", then at least one of the following must be true:
(a) the table has no more than 1 row
(b) there are other conditions in the CONNECT BY clause, or
(c) you know what you are doing.
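Case (b) is the usual cure when the source table has more than one row: tie each generated level to its own row and defeat cycle detection with a non-deterministic expression. A sketch against the dat data from the question (the token pattern is simplified here):

```sql
select regexp_substr(col1, '[^@ ]+', 1, level) show
from   dat
connect by level <= regexp_count(col1, '@')
       and prior col1 = col1              -- stay on the same source row
       and prior sys_guid() is not null;  -- defeat cycle detection
```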
To help see why, run this query
SELECT SYS_CONNECT_BY_PATH (dname, '/') AS path
, LEVEL
FROM scott.dept
CONNECT BY LEVEL <= 3
;
and study the results:
PATH LEVEL
/ACCOUNTING 1
/ACCOUNTING/ACCOUNTING 2
/ACCOUNTING/ACCOUNTING/ACCOUNTING 3
/ACCOUNTING/ACCOUNTING/RESEARCH 3
/ACCOUNTING/ACCOUNTING/SALES 3
/ACCOUNTING/ACCOUNTING/OPERATIONS 3
/ACCOUNTING/RESEARCH 2
/ACCOUNTING/RESEARCH/ACCOUNTING 3
/ACCOUNTING/RESEARCH/RESEARCH 3
/ACCOUNTING/RESEARCH/SALES 3
/ACCOUNTING/RESEARCH/OPERATIONS 3
/ACCOUNTING/SALES 2
/ACCOUNTING/SALES/ACCOUNTING 3
84 rows selected.
-
Connect by level query is taking too long to run
Hello,
I have a query that returns the quarters (YYYYQ) between a begin and end date for a specific id, built with a connect by level clause, but the query is running too long. I have used explain plan to see what the query is doing, but there is nothing silly to see, just a full table scan, with low costs.
This is the query:
select to_char(add_months( cpj.crpj_start_date,3*(level - 1)),'YYYYQ') as sales_quarter
, cpj.crpj_id as crpj_id
from mv_gen_cra_projects cpj
where cpj.crpj_start_date >= to_date('01/01/2009','mm/dd/yyyy')
and cpj.crpj_start_date <= cpj.crpj_end_date
and cpj.crpj_routing_type = 'A'
and ( cpj.crpj_multi_artist_ind = 'N'
or cpj.crpj_multi_artist_ind is null)
connect by level <= 1 + ceil(months_between(cpj.crpj_end_date,cpj.crpj_start_date)/3);
The result has to be like this:
SALES_QUARTER CRPJ_ID
20091 100
20092 100
20093 100
20094 100
20101 100
20102 100
Can anyone help me out with this?
"but no silly things to see, just a full table scan, with low costs"
Well, maybe an index scan would be faster?
However:
You will need to provide us some more details, like:
- database version (the result of: SQL> select * from v$version;)
- post the explain plan output (put the code tag before and after it, so indentation and formatting are maintained; see the [FAQ|http://forums.oracle.com/forums/help.jspa] for more explanation regarding tags)
- what are your optimizer settings (the result of: SQL> show parameter optimizer)
- if applicable: are your table statistics up to date?
- mv_gen_cra_projects is a materialized view perhaps?
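One more thing worth checking while gathering those details: a bare CONNECT BY LEVEL against a multi-row table (rather than DUAL) recombines every row with every other row at each level, which by itself can explain a long runtime. A sketch of the usual per-row restriction, reusing the query's own columns (the remaining filters are elided):

```sql
select to_char(add_months(cpj.crpj_start_date, 3*(level - 1)), 'YYYYQ') as sales_quarter
     , cpj.crpj_id as crpj_id
from   mv_gen_cra_projects cpj
where  cpj.crpj_start_date >= to_date('01/01/2009','mm/dd/yyyy')
       -- ... remaining filters from the original query ...
connect by level <= 1 + ceil(months_between(cpj.crpj_end_date, cpj.crpj_start_date)/3)
       and prior cpj.crpj_id = cpj.crpj_id  -- one hierarchy per row
       and prior sys_guid() is not null;    -- defeat cycle detection
```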
Edited by: hoek on Jan 26, 2010 10:50 AM
-
SQL with connect by prior running for a long time
Hi,
We are using Oracle 10g. Below is a cursor SQL which has performance issues. The pAccountid is passed in from the output of a different cursor, but this cursor SQL runs for a long time. Could you please help me tune this SQL? I believe the subquery with connect by prior is causing the trouble.
The TRXNS is a huge table which is not partitioned. The query is forced to use the index on the accountid of the TRXNS table.
The accountlink table has 20,000 records and the TRXNStrack table has 10,000 records in total.
This sql executes for 200,000 pAccountids and runs for more than 8 hours.
SELECT /*+ INDEX(T TRXNS_ACCOUNTID_NIDX) */ AL.FROMACCOUNTID oldaccountid ,
A.ACCOUNTNUM oldaccountnum,
T.TRXNSID,
T.TRXNSTYPEID,
T.DESCRIPTION ,
T.postdt,
T.TRXNSAMT
FROM
ACCOUNTLINK AL,
TRXNS T,
ACCOUNT A
WHERE AL.TOACCOUNTID IN
(SELECT TOACCOUNTID FROM ACCOUNTLINK START WITH TOACCOUNTID = pAccountid
CONNECT BY PRIOR FROMACCOUNTID = TOACCOUNTID)
AND AL.FROMACCOUNTID = T.ACCOUNTID
AND A.ACCOUNTID = AL.FROMACCOUNTID
AND NOT EXISTS (select 1 from TRXNStrack trck where trck.TRXNSid = t.TRXNSid AND TRXNSTrackReasonid = 1)
AND T.postdt > A.CLOSEDATE
AND T.postdt >= sysdate-2
AND T.postdt <= sysdate;
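One structural observation, offered as an untested sketch (the table name account_closure is hypothetical): the hierarchical subquery re-walks ACCOUNTLINK once per pAccountid, i.e. 200,000 times. Expanding every account's chain in a single pass and joining to that instead lets the batch run set-wise:

```sql
-- One hierarchical pass over ACCOUNTLINK instead of 200,000:
create table account_closure as
select connect_by_root toaccountid as root_accountid
     , toaccountid
from   accountlink
connect by prior fromaccountid = toaccountid;

-- The per-account IN subquery then becomes a plain join on root_accountid.
```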
Create script for trxn table:
CREATE TABLE SP.TRXNS
(
TRXNSID NUMBER(15) CONSTRAINT "BIN$rpIQEeyLDfbgRAAUT4DEnQ==$0" NOT NULL,
ACCOUNTID NUMBER(15) CONSTRAINT "BIN$rpIQEeyMDfbgRAAUT4DEnQ==$0" NOT NULL,
STATEMENTID NUMBER(15),
TRXNSTYPEID NUMBER(15),
DESCRIPTION VARCHAR2(80 BYTE),
postdt DATE,
TRXNSAMT NUMBER(12,2),
TRXNSREQID NUMBER(15),
LASTUPDATE DATE,
SOURCEID NUMBER(15),
HIDE VARCHAR2(1 BYTE)
)
TABLESPACE SO_TRXN_DATA
RESULT_CACHE (MODE DEFAULT)
PCTUSED 40
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 50M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 8
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE INDEX SP.TRXNS_ACCOUNTID_NIDX ON SP.TRXNS
(ACCOUNTID, postdt)
LOGGING
TABLESPACE SO_TRXN_INDEX
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
NOPARALLEL;
Below is the execution plan for this SQL, taken from Toad:
PLAN_ID 1121, 9/10/2013 3:30

Id  Operation                                        Name                      Rows  Cost
 1  FILTER
 2    FILTER
 3      NESTED LOOPS
 4        NESTED LOOPS                                                            1     5
 5          NESTED LOOPS                                                          1     4
 6            HASH JOIN SEMI                                                      2     3
 7              INDEX FULL SCAN                      ACCOUNTLINK_AK1             18     1
 8              VIEW                                 VW_NSO_1                    18     2
 9                CONNECT BY NO FILTERING WITH START-WITH
10                  INDEX FULL SCAN                  ACCOUNTLINK_AK1             18     1
11            TABLE ACCESS BY INDEX ROWID            TRXNS                        1     1
12              INDEX RANGE SCAN                     TRXNS_ACCOUNTID_NIDX         1     1
13          INDEX UNIQUE SCAN                        ACCOUNT_PK                   1     1
14        TABLE ACCESS BY INDEX ROWID                ACCOUNT                      1     1
15    INDEX RANGE SCAN                               TRXNSTRACK_TRXNSID_NIDX      1     1

Predicate information:
 1 - filter: NOT EXISTS (SELECT 0 FROM "TRXNSTRACK" "TRCK" WHERE "TRXNSTRACKREASONID"=1 AND "TRCK"."TRXNSID"=:B1)
 2 - filter: SYSDATE@!-2<=SYSDATE@!
 6 - access: "AL"."TOACCOUNTID"="TOACCOUNTID"
 9 - access: "TOACCOUNTID"=PRIOR "FROMACCOUNTID"; filter: "TOACCOUNTID"=56354162
12 - access: "AL"."FROMACCOUNTID"="T"."ACCOUNTID" AND "T"."POSTDT">=SYSDATE@!-2 AND "T"."POSTDT"<=SYSDATE@!
13 - access: "A"."ACCOUNTID"="AL"."FROMACCOUNTID"
14 - filter: "A"."CLOSEDATE"<SYSDATE@! AND "T"."POSTDT">"A"."CLOSEDATE"
15 - access: "TRCK"."TRXNSID"=:B1 AND "TRXNSTRACKREASONID"=1
Please help me in debugging this. Thanks!
Hi,
Thanks for your thoughts on this subject. Below is the trace info that I got from the DBA:
SQL ID: d0x879qx2zgtz Plan Hash: 4036333519
SELECT /*+ INDEX(T TRXNS_ACCOUNTID_NIDX) */ AL.FROMACCOUNTID OLDACCOUNTID ,
A.ACCOUNTNUM OLDACCOUNTNUM, T.TRXNSID, T.TRXNSTYPEID, T.DESCRIPTION ,
T.POSTDT, T.TRXNSAMT
FROM
ACCOUNTLINK AL, TRXNS T, ACCOUNT A WHERE AL.TOACCOUNTID IN (SELECT
TOACCOUNTID FROM ACCOUNTLINK START WITH TOACCOUNTID = :B3 CONNECT BY PRIOR
FROMACCOUNTID = TOACCOUNTID) AND AL.FROMACCOUNTID = T.ACCOUNTID AND
A.ACCOUNTID = AL.FROMACCOUNTID AND NOT EXISTS (SELECT 1 FROM TRXNSTRACK
TRCK WHERE TRCK.TRXNSID = T.TRXNSID AND TRXNSTRACKREASONID = :B4 ) AND
T.POSTDT > A.CLOSEDATE AND T.POSTDT >= :B2 AND T.POSTDT <= :B1
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 17160 2.10 1.87 0 0 0 0
Fetch 17160 7354.61 7390.86 169408 5569856 883366791 0
total 34320 7356.71 7392.74 169408 5569856 883366791 0
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 38 (recursive depth: 1)
SQL ID: gs89hpavb4cts Plan Hash: 3415795327
SELECT A.ACCOUNTID, C.MEMBERID, A.PROGRAMID, A.ACCOUNTNUM
FROM
CUSTOMER C, CUSTOMERACCOUNT CA, ACCOUNT A, PROGRAMPARAMVALUE PPV,
BATCHPROCESSPROGRAM BP WHERE A.PROGRAMID = BP.PROGRAMID AND A.PROGRAMID =
PPV.PROGRAMID AND A.ACCOUNTID = CA.ACCOUNTID AND CA.PERSONID = C.PERSONID
AND PPV.PARAMID = :B2 AND PPV.VALUE = 'Y' AND BP.PROCESSID = :B1 AND BP.RUN
= 'Y' AND A.ACCOUNTTYPEID = 4 AND A.ACCOUNTSTATUSID = 1 AND C.MEMBERID IS
NOT NULL
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 172 13.14 115.34 80826 278650 0 17200
total 172 13.14 115.34 80826 278650 0 17200
Misses in library cache during parse: 0
Parsing user id: 38 (recursive depth: 1)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 17160 2.10 1.87 0 0 0 0
Fetch 17332 7367.75 7506.21 250234 5848506 883366791 17200
total 34492 7369.85 7508.09 250234 5848506 883366791 17200
Misses in library cache during parse: 0
2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
Trace file: svoprod_ora_12346.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
66499 lines in trace file.
7516 elapsed seconds in trace file.
-
Join fact table with higher dimension level
How do I join fact tables with higher dimension levels in Discoverer?
fact with detail at level C
measure X
dimension with
D->C->B->A
E->C
level
A  B  C
1  1  1
2  2  1
3  2  1
join between fact X and dimension level C
X = 3*C, because of the sum(X) in Discoverer and the 3 rows per C in the dimension
is there a way to get correct values for X without creating a dimension like
D->C
E->
Another way of asking this is whether you can create a summary table in Discoverer at a higher level than a dimension's fundamental grain. In other words - the summary examples in the documentation all describe leaving out one or more of your dimensions... they are either left in or completely taken out. But some of the most effective summarization occurs when you summarize daily data to a monthly level. Assuming that I have a sales table (at a daily level, with a key value sales_date) and a table date_dim (primary key sales_date), I would like to create a summary sales_month_summary where the sales are grouped on month_year (which is a field in the date_dim table).
How is this done? I suspect that we can't use the date_dim table with the summary (due to the problems noted by the poster above). Do we have to create another table "month_dim"? Do we have to fold all of the desired date attributes (month, quarter, year) into the summary? Obviously we'd like to re-use all of the pertinent already existing date items (quarter, month, year, etc.), not recreate them over again, which would result in essentially two sets of items in the EUL. [One used for this month summary, and another used for the detail.]
I searched the forum - someone asked this same question back in 2000 - there was no answer provided.
The only other thought I have is to "snowflake" the date_dim into two tables and two folders, one at a date level, another at the month level. Then the detail tables can connect to date_dim (which is linked to month_dim), while the summary data can connect directly to month_dim. -
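The fan-out the first poster describes can be demonstrated outside Discoverer. Below is a minimal Python sketch (illustrative names, not actual Discoverer objects) showing why joining a level-C fact to a level-A-grain dimension triples SUM(X), and why joining against the distinct level-C keys — the effect of the snowflaked C-level folder suggested above — keeps the measure correct:

```python
# The fact table carries measure X at dimension level C; the dimension's
# grain is level A, and three A-level rows roll up to the same C.
fact = [{"c": 1, "x": 100}]
dim = [{"a": 1, "b": 1, "c": 1},
       {"a": 2, "b": 2, "c": 1},
       {"a": 3, "b": 2, "c": 1}]

# Naive join at the dimension's finest grain: each fact row matches
# three dimension rows, so the sum is inflated threefold.
naive = sum(f["x"] for f in fact for d in dim if f["c"] == d["c"])

# Joining against the distinct level-C keys (one row per C) instead
# keeps one copy of each fact row, and the measure stays correct.
c_level = {d["c"] for d in dim}
correct = sum(f["x"] for f in fact if f["c"] in c_level)

print(naive, correct)  # 300 100
```

This is why the summary (or snowflaked folder) must not join below the grain at which the measure is stored.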
Decision Service calls failing with connection refused
Hi All,
Briefing of the BPEL process - the async BPEL process XX_BPEL_ASYNC_002 calls decision services and a sync BPEL process XX_BPEL_SYNC_001.
We have moved to a 10.1.3.5.0 cluster from 10.1.3.3.0, and the BPEL process and the decision service are deployed on two nodes, node51 and node55.
But the BPEL process is only able to call the decision service when node51 is UP and running; if node51 is down and node55 is up and running, it fails with a connection refused error. It also works when node55 is DOWN and node51 is UP.
the exact error is:
exception on JaxRpc invoke: HTTP transport error: javax.xml.soap.SOAPException: java.security.PrivilegedActionException: javax.xml.soap.SOAPException: Message send failed: Connection refused.
We are using file based repository.
It is failing when it tries to call particular url:
http://loadbalancerurl:7777/orabpel/domain001/XX_BPEL_ASYNC_002/1.0/decisionservices.decs
Can anyone please help me or advice me for finding some solution.
Thanks
Sreejit

Hi Fatsah,
It has been a while since we resolved this problem. But these were the basic steps that we had taken.
The receiver was configured as a File adapter with FTP:
Port: 10021
Connection security : FTPS (FTP using SSL/TLS) for Control and Data Connection
Command Order: AUTH TLS,USER,PASS,PBSZ,PROT
Problem: The message was blocked in our firewall.
FTPS in this case used 10021 as the control port and the firewall allowed to pass the control request.
Once the connection was accepted at the control level, it generated a random port number to pass data on the channel.
This random port number was blocked in the firewall.
We monitored the data port numbers generated at the firewall, then configured the firewall to open any port number between the specific IP addresses of the sender and the receiver. That resolved the problem.
Hope this helps!
Please give points if this is helpful.
Thank you.
Dharmasiri Amith -
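The root cause above — a fixed control port plus a server-chosen random data port — is inherent to passive-mode FTP: the server's 227 reply to PASV encodes the data-connection endpoint as six comma-separated numbers, with the port computed as p1*256 + p2. A small hedged Python sketch (the helper name and sample reply are illustrative) shows how that port can be extracted, e.g. when reading firewall or adapter logs:

```python
import re

def pasv_data_port(reply: str) -> tuple:
    """Parse a PASV reply like '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'
    and return the data-connection host and port (p1*256 + p2)."""
    nums = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply).group(1)
    h1, h2, h3, h4, p1, p2 = (int(n) for n in nums.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = pasv_data_port("227 Entering Passive Mode (10,0,0,5,195,80)")
print(host, port)  # 10.0.0.5 50000
```

Because that port is unpredictable, a firewall that only opens the control port (10021 here) will silently drop the data channel, which is exactly the failure mode described.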
CONNECT BY LEVEL IN TRIGGER IS IN WAITING STATE
WITH emp_data
AS (SELECT 1 empno,
'mgr1' name,
NULL mgr,
5 num_of_reportees,
'Create' Status
FROM DUAL
UNION ALL
SELECT 2,
'mgr2',
NULL,
6,
'Wait'
FROM DUAL)
SELECT *
FROM emp_data;
create sequence emp_data_seq start with 3 increment by 1;
Now I want to insert a number of employees based on the num_of_reportees column for rows with status = 'Create'; the code below does that.
INSERT INTO emp_data
SELECT emp_data_seq.NEXTVAL,
name,
NULL,
NULL,
NULL
FROM ( SELECT DISTINCT 'emp' || LEVEL name
FROM emp_data
CONNECT BY LEVEL <= num_of_reportees AND status = 'Create');
Initially the status will be Wait; only when the user changes it from Wait to Create should this operation be performed. But when I place this INSERT statement in a trigger it does not execute and stays in a waiting state.
Could you please let me know how to achieve this requirement using a trigger?
I can't use a procedure for this requirement; it's a standard application and we are doing customization.

SQL> SELECT *
2 FROM emp_data
3 /
EMPNO NAME MGR NUM_OF_REPORTEES STATUS
1 mgr1 5 Create
2 mgr2 6 Wait
INSERT
  INTO emp_data
SELECT emp_data_seq.NEXTVAL,
       'emp' || column_value,
       NULL,
       NULL,
       NULL
  FROM emp_data,
       TABLE(
             CAST(
                  MULTISET(
                           SELECT level
                             FROM dual
                             CONNECT BY LEVEL <= num_of_reportees
                          )
                  AS sys.OdciNumberList
                 )
            )
 WHERE status = 'Create'
/

5 rows created.
SQL> SELECT *
2 FROM emp_data
3 /
EMPNO NAME MGR NUM_OF_REPORTEES STATUS
1 mgr1 5 Create
2 mgr2 6 Wait
3 emp1
4 emp2
5 emp3
6 emp4
7 emp5
7 rows selected.
SY.
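The row-generation logic in SY's answer — one batch of emp1..empN rows per manager whose status is 'Create', numbered from a shared sequence — can be sketched in Python for readers who want to check the expected result outside the database (column names follow the thread; the counter stands in for emp_data_seq):

```python
import itertools

# Starting data, matching the thread's emp_data rows.
emp_data = [
    {"empno": 1, "name": "mgr1", "mgr": None, "num_of_reportees": 5, "status": "Create"},
    {"empno": 2, "name": "mgr2", "mgr": None, "num_of_reportees": 6, "status": "Wait"},
]

seq = itertools.count(3)  # emp_data_seq starts with 3

# For each 'Create' row, generate num_of_reportees employees.
# The inner range() plays the role of CONNECT BY LEVEL <= num_of_reportees.
new_rows = [
    {"empno": next(seq), "name": f"emp{level}", "mgr": None,
     "num_of_reportees": None, "status": None}
    for row in emp_data if row["status"] == "Create"
    for level in range(1, row["num_of_reportees"] + 1)
]

emp_data.extend(new_rows)
print(len(new_rows), [r["name"] for r in new_rows])
# 5 ['emp1', 'emp2', 'emp3', 'emp4', 'emp5']
```

Only mgr1's batch is generated (status 'Create', five reportees), matching the "5 rows created." and the seven-row table in the SQL*Plus transcript above.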