A better way to write last_day query
Hi folks,
I am looking for a better way to write a query that finds the last day of the month, but if that day is a Sunday or a holiday it should move back to the day before.
So for example, if the 31st is a Sunday it should go for the 30th; if the 30th is a holiday it should move down to the 29th.
I have got this far, but the CONNECT BY LEVEL is hardcoded to 15. I want to see if there is a better way to get this working:
select max(datum)
from ( select last_day(trunc(sysdate)) - level + 1 as datum
from dual
connect by level < 15 )
where to_char(datum, 'fmday') != 'sunday'
and to_char(datum, 'DDMM') not in ('3012')
Best regards,
Igor
Like this:
select to_char(last_day_month, 'Day') day_is,
last_day_month,
last_day_month - case when to_char(last_day_month, 'fmday') = 'sunday' then 1
when to_char(last_day_month, 'ddmm') = '3012' then 1
else 0
end last_business_day_month
from (
select last_day(add_months(trunc(sysdate, 'year'), level - 1)) last_day_month
from dual
connect by level <= 12 )
DAY_IS LAST_DAY_MONTH LAST_BUSINESS_DAY_MONTH
Tuesday 31-JAN-12 31-JAN-12
Wednesday 29-FEB-12 29-FEB-12
Saturday 31-MAR-12 31-MAR-12
Monday 30-APR-12 30-APR-12
Thursday 31-MAY-12 31-MAY-12
Saturday 30-JUN-12 30-JUN-12
Tuesday 31-JUL-12 31-JUL-12
Friday 31-AUG-12 31-AUG-12
Sunday 30-SEP-12 29-SEP-12
Wednesday 31-OCT-12 31-OCT-12
Friday 30-NOV-12 30-NOV-12
Monday 31-DEC-12 31-DEC-12
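The hardcoded '3012' (and the guessed 15-day window) can be avoided if the holidays live in their own table. Below is a sketch of the original query restated that way; it assumes a holidays table with a holiday_date column, which is not part of the thread:

```sql
select max(datum)
from  ( select last_day(trunc(sysdate)) - level + 1 as datum
        from   dual
        -- derive the window from the month length instead of hardcoding 15
        connect by level <= to_number(to_char(last_day(trunc(sysdate)), 'dd')) )
where to_char(datum, 'fmday') != 'sunday'
and   not exists ( select 1
                   from   holidays h          -- assumed table, not from the thread
                   where  h.holiday_date = datum );
```

Note that 'fmday' (rather than 'day') is used so the day name is not blank-padded; the comparison is also NLS-language dependent, so a fixed 'nls_date_language=english' third argument to TO_CHAR would make it more robust.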
Similar Messages
-
Need help to get alternate or better way to write query
Hi,
I am on Oracle 11.2
DDL and sample data
create table tab1 -- 1 million rows at any given time
(
id number not null,
ref_cd varchar2(64) not null,
key varchar2(44) not null,
ctrl_flg varchar2(1),
ins_date date
);
create table tab2 -- close to 100 million rows
(
id number not null,
ref_cd varchar2(64) not null,
key varchar2(44) not null,
ctrl_flg varchar2(1),
ins_date date,
upd_date date
);
insert into tab1 values (1,'ABCDEFG', 'XYZ','Y',sysdate);
insert into tab1 values (2,'XYZABC', 'DEF','Y',sysdate);
insert into tab1 values (3,'PORSTUVW', 'ABC','Y',sysdate);
insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
I need to get all rows from tab1 that do not match tab2, and any row from tab1 that matches ref_cd in tab2 but has a different key.
Expected Query output
'ABCDEFG', 'WYZ'
'XYZABC', 'DEF'
'PORSTUVW', 'ABC'
Existing Query
select
ref_cd,
key
from
(
select
ref_cd,
key
from
tab1, tab2
where
tab1.ref_cd = tab2.ref_cd and
tab1.key <> tab2.key
union
select
ref_cd,
key
from
tab1
where
not exists
(
select 1
from
tab2
where
tab2.ref_cd = tab1.ref_cd
)
);
I am sure there is an alternate, better way to write this query. I would appreciate it if any of you gurus could suggest an alternative solution.
Thanks in advance.
Hi,
user572194 wrote:
... DDL and sample data ...
create table tab2 -- close to 100 million rows
id number not null,
ref_cd varchar2(64) not null,
key varchar2(44) not null,
ctrl_flg varchar2(1),
ins_date date,
upd_date date
insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
Thanks for posting the CREATE TABLE and INSERT statements. Remember why you go to all that trouble: so the people who want to help you can re-create the problem and test their ideas. When you post statements that don't work, it's just a waste of time.
None of the INSERT statements for tab2 work. Tab2 has 6 columns, but the INSERT statements only have 5 values.
Please test your code before you post it.
I need to get all rows from tab1 that does not match tab2
What does "match" mean in this case? Does it mean that tab1.ref_cd = tab2.ref_cd?
and any row from tab1 that matches ref_cd in tab2 but key is different.
Existing Query
select
ref_cd,
key
from
select
ref_cd,
key
from
tab1, tab2
where
tab1.ref_cd = tab2.ref_cd and
tab1.key <> tab2.key
union
select
ref_cd,
key
from
tab1
where
not exists
select 1
from
tab2
where
tab2.ref_cd = tab1.ref_cd
Does that really work? In the first branch of the UNION, you're referencing a column called key, but both tables involved have columns called key. I would expect that to cause an error.
Please test your code before you post it.
Right before UNION, did you mean
tab1.key != tab2.key? As you may have noticed, this site doesn't like to display the <> inequality operator. Always use the other (equivalent) inequality operator, !=, when posting here.
I am sure there will be an alternate way to write this query in better way. Appreciate if any of you gurus suggest alternative solution.
Avoid UNION; it can be very inefficient.
Maybe you want something like this:
SELECT tab1.ref_cd
, tab1.key
FROM tab1
LEFT OUTER JOIN tab2 ON tab2.ref_cd = tab1.ref_cd
WHERE tab2.ref_cd IS NULL
OR tab2.key != tab1.key
; -
Looking for a better way to write this SQL
Oracle version 11R2
OS version (does not matter)
What I am trying to do is write a query that finds public synonyms without a target object. I came up with this, but I think there's a better way.
Select
s.owner, s.synonym_name, s.table_name, s.table_owner, s.db_link, InitCap(o.object_type) object_type
from
sys.DBA_SYNONYMS s, sys.DBA_OBJECTS o
where
s.synonym_name is not null
and
s.table_owner = o.owner (+)
and
s.table_name = o.object_name (+)
and
s.owner = 'PUBLIC'
and
object_type is null;
The "object_type is null" condition appears to be the weakness; it seems the check for the target object could be done better.
Feedback, comments, queries welcome.
I'm not sure exactly what "better" means in this context (faster, easier to read, etc.) but I'd tend to use a NOT EXISTS:
SELECT s.*
FROM dba_synonyms s
WHERE owner = 'PUBLIC'
AND s.db_link IS NULL
AND NOT EXISTS (
SELECT 1
FROM dba_objects o
WHERE o.owner = s.table_owner
AND o.object_name = s.table_name )
I added the DB_LINK criterion to filter out public synonyms that reference objects in remote databases, which obviously don't exist in the local DBA_OBJECTS.
Justin -
Is there better way to write SQl Insert Script
I am running out of ideas while working on one scenario and thought you guys would be able to guide me in the right direction.
So, the scenario is: I have a table, table1(fieldkey, displayname, type), where fieldkey is the primary key. This table has n rows, and the value of fieldkey in the nth row is n. So if we have 1000 records, the last row has fieldkey = 1000.
Below is my sample data.
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1001, 'COfficer',100);
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1002, 'PData',100);
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1003, 'EDate',100);
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1004, 'PData',200);
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1005, 'EDate',300);
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
(1006, 'Owner',400);
This way of inserting rows with a hardcoded fieldkey value was creating problems when the same table was used by other developers for their own new functionality.
So, I thought of using max(fieldkey) + 1 from that table in my insert script. This script file runs every time during software installation.
I thought of using a count to see if a row with the same displayname and type already exists in the table: if it exists, do not insert a new row; if not, insert it.
It looks like I will have to run the count query every time before I insert a row.
select max(fieldkey) + 1 into ll_fieldkey from table1;
select count(*) into ll_count from table1 where ltrim(upper(displayname)) = 'COFFICER' and type = 100;
if ll_count = 0 then
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
( ll_fieldkey, 'COfficer',100);
ll_fieldkey := ll_fieldkey + 1;
end if;
select count(*) into ll_count from table1 where ltrim(upper(displayname)) = 'PDATA' and type = 100;
if ll_count = 0 then
Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
( ll_fieldkey, 'PData',100);
ll_fieldkey := ll_fieldkey + 1;
end if;
... and so on for all the insert statements. So, I was wondering if there is some better way to handle this situation of inserting the data.
Thank you
Hi!
To check whether the same display name and type already exist in the table, I would use a unique key; but then, instead of IF statements, you should code some exception handlers. Hm, a unique key is, in my opinion, a better solution than coded checks.
As for faster inserts and smaller code: if there are no rules for the values and the values are fixed, you have to do these 100 inserts in any case. If you can "calculate" the values then maybe you can figure out some code, but the effect will be the same as a hundred insert statements one after another. A procedure with these 100 inserts is not such a bad solution.
You can fill a nested table with the values and then use a FORALL ... SAVE EXCEPTIONS insert; together with the above-mentioned unique key, maybe this will be better.
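The unique-key plus exception-handler approach described above might look like the following. This is a sketch: the constraint name and the NVL(MAX(fieldkey)) key assignment are illustrative assumptions, not code from the thread.

```sql
-- Illustrative: enforce one row per (displayname, type)
alter table table1 add constraint table1_dn_type_uk unique (displayname, type);

declare
   ll_fieldkey table1.fieldkey%type;
begin
   select nvl(max(fieldkey), 0) + 1 into ll_fieldkey from table1;
   insert into table1 (fieldkey, displayname, type)
   values (ll_fieldkey, 'COfficer', 100);
exception
   when dup_val_on_index then
      null;  -- a row with this displayname/type already exists: skip it
end;
/
```

Note that MAX(fieldkey) + 1 is still unsafe under concurrent installs; a sequence would be the usual fix if that matters here.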
T
Edited by: ttt on 10.3.2010 13:01 -
A better way to write this in pl\sql
I have a query with the following condition:
WHERE a.sortest_tesc_code in ('EB','MB','CH','CL','EP','FL','FR','GL', 'GM','IT','JL','KL','LR','LT','MI','M2', 'IC','2C','MH','PH','SL','SP','UH','WH','WR')
I am using a subquery to retrieve the maximum score:
It is possible to have a student that take up to 6 test (out of those codes) or more.
I have to store the maximum score in a column top1, then top2, top 3
For example:
APPLICANT_PIDM SORTEST_TESC_CODE SORTEST_TEST_SCORE
69136 CH 660
69136 FR 680
69136 WR 600
FR 680 will go to TOP1, CH 660 will go to TOP2, and WR 600 to TOP3.
I DON'T want to create 25 variables to do the comparison. Is there an easy way to do this in PL/SQL? I am thinking of using records, but I am not sure I can do it that way, so I am asking you. Can you give me some ideas? Again, I don't want to create all those variables. ANY IDEAS will be appreciated.
If you have all the eligible scores in the inner_query, then
select * FROM (
select applicant_pidm, sortest_tesc_code, sortest_test_score,
row_number() OVER (PARTITION BY applicant_pidm ORDER BY sortest_test_score DESC) ranking
FROM inner_query )
WHERE ranking <= 3
will give you the raw information you need, then you can use CASE or DECODE to "pivot",
i.e.
select topcode1....
FROM
(select CASE WHEN ranking = 1 THEN sortest_tesc_code ELSE NULL END TopCode1.... )
GROUP BY applicant_pidm
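Put together, the ranking and the pivot might look like the following. This is a sketch: sortest stands in for whatever table or inline view actually holds the scores, and the code list is the one from the question.

```sql
select applicant_pidm,
       max(case when ranking = 1 then sortest_tesc_code  end) as top1_code,
       max(case when ranking = 1 then sortest_test_score end) as top1_score,
       max(case when ranking = 2 then sortest_tesc_code  end) as top2_code,
       max(case when ranking = 2 then sortest_test_score end) as top2_score,
       max(case when ranking = 3 then sortest_tesc_code  end) as top3_code,
       max(case when ranking = 3 then sortest_test_score end) as top3_score
from  ( select applicant_pidm, sortest_tesc_code, sortest_test_score,
               row_number() over (partition by applicant_pidm
                                  order by sortest_test_score desc) as ranking
        from   sortest   -- assumed name for the scores table
        where  sortest_tesc_code in ('EB','MB','CH','CL','EP','FL','FR','GL',
                                     'GM','IT','JL','KL','LR','LT','MI','M2',
                                     'IC','2C','MH','PH','SL','SP','UH','WH','WR') )
where ranking <= 3
group by applicant_pidm;
```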
Jon -
A better way to write this.
My hope was to run through the list of inks, check each ink to see if it matched a name in my array, and then assign a value. My attempts were unsuccessful, so I went with a bunch of if statements. I know I should be able to reduce this down to: if the ink's name matches a name in this array, assign it the corresponding value from that array.
When I add a second loop to get to the array name, the first loop stops working.
I just need a pointer in the right direction.
myDocument=app.activeDocument;
densityChange();
function densityChange(){
/*for (i=0; i<myDocument.swatches.length; i++){
name = myDocument.swatches[i].name;
//neutralDensity
alert(name);
}*/
var ssDensityChng=(["203 Mandarin","201 Orange","601 Purple","202 Met Orange","302 Yellow","401 Bright Grn","301 Golden Yel","106 Bright Red","104 Cadmium Red","102 Maroon","103 Dark Red","101 Burgundy","501 Met Blue","602 Met Purple","404 Met Sage","712 Bronze","704 Silver","702Met Pewter"]);
var myDensity=([0.0341,0.0699,0.1076,0.1473,0.1892,0.2337,0.281,0.3315,0.3858,0.4444,0.508,0.5776,0.6545,0.7403,0.8375,0.9493,1.0813,1.242]);
for (i=0; i<myDocument.inks.length; i++){
var name = myDocument.inks[i].name;
var density = myDocument.inks[i].neutralDensity;
if (name == "203 Mandarin") {
density = myDocument.inks[i].neutralDensity = 0.0341;
} else if (name == "201 Orange") {
density = myDocument.inks[i].neutralDensity = 0.0699;
} else if (name == "601 Purple") {
density = myDocument.inks[i].neutralDensity = 0.1076;
} else if (name == "202 Met Orange") {
density = myDocument.inks[i].neutralDensity = 0.1473;
} else if (name == "302 Yellow") {
density = myDocument.inks[i].neutralDensity = 0.1892;
} else if (name == "401 Bright Grn") {
density = myDocument.inks[i].neutralDensity = 0.2337;
} else if (name == "301 Golden Yel") {
density = myDocument.inks[i].neutralDensity = 0.281;
} else if (name == "106 Bright Red") {
density = myDocument.inks[i].neutralDensity = 0.3315;
} else if (name == "104 Cadmium Red") {
density = myDocument.inks[i].neutralDensity = 0.3858;
} else if (name == "102 Maroon") {
density = myDocument.inks[i].neutralDensity = 0.4444;
} else if (name == "103 Dark Red") {
density = myDocument.inks[i].neutralDensity = 0.508;
} else if (name == "101 Burgundy") {
density = myDocument.inks[i].neutralDensity = 0.5776;
} else if (name == "501 Met Blue") {
density = myDocument.inks[i].neutralDensity = 0.6545;
} else if (name == "602 Met Purple") {
density = myDocument.inks[i].neutralDensity = 0.7403;
} else if (name == "404 Met Sage") {
density = myDocument.inks[i].neutralDensity = 0.8375;
} else if (name == "712 Bronze") {
density = myDocument.inks[i].neutralDensity = 0.9493;
} else if (name == "704 Silver") {
density = myDocument.inks[i].neutralDensity = 1.0813;
} else if (name == "702Met Pewter") {
density = myDocument.inks[i].neutralDensity = 1.242;
}
}
}
Sorry I didn't provide much in the way of explanation with the code. The links Jongware provided should help you understand what's happening in the loop.
i and l are variable declarations. They aren't defined until the for statement. It's considered good style in certain precincts to only have one var declaration statement per function or program, which makes explicit how JavaScript will treat them—by hoisting them to the top, that is. This is to help you not make some incorrect but perfectly natural assumptions about the scope of your variables. Douglas Crockford and his JSLint are the driving forces behind this convention.
The l in my loop is probably confusing. I'm simply caching the length of the doc.inks collection so I don't have to check it on every go-round. This can make a surprising performance difference in large loops but is totally unnecessary here. (But that's the way I have it set up in Text Expander, so that's how it comes out!)
The if statement is checking whether the name of the current ink is one of the keys in the densities object/dictionary/hash/associative array. If it is, then the ink's neutralDensity is assigned the value of that key. So if the ink's name is "203 Mandarin", its neutralDensity will be assigned the value of densities["203 Mandarin"], which is 0.0341. I think I'm not explaining very well; probably stepping through it in the ESTK and looking at your values in the console will be more enlightening.
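The lookup-object approach described above can be sketched in plain JavaScript like this. The ink names and densities are the ones from the original script; the inks argument is a stand-in array, since myDocument.inks only exists inside InDesign.

```javascript
// Map each ink name to its neutral density once, instead of an if-chain.
var densities = {
  "203 Mandarin": 0.0341, "201 Orange": 0.0699, "601 Purple": 0.1076,
  "202 Met Orange": 0.1473, "302 Yellow": 0.1892, "401 Bright Grn": 0.2337,
  "301 Golden Yel": 0.281, "106 Bright Red": 0.3315, "104 Cadmium Red": 0.3858,
  "102 Maroon": 0.4444, "103 Dark Red": 0.508, "101 Burgundy": 0.5776,
  "501 Met Blue": 0.6545, "602 Met Purple": 0.7403, "404 Met Sage": 0.8375,
  "712 Bronze": 0.9493, "704 Silver": 1.0813, "702Met Pewter": 1.242
};

function densityChange(inks) {
  for (var i = 0, l = inks.length; i < l; i++) {
    var name = inks[i].name;
    // Only touch inks whose name is a key in the lookup object.
    if (densities.hasOwnProperty(name)) {
      inks[i].neutralDensity = densities[name];
    }
  }
}
```

In the real script the call would be densityChange(app.activeDocument.inks).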
Jeff -
Is there any better way of writing this query??
I have a query on insert which looks like this,
INSERT INTO TEMP( I1,I2)
SELECT TI1 FROM CLIENT1
WHERE R_CD ='PR' OR 'SR',
SELECT TI2 FROM CLIENT2
WHERE R_CD = 'MN' OR 'OP
There are two tables where the source data is coming from, to be inserted into the TEMP table. I find this query to be inefficient. Can anybody help me write a good one? Thanks.
A possible solution,
INSERT INTO TEMP( I1,I2)
SELECT C1.TI1, C2.TI2
FROM CLIENT1 C1, CLIENT2 C2
WHERE (C1.R_CD = 'PR' OR C1.R_CD ='SR') AND
(C2.R_CD = 'MN' OR C2.R_CD = 'OP')
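The OR pairs can also be written as IN lists. The same insert, restated as a sketch; note that, like the version above, this is a cross join, so every qualifying CLIENT1 row is paired with every qualifying CLIENT2 row, which may or may not be the intended result:

```sql
INSERT INTO TEMP (I1, I2)
SELECT C1.TI1, C2.TI2
FROM   CLIENT1 C1
       CROSS JOIN CLIENT2 C2
WHERE  C1.R_CD IN ('PR', 'SR')
AND    C2.R_CD IN ('MN', 'OP');
```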
-
Better to write this query -- the UNION kills the query
Is there a better way to write this query to avoid the union?
CREATE TABLE EMP
(
EMP_ID NUMBER,
LAST_NAME VARCHAR2(20),
FIRST_NAME VARCHAR2(20),
MID_NAME VARCHAR2(20)
);
CREATE TABLE EMP_NM
(
EMP_ID NUMBER,
LAST_NAME VARCHAR2(20),
FIRST_NAME VARCHAR2(20),
MID_NAME VARCHAR2(20)
);
INSERT INTO EMP VALUES (1, 'ANDERSON', 'SCOTT', NULL);
INSERT INTO EMP VALUES (2, 'KEVINSKY', 'KEVIN', NULL);
INSERT INTO EMP_NM VALUES (1, 'ANDERSON', 'SCOTT', NULL);
INSERT INTO EMP_NM VALUES (1, 'LEE', 'SCOTT', 'K');
INSERT INTO EMP_NM VALUES (2, 'KEVINSKY', 'KEVIN', NULL);
INSERT INTO EMP_NM VALUES (2, 'ANDERSON', 'KEVIN', NULL);
SELECT
E.EMP_ID,
E.LAST_NAME,
E.FIRST_NAME,
E.MID_NAME
FROM
EMP E
WHERE
E.LAST_NAME = :LAST_NAME
UNION
SELECT
E.EMP_ID,
E.LAST_NAME,
E.FIRST_NAME,
E.MID_NAME
FROM
(
SELECT EN.EMP_ID
FROM
EMP_NM EN
WHERE
EN.LAST_NAME = :LAST_NAME ) EN1,
EMP E
WHERE
E.EMP_ID = EN1.EMP_ID
EXPLAIN PLAN without sort
SELECT STATEMENT Optimizer Mode=CHOOSE 171 K 6717
FILTER
TABLE ACCESS FULL EMP 171 K 5 M 6717
TABLE ACCESS BY INDEX ROWID EMP_NM 1 14 1
INDEX RANGE SCAN IDXEMP_ID 1 3
EXPLAIN PLAN WITH SORT
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 171 K 7658
SORT ORDER BY 171 K 5 M 7658
FILTER
TABLE ACCESS FULL EMP 171 K 5 M 6717
TABLE ACCESS BY INDEX ROWID EMP_NM 1 14 1
INDEX RANGE SCAN IDXEMP_ID 1 3 -
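One way to avoid the UNION (and the duplicate-eliminating SORT visible in the second plan) is a single pass over EMP with an EXISTS probe into EMP_NM. A sketch, untested against the poster's data, assuming EMP_ID identifies a row in EMP:

```sql
SELECT E.EMP_ID,
       E.LAST_NAME,
       E.FIRST_NAME,
       E.MID_NAME
FROM   EMP E
WHERE  E.LAST_NAME = :LAST_NAME
OR     EXISTS ( SELECT 1
                FROM   EMP_NM EN
                WHERE  EN.EMP_ID    = E.EMP_ID
                AND    EN.LAST_NAME = :LAST_NAME );
```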
Best way to write a function that returns query texts
Hi, I need to write a function that takes a parameter containing a table name and, based upon it, returns the text of a query. Here is what I am thinking of doing, but I would like feedback on whether there is a better way to do it.
For example:
FUNCTION getSQL(p_1 VARCHAR2, p_2 VARCHAR2) RETURN VARCHAR2
IS
sql varchar2(2000);
BEGIN
--Here I want an IF THEN ELSE part that returns the desired sql based on p_1. There will be more than a dozen sqls that the function can return.
IF p_1 = 'Employee' THEN
sql := 'Select * from employee Where employee_id = ''' || p_2 || '''';
ELSIF p_1 = 'Department' THEN
sql := 'Select col1, col2, col3,';
sql := sql || ' col4, col5, col6';
sql := sql || ' from department';
sql := sql || ' Where dept_id = ''' || p_2 || '''';
ELSIF ...
ELSIF ...
END IF;
RETURN sql;
END;
Edited by: dreporter on Sep 20, 2010 8:43 AM
I'm never sure I understand the desire to put lots of cursors into a single stored procedure, given that they are inherently different things.
However, assuming you have made the above decision, you should almost certainly look at using the overload of dbms_xmlgen.newcontext that accepts a cursor parameter rather than a string. This would allow you to return cursors from your function rather than strings, giving you the option to use bind variables (and prevent associated SQL injection) and perhaps not even do dynamic SQL at all, something like this perhaps...
FUNCTION get_cursor (
p_cursor_type VARCHAR2,
p_cursor_parameter VARCHAR2)
RETURN sys_refcursor
IS
v_sys_cursor sys_refcursor;
BEGIN
CASE p_cursor_type
WHEN 'Employee' THEN
OPEN v_sys_cursor FOR
SELECT empno, ename
FROM emp
WHERE empno = TO_NUMBER (p_cursor_parameter);
WHEN 'Department' THEN
OPEN v_sys_cursor FOR
SELECT deptno, dname
FROM dept
WHERE deptno = TO_NUMBER (p_cursor_parameter);
END CASE;
RETURN v_sys_cursor;
END get_cursor;
/ -
Kindly help with rewriting the foll. query in a better way
Is there a better way of writing the following query?
When I have 12,50,00,000 (125 million) rows in the fact table, the query is unable to execute: it uses more than 200GB of temporary space, and I still get a Temp Tablespace Full error.
--The following WITH clause calculates the sum of Debit-Credit to get BLNC according to the Group By values
WITH crnt_blnc_set
AS ( SELECT f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.pstng_crncy_id AS crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id AS txn_id,
f.acntng_entry_src AS txn_src,
f.acntng_entry_typ AS acntng_entry_typ,
f.val_dt_dmn_id,
f.revsn_dt,
SUM (
DECODE (
f.pstng_typ,
'Credit', f.pstng_amnt,
0))
- SUM (
DECODE (
f.pstng_typ,
'Debit', f.pstng_amnt,
0))
AS blnc
FROM FactTable f
GROUP BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.pstng_crncy_id,
f.acnt_dmn_id,
f.txn_id,
f.acntng_entry_src,
f.acntng_entry_typ,
f.val_dt_dmn_id,
f.revsn_dt),
--The following WITH clause calculates the min and max date ids for the Group By conditions as mentioned
min_mx_dt
AS ( SELECT /*+parallel(32)*/
f.hrarchy_dmn_id AS hrarchy_dmn_id,
f.prduct_dmn_id AS prduct_dmn_id,
f.crncy_dmn_id AS crncy_dmn_id,
f.acnt_dmn_id AS acnt_dmn_id,
f.txn_id AS txn_id,
f.txn_src AS txn_src,
f.acntng_entry_typ AS acntng_entry_typ,
MIN (f.val_dt_dmn_id) AS min_val_dt,
GREATEST (MAX (f.val_dt_dmn_id), 2689) AS max_val_dt
FROM crnt_blnc_set f
GROUP BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ),
/*Foll WITH Clause has a Cartesian Join on date_dmn to populate missing entries
This requirement is because if we have a distinct row for
hrarchy_dmn_id,
prduct_dmn_id,
crncy_dmn_id,
acnt_dmn_id,
txn_id,
txn_src,
acntng_entry_typ combination, and if we have a missing entry for that in the max date provided, then we actually create those missing entries*/
slctd_rcrds
AS ( SELECT /*+ ordered use_nl(d) parallel(mx, 4) */
mx.hrarchy_dmn_id AS hrarchy_dmn_id,
mx.prduct_dmn_id AS prduct_dmn_id,
mx.crncy_dmn_id AS crncy_dmn_id,
mx.acnt_dmn_id AS acnt_dmn_id,
mx.txn_id AS txn_id,
mx.txn_src AS txn_src,
mx.acntng_entry_typ AS acntng_entry_typ,
d.date_value AS val_dt,
d.date_dmn_id AS val_dt_dmn_id
FROM min_mx_dt mx, date_dmn d
WHERE mx.min_val_dt <= d.date_dmn_id
AND mx.max_val_dt >= d.date_dmn_id),
--The following WITH clause does an outer join with the first WITH clause to populate the values accordingly
cmbnd_rcrds
AS (
SELECT /*+ USE_HASH(c) */ s.hrarchy_dmn_id AS hrarchy_dmn_id,
s.prduct_dmn_id AS prduct_dmn_id,
s.crncy_dmn_id AS crncy_dmn_id,
s.acnt_dmn_id AS acnt_dmn_id,
s.txn_id AS txn_id,
s.txn_src AS txn_src,
s.acntng_entry_typ AS acntng_entry_typ,
s.val_dt_dmn_id AS val_dt_dmn_id,
NVL (c.revsn_dt, s.val_dt) AS revsn_dt,
NVL (c.blnc, 0) AS blnc,
0 AS prvs_rcrd_ind
FROM slctd_rcrds s, crnt_blnc_set c
WHERE s.hrarchy_dmn_id = c.hrarchy_dmn_id(+)
AND s.prduct_dmn_id = c.prduct_dmn_id(+)
AND s.crncy_dmn_id = c.crncy_dmn_id(+)
AND s.acnt_dmn_id = c.acnt_dmn_id(+)
AND s.txn_id = c.txn_id(+)
AND s.txn_src = c.txn_src(+)
AND s.acntng_entry_typ = c.acntng_entry_typ(+)
AND s.val_dt_dmn_id = c.val_dt_dmn_id(+))
Select * from cmbnd_rcrds
Thanks for the response, Alfonso. I have tried that as well, but CREATE TABLE AS also uses temp storage until the table is created, and that again gives the same error.
Anyway, I am now trying with a smaller set. This piece gets executed in half an hour, but the next piece, where we pivot the data, is taking forever now.
That piece is as follows:
fnl_set AS
(SELECT /*+parallel(8)*/
f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ,
f.val_dt_dmn_id,
f.revsn_dt,
SUM (
blnc)
OVER (
PARTITION BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ
ORDER BY d.date_value)
AS crnt_blnc,
SUM (
blnc)
OVER (
PARTITION BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ,
d.fin_mnth_num
|| d.fin_year_strt
|| d.fin_year_end
ORDER BY d.date_value)
/ d.mnth_to_dt
AS mtd_avg,
SUM (
blnc)
OVER (
PARTITION BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ,
d.fin_year_strt || d.fin_year_end
ORDER BY d.date_value)
/ yr_to_dt
AS ytd_avg,
f.prvs_rcrd_ind AS prvs_rcrd_ind
FROM cmbnd_rcrds f, thor_date_dmn d
WHERE d.holiday_ind = 0 AND f.val_dt_dmn_id = d.date_dmn_id)
SELECT f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id AS txn_id,
f.txn_src AS acntng_entry_src,
f.acntng_entry_typ AS acntng_entry_typ,
f.val_dt_dmn_id,
f.revsn_dt,
f.crnt_blnc,
f.mtd_avg,
f.ytd_avg,
'EOD TB ETL' AS crtd_by,
SYSTIMESTAMP AS crtn_dt,
NULL AS mdfd_by,
NULL AS mdfctn_dt
FROM fnl_set f
WHERE f.prvs_rcrd_ind = 0
ORDER BY f.hrarchy_dmn_id,
f.prduct_dmn_id,
f.crncy_dmn_id,
f.acnt_dmn_id,
f.txn_id,
f.txn_src,
f.acntng_entry_typ,
f.val_dt_dmn_id
Any other way to pivot this?
Also I am getting a lot of the following wait events:
PX Deq Credit :Send blkd
PX Deq :Table Q Normal
Direct Path Write Temp
And Direct Path Read Temp. -
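One Oracle feature that might replace the min/max-plus-Cartesian densification in the original query is a partitioned outer join, which fills the missing dates per dimension combination in a single step. This is a sketch, untested at this scale; the table and column names are taken from the thread, and a date-range filter on date_dmn would still be needed to bound the densified range:

```sql
SELECT f.hrarchy_dmn_id, f.prduct_dmn_id, f.crncy_dmn_id, f.acnt_dmn_id,
       f.txn_id, f.txn_src, f.acntng_entry_typ,
       d.date_dmn_id      AS val_dt_dmn_id,
       NVL(f.blnc, 0)     AS blnc
FROM   crnt_blnc_set f
       PARTITION BY (f.hrarchy_dmn_id, f.prduct_dmn_id, f.crncy_dmn_id,
                     f.acnt_dmn_id, f.txn_id, f.txn_src, f.acntng_entry_typ)
       RIGHT OUTER JOIN date_dmn d
         ON (f.val_dt_dmn_id = d.date_dmn_id);
```

Because it densifies within each partition directly, it avoids materializing the min_mx_dt and slctd_rcrds intermediate sets that are filling temp.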
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
The nearest I have managed with JDO is (pseudo code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult(
"assetType.description, assetDescription.description");
q1.setOrdering(
"assetType.description ascending,
assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters(
"String myAssetType, String myAssetDescription");
q2.setFilter(
"assetType.description == myAssetType &&
assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
Object[] cols = (Object[]) i.next();
String assetType = (String) cols[0];
String assetDescription = (String) cols[1];
String type_description =
assetDescription != null
? assetType + "~" + assetDescription
: assetType;
if (distinct.add(type_description)) {
Object[] cols2 =
(Object[]) q2.execute(assetType,
assetDescription);
// System.out.println(
// "type "
// + assetType
// + ", description "
// + assetDescription
// + ", count "
// + cols2[0]
// + ", sum "
// + cols2[1]);
q2.closeAll();
}
}
q1.closeAll();
Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
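For reference, once grouping support lands, the whole Java-side loop could collapse into a single query along these lines. This is a sketch of the JDO2-style grouping API, not something the reply says current Kodo supports:

```java
// Hypothetical single query replacing q1/q2 and the HashSet loop:
KodoQuery q = (KodoQuery) pm.newQuery(Asset.class);
q.setResult("assetType.description, assetDescription.description, "
          + "count(this), sum(purchPrice)");
q.setGrouping("assetType.description, assetDescription.description");
q.setOrdering("assetType.description ascending, "
            + "assetDescription.description ascending");
Collection results = (Collection) q.execute();
```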
-Patrick
Neil Bacon wrote:
(snip)
-
Hi Guys,
In 10g DB,
I am retrieving the data from value_column; the column has empids for most of the rows, but emailids for some rows.
The data I am getting is like this (here I am joining three tables). Based on this data I need to retrieve only emailids.
VALUE_COLUMN
2959345
[email protected]
6560043
2392044
[email protected]
Now, I want to get emailids for all the employees. So wherever an empid is present, I need to get the email for that employee.
Please can any one help me on this.
Thanks in advance!
Rgrds,
-LRK
Hi,
Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) and the results you want from that data.
How do you know if value_column is an email address, an empid, or something else? I'll assume that any string containing the character '@' is an email address, and anything else is an empid.
When value_column contains an empid, how do you get the email address? I'll assume you have another column that contains the email address in those cases.
Here's one way to do that, given only value_column and that other column:
SELECT CASE
           WHEN INSTR (value_column, '@') != 0
           THEN value_column
           ELSE some_other_column
       END AS email
FROM table_x;
However, it might be better to re-write some part of your existing query, to produce the same results more efficiently. -
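For completeness, the same '@' test can be mirrored outside the database, e.g. in Java. A minimal sketch; the method name, column roles, and sample values are invented, not from the thread:

```java
public class EmailPick {
    // Mirrors the SQL CASE above: any value containing '@' is taken as an
    // email address; anything else falls back to the other column's value.
    static String email(String valueColumn, String otherColumn) {
        return valueColumn != null && valueColumn.indexOf('@') >= 0
            ? valueColumn
            : otherColumn;
    }

    public static void main(String[] args) {
        System.out.println(email("scott@example.com", null));  // scott@example.com
        System.out.println(email("2959345", "hr@example.com")); // hr@example.com
    }
}
```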
How to write a named query if we want to use the IN syntax in our SQL statement?
I cannot find a suitable category about named query, so please move to appropriate place if there is any.
When we write named query, below statement is fine.
Query q2 = em.createQuery("SELECT o FROM Table1 as o WHERE field1 = :input1"); q2.setParameter("input1", "value1");
Now, my question is: how can I write this type of query when we want to use the IN SQL syntax? The statement below CANNOT return correct results. I even tried putting a pair of single quotes [ ':input2' ], but that doesn't help either.
Query q2 = em.createQuery("SELECT o FROM Table1 as o WHERE field2 IN (:input2)"); q2.setParameter("input1", "3633, 3644");
Can anyone suggest? Thanks.
roamer wrote:
Now, my question is, how can I write this type of query when we want to use the IN sql syntax? As below statement CANNOT return correct results. Even I tried to put a pair of single quote [ ':input2' ], it won't help also.
Query q2 = em.createQuery("SELECT o FROM Table1 as o WHERE field2 IN (:input2)");
q2.setParameter("input1", "3633, 3644");
Can anyone suggest?
The above is in your code, right? Not in some configuration file?
Then you do it the same way as with regular jdbc/sql.
1. Start with a collection of values - call it collection A.
2. Create a for loop that dynamically builds the query string using 'bind' variables (the 'colon' entities above).
3. Call the createQuery method using the string that was created.
4. Create a second loop that iterates over A and populates each parameter with setParameter.
Pseudo code:
Object[] A = ...;
String sql = "SELECT o FROM Table1 as o WHERE field2 IN (";
for (int i = 1; i <= A.length; i++) {
    if (i == 1) {
        sql += ":input" + i;
    } else {
        sql += ",:input" + i;
    }
}
sql += ")";
Query q2 = em.createQuery(sql);
for (int i = 1; i <= A.length; i++) {
    q2.setParameter("input" + i, A[i - 1]);
}
By the way there is a jdbc forum. -
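The pseudo code above can be fleshed out into a runnable sketch that builds the JPQL string and the matching parameter map, without needing an EntityManager; the class, entity, and field names are placeholders:

```java
import java.util.*;

public class InClauseBuilder {
    // Builds a JPQL string with one named parameter per value in the IN list.
    static String buildQuery(int n) {
        StringBuilder sql = new StringBuilder(
            "SELECT o FROM Table1 o WHERE o.field2 IN (");
        for (int i = 1; i <= n; i++) {
            if (i > 1) sql.append(",");
            sql.append(":input").append(i);
        }
        return sql.append(")").toString();
    }

    // Pairs each value with the parameter name used in buildQuery, so the
    // caller can loop over the map and call setParameter for each entry.
    static Map<String, Object> params(Object[] values) {
        Map<String, Object> p = new LinkedHashMap<>();
        for (int i = 0; i < values.length; i++) {
            p.put("input" + (i + 1), values[i]);
        }
        return p;
    }

    public static void main(String[] args) {
        Object[] values = {3633, 3644};
        System.out.println(buildQuery(values.length));
        // SELECT o FROM Table1 o WHERE o.field2 IN (:input1,:input2)
        System.out.println(params(values));
        // {input1=3633, input2=3644}
    }
}
```

Note that passing the whole list as a single parameter value (the asker's "3633, 3644" string) binds one literal string, not two numbers, which is why the original attempt returned nothing.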
A better way to activate an LOV
I want to allow users to activate the LOVs and run queries from them without having to press the Enter Query button, move the cursor to the appropriate text box, press Ctrl+L, and then press the Execute Query button. Is there some way to fire an LOV from a button press, or some simpler way than I have right now?
That's really a good way.
Originally posted by Steven Lietuvnikas ([email protected]):
I follow what you're saying, but I think I found a better way to do it.
I made a button that says Search on it, and its When-Button-Pressed trigger is simply ENTER_QUERY.
This then moves the focus to the first item in the record block.
I then made a When-New-Item-Instance trigger on the first item in the record block that says:
DECLARE
DUMMY BOOLEAN;
BEGIN
If (:system.Mode = 'ENTER-QUERY') Then
DUMMY := SHOW_LOV('LOV_PERSONS',15,10);
EXECUTE_QUERY;
END IF;
END;
This works well. -
A better way to Read Bytes until Done ?
My question involves reading an unknown number of bytes from the serial
port. One method is to just wait a defined amount of time to receive
the bytes and hope you caught all of them. Better: count them until
they are all received. I have attached an image which shows the method
I use to count bytes until all bytes are received. However, this method
is rather slow. It's taking me 500mS to read data that I can normally
read in 230mS. Is the shift register making this loop slow? Is there a
better way to read the bytes until all bytes are in?
Richard
Attachments:
ReadBytes.gif 5 KB
Thanks for your reply Chilly.
Actually, you are correct on both counts, the delay and the wire from the output of Write, and in fact, I usually do those things. I was just in a hurry to get an example posted and didn't have my real data.
To put my issue in broader perspective, I have attached an image that shows both methods I've used to count bytes. One shows the method I use to count bytes until all bytes are received. The other is a "hard wired" timer method that I try to avoid, but at least it is fast. Notice the text in red. Why would I be able to read the bytes so much faster in the "hard wired" version? It has to be either the shift register, the AND, or the compare functions, and I still can't believe these things would be slower than the Express timer function in the other method.
Methodology: the 235mS timer used in the "hard wired" method was empirically derived. The 150 bytes are received 100% of the time when 235mS is allowed; 230mS yields about 90% of the bytes, and then it goes down exponentially. I found that the shift register method takes ~515mS on average by putting a tick count on both sides of the loop and computing the resulting time. I also did that with the hard wired method and found it to be dead on 235mS, which at least proves the Express VI works very well.
Message Edited by Broken Arrow on 09-05-2005 08:31 PM
Message Edited by Broken Arrow on 09-05-2005 08:32 PM
Richard
Attachments:
CoutingBytes.GIF 25 KB
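The count-until-done idea in this thread is language-independent: keep reading until the expected byte count arrives or a deadline expires, whichever comes first. A minimal Java sketch of the same loop, with an InputStream standing in for the serial port; the class name, byte counts, and timeout values are invented:

```java
import java.io.*;

public class ReadUntilCount {
    // Reads until expectedCount bytes have arrived or timeoutMs elapses.
    // Returns whatever was collected, so a short read signals a timeout.
    static byte[] readBytes(InputStream in, int expectedCount, long timeoutMs)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream(expectedCount);
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (buf.size() < expectedCount
                && System.currentTimeMillis() < deadline) {
            int avail = in.available();
            if (avail > 0) {
                byte[] chunk =
                    new byte[Math.min(avail, expectedCount - buf.size())];
                int n = in.read(chunk);
                buf.write(chunk, 0, n);
            } else {
                // Nothing waiting yet: yield briefly instead of spinning hard.
                try { Thread.sleep(1); } catch (InterruptedException e) { break; }
            }
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});
        byte[] out = readBytes(in, 5, 100);
        System.out.println(out.length); // 5
    }
}
```

The deadline check bounds the worst case the way the 235mS "hard wired" timer does, while the count check lets the common case finish as soon as the last byte lands.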