Max query
Hello all,
I have two tables, table1 and table2, each with two columns:
Table1: EmployeeID, Value
Table2: UserID, Value
Now I want to create a query that uses the maximum of either ID value:
SELECT t1.Employeeid,t2.UseID from table1 t1,table2 t2
Where t1.Employeeid=t2.userid where Employeeid=max(Employeeid,UseID)
Is it possible ?
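One way to read the question is: take the overall maximum across both ID columns. A sketch, assuming an Oracle database and the column names above:

```sql
-- Sketch: overall maximum across both ID columns (names assumed from the post)
SELECT GREATEST( (SELECT MAX(employeeid) FROM table1)
               , (SELECT MAX(userid)     FROM table2) ) AS max_id
FROM dual;
```

GREATEST of the two per-table maxima gives the same result as taking MAX over a UNION ALL of both columns.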
This is what you wanted, right?
select max(empid)
from ( select empid
       from table1
       union all
       select userid
       from table2
     );
And what I was saying is that
SELECT col1,col2,col3
FROM table1
UNION ALL
SELECT col1,col2,null
FROM table2;
That is, the number of columns in both SELECT lists should be the same.
Edited by: Mahanam on Dec 28, 2010 2:22 AM
Similar Messages
-
SELECT_LIST_FROM_QUERY max query length
Is there a max query length for SELECT_LIST_FROM_QUERY?
I have the basis of a list working OK but when I tried to make the query more complex to meet the business requirement I am getting a
ORA-06550: line 1, column 845: PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: ;
I have tried to ensure that I have all the quoted strings balanced, and the like.
Thanks
Mark
Mark,
How long is your query? The SELECT_LIST_FROM_QUERY function accepts the query as a varchar2, so it’s "limited" to 32767 characters.
Regards,
Marc -
Hello colleagues,
Probably many of you know there is a Max Query Count setting in MII.
Do any of you have performance concerns when dealing with a large query count? Or do any of you have a recommendation for how to set the max query count, e.g. not exceeding memory size so as to avoid swapping?
My customer wants to increase this max count due to a fluctuating record count, but they are concerned about the performance disadvantage.
Any of your experiences are welcome. Thank you,
Shiroh Kinoshita, SAP Japan
Shiroh,
Good to hear from you.
All queries have the RowCount setting, but we typically discourage people from just setting it to a high number. In some cases the data servers will cap this at a maximum (something like 250,000), but that doesn't mean that, from a memory standpoint or a customer-patience standpoint (especially in the browser), you would ever want to return that number of records.
Where is it that you are seeing the record count fluctuate: in applets in the browser, or in query actions in a transaction?
What is it that makes the customer want to increase this to a high number?
Regards,
Jeremy -
Hi
I have two tables that store following information
CREATE TABLE T_FEED (FEED_ID NUMBER, GRP_NUM NUMBER);
CREATE TABLE T_FEED_RCV (FEED_ID NUMBER, RCV_DT DATE);
INSERT INTO T_FEED VALUES (1, 1);
INSERT INTO T_FEED VALUES (2, 1);
INSERT INTO T_FEED VALUES (3, 2);
INSERT INTO T_FEED VALUES (4, NULL);
INSERT INTO T_FEED VALUES (5, NULL);
INSERT INTO T_FEED_RCV VALUES (2, '1-MAY-2009');
INSERT INTO T_FEED_RCV VALUES (3, '1-FEB-2009');
INSERT INTO T_FEED_RCV VALUES (4, '12-MAY-2009');
COMMIT;
I join these tables using the following query to return all the feeds and check when each feed was received:
SELECT
F.FEED_ID,
F.GRP_NUM,
FR.RCV_DT
FROM T_FEED F
LEFT OUTER JOIN T_FEED_RCV FR
ON F.FEED_ID = FR.FEED_ID
ORDER BY GRP_NUM, RCV_DT DESC;
Output
FEED_ID  GRP_NUM  RCV_DT
1        1
2        1        5/1/2009
3        2        2/1/2009
5
4                 5/12/2009
Actually I want the maximum date of when we received the feed. Grp_Num tells which feeds are grouped together. NULL grp_num means they are not grouped so treat them as individual group. In the example - Feeds 1 and 2 are in one group and any one of the feed is required. Feed 3, 4 and 5 are individual groups and all the three are required.
I need a single query that should return the maximum date for the feeds. For this example the result should be NULL, because: out of feeds 1 and 2 the max date is 5/1/2009; for feed 3 the max date is 2/1/2009; for feed 4 it is 5/12/2009; and for feed 5 it is NULL. Since one of the required feeds is NULL, the result should be NULL.
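The stated requirement (NULL overall if any required group has no received date, otherwise the max of the per-group maxima) could be sketched in a single query; the NVL2 grouping key and alias names are my assumptions:

```sql
-- Sketch: per-group MAX(rcv_dt); overall NULL if any group lacks a date
SELECT CASE WHEN COUNT(*) = COUNT(max_dt)   -- every group has a non-NULL max
            THEN MAX(max_dt)
       END AS overall_max_dt
FROM (
      SELECT MAX(fr.rcv_dt) AS max_dt
      FROM   t_feed f
      LEFT OUTER JOIN t_feed_rcv fr ON f.feed_id = fr.feed_id
      GROUP BY NVL2(f.grp_num, 'G' || TO_CHAR(f.grp_num)
                             , 'F' || TO_CHAR(f.feed_id))
     );
```

COUNT(*) counts the groups and COUNT(max_dt) counts only those with a non-NULL max, so the CASE yields NULL exactly when some group has no received date.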
DELETE FROM T_FEED;
DELETE FROM T_FEED_RCV;
COMMIT;
INSERT INTO T_FEED VALUES (1, 1);
INSERT INTO T_FEED VALUES (2, 1);
INSERT INTO T_FEED VALUES (3, NULL);
INSERT INTO T_FEED VALUES (4, NULL);
INSERT INTO T_FEED_RCV VALUES (2, '1-MAY-2009');
INSERT INTO T_FEED_RCV VALUES (3, '1-FEB-2009');
INSERT INTO T_FEED_RCV VALUES (4, '12-MAY-2009');
COMMIT;
For above inserts, the result should be for feed 1 and 2 - 5/1/2009, feed 3 - 2/1/2009 and feed 4 - 5/12/2009. So the max of these dates is 5/12/2009.
I tried using the MAX function grouped by GRP_NUM and also tried DENSE_RANK, but I am unable to resolve the issue. I am not sure how I can use the same query to return a non-null value for grouped feeds and NULL (if any) for those that don't belong to any group. Appreciate it if anyone can help me.
Hi,
Kuul13 wrote:
Thanks Frank!
Appreciate your time and solution. I tweaked your earlier solution, which was cleaner and simpler, and built the following query to resolve the problem.
SELECT * FROM (
SELECT NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY')) AS GRP_ID
,MAX (FR.RCV_DT) AS MAX_DT
FROM T_FEED F
LEFT OUTER JOIN T_FEED_RCV FR ON F.FEED_ID = FR.FEED_ID
GROUP BY NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY'))
ORDER BY MAX_DT DESC NULLS FIRST)
WHERE ROWNUM=1;
I hope there are no hidden issues with this query compared with the later one you provided.
Actually, I can see 4 issues with this. I admit that some of them are unlikely, but why take any chances?
(1) The first argument to NVL is a NUMBER; the second (being the result of ||) is a VARCHAR2. That means one of them will be implicitly converted to the type of the other. This is just the kind of thing that behaves differently in different versions of Oracle, so it may work fine for a year or two and then, when you change to another version, mysteriously quit working. When you have to convert from one type of data to another, always do an explicit conversion, using TO_CHAR (for example).
(2)
F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY') will produce a key like '123405202009'. grp_num is a NUMBER with no restriction on the number of digits, so it could conceivably be 123405202009. The made-up grp_ids must never be the same as any real grp_num.
(3) The combination (carr_id, feed_id, eff_dt) is unique, but using TO_CHAR(EFF_DT, 'MMDDYYYY') assumes that the combination (carr_id, feed_id, TRUNC (eff_dt)) is unique. Even if eff_dt is always entered as (say) midnight (00:00:00) now, you may decide to start using the time of day sometime in the future. What are the chances that you'll remember to change this query when you do? Not very likely. If multiple rows from the same day are relatively rare, this is the kind of error that could go on for months before you even realize that there is an error.
(4) Say you have this data in t_feed:
carr_id  feed_id  eff_dt       grp_num
1        234      20-May-2009  NULL
12       34       20-May-2009  NULL
123      4        20-May-2009  NULL
All of these rows will produce the same grp_id: 123405202009.
Using NVL, as you are doing, allows you to get by with just one sub-query, which is nice.
You can do that and still address all the problems above:
SELECT *
FROM (
      SELECT NVL2 ( f.grp_num
                  , 'A' || TO_CHAR (f.grp_num)
                  , 'B' || TO_CHAR (f.carr_id) || ':' ||
                           TO_CHAR (f.feed_id) || ':' ||
                           TO_CHAR ( f.eff_dt
                                   , 'MMDDYYYYHH24MISS'
                                   )
                  ) AS grp_id
           , MAX (fr.rcv_dt) AS max_dt
      FROM t_feed f
      LEFT OUTER JOIN t_feed_rcv fr ON f.feed_id = fr.feed_id
      GROUP BY NVL2 ( f.grp_num
                    , 'A' || TO_CHAR (f.grp_num)
                    , 'B' || TO_CHAR (f.carr_id) || ':' ||
                             TO_CHAR (f.feed_id) || ':' ||
                             TO_CHAR ( f.eff_dt
                                     , 'MMDDYYYYHH24MISS'
                                     )
                    )
      ORDER BY max_dt DESC NULLS FIRST
     )
WHERE ROWNUM = 1;
I would still use two sub-queries, adding one to compute grp_id, so we don't have to repeat the NVL2 expression. I would also use a WITH clause rather than in-line views.
Do you find it easier to read the query above, or the simpler query you posted in your last message?
Please make things easy on yourself and the people who want to help you. Always format your code so that the way the code looks on the screen makes it clear what the code is doing.
In particular, the formatting should make clear
(a) where each clause (SELECT, FROM, WHERE, ...) of each query begins
(b) where sub-queries begin and end
(c) what each argument to functions is
(d) the scope of parentheses
When you post formatted text on this site, type these 6 characters:
before and after the formatted text, to preserve spacing.
The way you post the DDL (CREATE TABLE ...) and DML (INSERT ...) statements is great: I wish more people were as helpful as you.
There's no need to format the DDL and DML. (If you want to, then go ahead: it does help a little.) -
Hi
I have a table with about 60 million rows and growing (10gR2, Linux x86).
A query for the max value and the row count of the table takes at least 30 seconds, even when doing a fast index scan.
Is there any way to do this better?
10x,
doron
How do you know that "N BETWEEN 83289905 AND 83289955" returns the last 50 rows from the table?
I think you may compare it with this RANK analytic function's results:
SELECT t.*
FROM (SELECT e.*, rank() over(ORDER BY ttime DESC) AS rank
FROM event e
WHERE CATEGORY = 'Alarms') t
WHERE rank <= 50
You may use SQL*Plus and compare the response times and the blocks read from memory and disk:
set timing on
set autotrace traceonly
alter system flush buffer_cache; -- this is to prevent caching effects; use it only if this is a test system, of course
-- your query
alter system flush buffer_cache; -- this is to prevent caching effects; use it only if this is a test system, of course
-- query to compare -
Help with aggregate function max query
I have a large database which stores oil-life and odometer readings from thousands of vehicles, each read about once a month.
I am trying to grab data from one month where EOL_READ = 0 and feed those vehicles into the previous month's query, where EOL_READ can be anything.
Here is the original query which grabs everything
(select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP =vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('04/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
order by OIL_LIFE_PREDICTION)
which gives 5 columns:
STID, CASE_SAK, EOL_READ, ODO_READ, and OIL_LIFE_PREDICTION
and gives me about 80000 rows returned for the above query
I only want one reading per month, but sometimes I get two.
STID is the unique identifier for a vehicle.
I tried this query, which nests one request within the other, and SQL times out every time:
select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP =vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('02/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
and vdh.stid in (select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP =vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('04/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
and (max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000)))) = 0)
order by OIL_LIFE_PREDICTION
so in summary I am trying to select values from the previous month only from those stids where this month's value for EOL_READ = 0
Any ideas? Please help.
You can use row_number() within each stid and each month to determine the first read of each month. Then you can use lag() to get the previous month's reading if the current month's reading is zero.
with all_data as (
    select stid,
           case_sak,
           eol_read,
           timestamp,
           row_number() over (partition by stid, trunc(timestamp,'mm') order by timestamp) as rn
    from veh_diag_history
)
select stid,
       case_sak,
       case
           when eol_read = 0
           then lag(eol_read) over (partition by stid order by timestamp)
           else eol_read
       end as eol_read
from all_data
where rn = 1; -
Rewrite the max query...
The inner select statement runs fast, and a few columns are generated based on A.QSN_DESC_X values. To get a single row per cmpltn_i I do a GROUP BY on cmpltn_i, taking the max of each column. With this GROUP BY the query takes approx 5 minutes. The inner query returns 227,270 records; the final query gives 37,000 records.
Can someone suggest any better way to write this query?
select
CMPLTN_I,
max(emp_i) as emp_i,
max(first_name) as first_name,
max(last_name) as last_name,
max (vendor) as vendor,
max (product_type) as product_type,
max (event_type) as event_type,
max (dollar_amt) as dollar_amt,
max (date_received) as date_received,
max (branch_code) as branch_code
from (
select /*+DRIVING_SITE(A)*/
B.CMPLTN_I,
B.emp_i,
E.EMP_1ST_NME_X as first_name,
E.EMP_LAST_NME_X as last_name,
case when substr(A.QSN_DESC_X,1,6) = 'Vendor' then A.CHCE_DESC_X else null end as vendor,
case when substr(A.QSN_DESC_X,1,12) = 'Product Type' then A.CHCE_DESC_X else null end as product_type,
case when substr(A.QSN_DESC_X,1,10) = 'Event Type' then A.CHCE_DESC_X else null end as event_type,
case when substr(A.QSN_DESC_X,1,13) = 'Dollar Amount' then A.CHCE_DESC_X else null end as dollar_amt,
case when substr(A.QSN_DESC_X,1,13) = 'Date Received' then A.CHCE_DESC_X else null end as date_received,
case when substr(A.QSN_DESC_X,1,16) = 'Branch Wire Code' then A.CHCE_DESC_X else null end as branch_code
from OAT.FORM_FACT@REMOTE_AB A, OAT.FORM_CMPLTN_FACT@REMOTE_AB B, empl_info_dimn E
where A.CMPLTN_I = B.CMPLTN_I
and B.CMPLTN_C = 'C'
and B.app_i = '20'
and E.emp_i = B.emp_i
)
group by
CMPLTN_I;
10g release 2.
cost based, statistics are good
without driving site hint, the response time is bad
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2770348679
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | SELECT STATEMENT REMOTE| | 922K| 139M| | 104K (1)| 00:20:49 | | |
| 1 | SORT GROUP BY | | 922K| 139M| 300M| 104K (1)| 00:20:49 | | |
|* 2 | HASH JOIN | | 922K| 139M| 4976K| 71818 (2)| 00:14:22 | | |
| 3 | REMOTE | EMPL_INFO_DIMN | 86311 | 3961K| | 10903 (1)| 00:02:11 | ! | R->S |
|* 4 | HASH JOIN | | 923K| 97M| | 55274 (2)| 00:11:04 | | |
|* 5 | TABLE ACCESS FULL | FORM_RECIPIENT_CMPLTN_FACT | 24223 | 331K| | 698 (3)| 00:00:09 | OATP0~ | |
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | FORM_RESP_FACT | 10M| 1013M| | 54439 (2)| 00:10:54 | OATP0~ | |
Predicate Information (identified by operation id):
2 - access("A1"."EMP_I"="A2"."EMP_I")
4 - access("A3"."CMPLTN_I"="A2"."CMPLTN_I")
5 - filter("A2"."APP_I"=20 AND "A2"."CMPLTN_I" IS NOT NULL AND "A2"."CMPLTN_C"='C')
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
3 - SELECT /*+ USE_HASH ("A1") */ "EMP_I","EMP_LAST_NME_X","EMP_1ST_NME_X" FROM "DWCSADM"."EMPL_INFO_DIMN" "A1"
(accessing '!' )
Note
- fully remote statement
31 rows selected. -
Need a help with this max query
select SEARCH_ID, SEARCH_KEYWORD, COUNT, ASSET_TYPE from RELEVANCY_TABLE
where SEARCH_KEYWORD = 'search_keyword'
and ASSET_TYPE is not null
SEARCH_ID  SEARCH_KEYWORD  COUNT  ASSET_TYPE
558        search_keyword  3      desk
559        search_keyword  7      table
I actually need to get the asset_type for which count is the maximum. In this case it should be 'table'. Any help?
adfSonal wrote:
Is there any other way? I have to write this query in Java. So I will prefer avoiding rank or such functions.
Won't I get the desired result just using select, where, max, rownum clauses?
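For reference, a plain MAX-based version without analytic functions could look like this (a sketch; `count` is kept as a column name as in the post, though it would normally need quoting since it is a reserved word):

```sql
-- Sketch: rows whose count equals the maximum count for the keyword
SELECT search_id, search_keyword, count, asset_type
FROM   relevancy_table
WHERE  search_keyword = 'search_keyword'
AND    asset_type IS NOT NULL
AND    count = (SELECT MAX(count)
                FROM   relevancy_table
                WHERE  search_keyword = 'search_keyword'
                AND    asset_type IS NOT NULL);
```

Note that, unlike the ROWNUM = 1 form, this returns multiple rows if there is a tie for the maximum count.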
What do you mean by "I have to write this query in Java"? Anyway, the query will be run against an Oracle DB, correct?
Anyway, using ROWNUM:
select *
from (
      select search_id
           , search_keyword
           , count
           , asset_type
      from relevancy_table
      where search_keyword = 'search_keyword'
      and asset_type is not null
      order by count desc
     )
where rownum = 1; -
Hai Friends,
The following is my select query
SELECT MAX( MBLNR )
FROM ZMIGO
INTO TABLE IT_MAX.
Right now my Z table is empty, but when I debug the program it shows 1 entry, and when I double-click the internal table there is no value; it is blank. Why is it showing 1 entry?
Hi,
Try using:-
DATA : v_mblnr TYPE zmigo-mblnr.
SELECT MAX( DISTINCT mblnr ) FROM zmigo INTO v_mblnr.
IF sy-subrc NE 0.
CLEAR v_mblnr.
ENDIF.
Hope this helps you.
Regards,
Tarun -
SQL MAX QUERY / SQL COUNT QUERY
Hi all,
I would like to get the number of rows I have, but I don't seem to be able to work it out with the getRow statements. I think the syntax of my statement is OK, but maybe I am going the wrong way about getting the value out.
I tried with COUNT, and it gives me the same error.
The error is: java.sql.SQLException: Column not found
This is the part of my code
Connection con;
String Q;
try {
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    con = DriverManager.getConnection("jdbc:odbc:RFID Logistics");
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT MAX(Queue) FROM Forklift1");
    while (rs.next()) {
        Q = rs.getString("Queue");
        System.out.println(Q);
    }
} catch (ClassNotFoundException f) {
    f.printStackTrace();
} catch (SQLException f) {
    f.printStackTrace();
}
Thx a lot in advance =)
Please use code tags when posting code. There is a code button above the message editor that inserts the tags.
Do you want to get the number of rows? If so:
Connection con;
int num = 0;
try {
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    con = DriverManager.getConnection("jdbc:odbc:RFID Logistics");
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Forklift1");
    while (rs.next()) {
        num = rs.getInt(1);
    }
    System.out.println(num);
    rs.close();
    stmt.close();
} catch (ClassNotFoundException f) {
    f.printStackTrace();
} catch (SQLException f) {
    f.printStackTrace();
}
This is untested, but should be close enough. (Your original "Column not found" error comes from retrieving the aggregate by the name "Queue": the result column of MAX(Queue) is labeled with the expression, so retrieve it by index, or give it an alias in the SQL.)
hi all,
I have a query that returns the following.
SELECT id, start_time, end_time
FROM accounts
WHERE
id = 006267
ORDER BY start_time
result is
ID START_TIME END_TIME
006267 11-12-2006 15:00:00 11-12-2006 17:30:00
006267 15-12-2006 17:00:00 15-12-2006 19:30:00
006267 18-12-2006 08:30:00 18-12-2006 12:50:00
006267 18-12-2006 13:50:00 18-12-2006 18:20:00
006267 20-12-2006 10:00:00 20-12-2006 12:30:00
006267 21-12-2006 16:15:00 21-12-2006 18:45:00
The output needed is
006267 11-12-2006 15:00:00 15-12-2006 19:30:00
006267 18-12-2006 08:30:00 18-12-2006 18:20:00
006267 20-12-2006 10:00:00 21-12-2006 18:45:00
Would really appreciate your help on this!
Thanks in advance.
To get the next value, you can use LEAD() OVER ().
So, in your case:
lead(end_time) over (partition by id order by start_time)
You need to define the row-removal criterion. If it's just the odd rows, try:
select id, start_time, next_end_time
from
(select id,
        start_time,
        lead(end_time) over (partition by id order by start_time) next_end_time,
        row_number() over (partition by id order by start_time) rn
 from accounts
 order by id, start_time)
where mod(rn,2)=1;
Nicolas.
A bit tricky Min and Max Query
Dear Masters
I have a table
Student_ID
Semester_Name
Registration_Date
Now I want to retrieve, for each student_id, the semester_name with the lowest registration_date (first semester), and the last semester.
I do not want to use a subquery, as it has performance problems.
Any help?
Thanks
Shakeel
Solution:
SELECT student_id,
       MIN(semester_name) KEEP (DENSE_RANK FIRST ORDER BY registration_date) AS first_semester,
       MIN(semester_name) KEEP (DENSE_RANK LAST ORDER BY registration_date) AS last_semester
FROM yourtable
GROUP BY student_id
hth -
Difference of value of a dimension based on min and max
Database: Oracle 10g
BO-BOXIr3
Let me explain the exact problem again.
As per the below code, I have the data in this format in my table:
Code:
Date Site ID KWH
1/2/2009 00:00 IN-1 22
1/2/2009 01:00 IN-1 28
1/3/2009 03:00 IN-2 25
1/3/2009 04:00 IN-2 46
1/4/2009 00:00 IN-3 28
1/4/2009 10:00 IN-3 34
1/5/2009 08:00 IN-4 31
1/5/2009 09:00 IN-4 55
1/5/2009 11:00 IN-4 77
1/6/2009 00:00 IN-5 34
Now want to build a report with following columns:
Site Count KWH
IN-1 2 6 (e.g. 28 - 22)
IN-2 2 21
IN-3 2 6
IN-4 3 46 (e.g. 77 - 31)
IN-5 2 34
SITE - distinct site name.
COUNT - number of repetitions of the site id between the min and max date.
KWH - delta between the KWH values at the min and max date.
To get the above result I have created 3 reports from different queries, since I was not able to get all of these in a single report, viz. Count, Max Value and Min Value. I have all 3 reports or tables on a single page.
Count-this report will give the count between the dates
Max Value-this report will give me the values of kwh for max dates for each site id
Min Value-this report will give me the values of kwh for min dates for each site id
Now want to create a single report based on these 3 reports which contains the column
Site|Count|KWH
IS IT POSSIBLE?
Or
Is it possible to build such report in a single one with all the required column which I mentioned?
The variables which I created to get the max & min dates,
Mx_dt= =Max([Query 2].[Hourly]) In ([Query 2].[SITE_ID])
Mn_dt= =Min([Query 3 (12)].[Hourly]) In ([Query 3 (12)].[SITE_ID])
For filtering on report used following variables:
if_st_mn=If([mn_dt])=[Hourly] Then "ok" Else "no"
if_st_mx =If([mx_dt])=[Hourly] Then "ok" Else "no"
will filter on "ok" to get the max and min date values.
The rest of the variables in the snap are not usable.
Yes, you can do it in one report.
I created a sample report from efashion:
Year | Lines | Sales Revenue
2001 | Accessories | $250
2003 | Accessories | $550
2001 | City Skirts | $1050
2003 | City Skirts | $1150...........
Create 2 variables 1) Count and 2) Difference:
1) Count as formula - =Count([Lines]) In Report
2) Difference as formula - =Sum([Sales revenue]) Where (Max([Year]) In Report = [Year]) - Sum([Sales revenue]) Where (Min([Year]) In Report = [Year])
You can replace the formula with your report variables. Then just drag Site ID, Count and Difference variables to your report.
Thanks
Jai -
Limit no of rows in complex query
Hi,
I have the below cursor query, for which I need to limit the number of rows retrieved to a certain value, say 5000.
I am getting a full table scan on table ltc_iss_rec if I do not specify a range for ltc_iss_rec_id.
The query's purpose: get ref nos whose LC expiry date is less than or equal to ADD_MONTHS(SYSDATE, -15).
The MAX query below is also giving a high cost.
The question is how to limit rows in the cursor without using a range for ltc_iss_rec_id.
The primary key in ltc_iss_rec is (ltc_iss_rec_id,psd_serial_num)
Index in ltc_iss_rec is (ltc_iss_rec_id,lc_expr_dt)
The primary key in psd is (psd_id,psd_serial_no)
primary key in psd_link is (psd_id,psd_serial_no,link_psd_id,link_psd_ser_no)
Index in psd_link is (link_psd_id,link_psd_ser_no)
Index on psd_lcs is (update_serial_no)
primary key on (psd_lcs) is (psd_id,psd_serial_no,other columns)
SELECT MIN(ltc_iss_rec_id)
INTO l_min_ltc_iss_rec_id
FROM ltc_iss_rec
WHERE lc_expr_dt <= ADD_MONTHS(SYSDATE,-15);
SELECT MAX(ltc_iss_rec_id)
INTO l_max_ltc_iss_rec_id
FROM (
      SELECT ltc_iss_rec_id
      FROM ltc_iss_rec
      WHERE ltc_iss_rec_id >= l_min_ltc_iss_rec_id
      ORDER BY ltc_iss_rec_id
     )
WHERE ROWNUM < l_iwh_arch_cnt;
OPEN txn_ref_cur FOR
SELECT b.ltc_iss_rec_id,b.psd_serial_num,a.sys_id,a.cosmos_ref_no,
a.src_sys_asgn_ref_no,a.bank_id,b.lc_expr_dt
FROM psd a,ltc_iss_rec b
WHERE b.ltc_iss_rec_id >= l_min_ltc_iss_rec_id
AND b.ltc_iss_rec_id <= l_max_ltc_iss_rec_id
-- and b.lc_expr_dt <= add_months(sysdate,-15)
AND a.psd_id = b.ltc_iss_rec_id
AND a.psd_serial_no = b.psd_serial_num
AND a.psd_typ_cod = 'ISS'
AND a.psd_serial_no = (
    SELECT NVL(MAX(link_psd_ser_no),'000') FROM psd_link c, psd_lcs d
    WHERE c.psd_id = d.psd_id
    AND c.psd_serial_no = d.psd_serial_num
    AND c.link_psd_id = a.psd_id
    AND c.link_psd_id BETWEEN l_min_ltc_iss_rec_id AND l_max_ltc_iss_rec_id
    AND d.update_serial_num = (
        SELECT MAX(f.update_serial_num) FROM psd_link e, psd_lcs f
        WHERE e.psd_id = f.psd_id
        AND e.psd_serial_no = f.psd_serial_num
        AND e.link_psd_id = a.psd_id
    )
)
ORDER BY b.ltc_iss_rec_id;
Try using the analytic function ROW_NUMBER().
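The ROW_NUMBER suggestion could be sketched against the tables above (treating 5000 as the cap; column names taken from the question):

```sql
-- Sketch: cap the driving set at 5000 rows of ltc_iss_rec via ROW_NUMBER
SELECT ltc_iss_rec_id, psd_serial_num
FROM (
      SELECT r.ltc_iss_rec_id,
             r.psd_serial_num,
             ROW_NUMBER() OVER (ORDER BY r.ltc_iss_rec_id) AS rn
      FROM   ltc_iss_rec r
      WHERE  r.lc_expr_dt <= ADD_MONTHS(SYSDATE, -15)
     )
WHERE rn <= 5000;
```

This avoids the separate MIN/MAX passes over ltc_iss_rec_id and bounds the cursor in a single scan.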
Regards
RK -
How Can i add "DateDiff(day, T0.DueDate" as a column in this query?
SELECT T1.CardCode, T1.CardName, T1.CreditLine, T0.RefDate, T0.Ref1 'Document Number',
CASE WHEN T0.TransType=13 THEN 'Invoice'
WHEN T0.TransType=14 THEN 'Credit Note'
WHEN T0.TransType=30 THEN 'Journal'
WHEN T0.TransType=24 THEN 'Receipt'
END AS 'Document Type',
T0.DueDate, (T0.Debit- T0.Credit) 'Balance'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')<=-1),0) 'Future'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=0 and DateDiff(day, T0.DueDate,'[%1]')<=30),0) 'Current'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>30 and DateDiff(day, T0.DueDate,'[%1]')<=60),0) '31-60 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>60 and DateDiff(day, T0.DueDate,'[%1]')<=90),0) '61-90 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>90 and DateDiff(day, T0.DueDate,'[%1]')<=120),0) '91-120 Days'
,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=121),0) '121+ Days'
FROM JDT1 T0 INNER JOIN OCRD T1 ON T0.ShortName = T1.CardCode
WHERE (T0.MthDate IS NULL OR T0.MthDate > [%1]) AND T0.RefDate <= [%1] AND T1.CardType = 'C'
ORDER BY T1.CardCode, T0.DueDate, T0.Ref1
Hi,
As you mentioned, it is not possible to assign a dynamic column in the query directly.
I will give you an example of generating dynamic column names in a SQL query; using this example you can achieve your requirement.
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(C.Name)
                      FROM [History] C
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     ,1,1,'')
SET @query = 'SELECT [Date],' + @cols + '
from
(
    select [Date], Name, Value
    from [History]
) x
pivot
(
    max(Value)
    for Name in (' + @cols + ')
) p '
EXECUTE(@query)