Lowest value in CKF column in a query which uses structures.
Hi Experts:
I have the following user requirement.
I need to find the lowest of the values in a calculated key figure column.
In the rows I have a structure with 6 structure elements; each structure element is restricted by 0PLANT to one plant.
Plant / Period Spend / Period Volume / $/case / Best of Best
1000 / 1000 / 20 / 50 / 20
2000 / 2000 / 100 / 20 / 20
3000 / 3000 / 70 / 43 / 20
The column $/case is a calculated key figure, and I need the Best of Best column to contain the lowest value of the $/case column.
This Best of Best column needs to be used in another calculated key figure.
Anyone who has a solution will be awarded SDN points.
N
Hello,
You can try creating the Best of Best calculated key figure and, in its properties, setting both "Calculate Single Values As" and "Calculate Results As" to Minimum; then you can use this CKF in another CKF. If it is not going to be used globally, you can try the same with formulas.
Similar Messages
-
How to return Parameters values as a column in a query?
Hi All,
I have number of parameters in a report.
I need to return its interred values as a column in a query to use this column in a chart.
I want this column to be the second one in this query:
SELECT ROWNUM
FROM ALL_OBJECTS
WHERE ROWNUM <= 10
That is if there is any way to use these parameters directly as a column in the chart no need for the previous statement.
Note: I am using Reports 6i.
Dear sir,
You can enter a parameter as a column in the query, like:
select :parameter p1, &lexical_parameter p2, empcode
from emps;
That query makes the parameter a query column.
The lexical parameter (&lexical_parameter) can refer to a database column of table emps,
and the bind parameter (:parameter) can supply a static string. -
Conditional formatting: greatest and lowest values in a column?
I have a column of data, with about 30 rows of values. Is it possible to setup conditional formatting so that Numbers highlights the top three highest values and the lowest three values in the column?
thanks!
How's this:
1. Create the second table for the large and small formulas. I did one column, you should be able to copy it to the rest of them.
2. Create the conditional format for the first data cell in the first row of your data table (go to the cell inspector and choose "show rules" to get to the conditional formatting pane).
3. Select the cell, Copy Style, select all the other data cells in the row and Paste Style to copy that conditional format to the other columns. At this point, the conditional format in the other columns will be incorrect (they will all refer to the large/small values for the first column), you'll have to modify each manually so they refer to the correct cells in the "large/small" table.
4. Copy Style and Paste Style each of those conditional formats to the rest of the cells in each column.
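For step 1, the helper table's formulas can be as simple as the following (a sketch; it assumes the data lives in column A of a table named Table 1, and uses Numbers' LARGE and SMALL functions):

```
=LARGE(Table 1 :: A, 1)   largest value
=LARGE(Table 1 :: A, 3)   3rd-largest value
=SMALL(Table 1 :: A, 1)   smallest value
=SMALL(Table 1 :: A, 3)   3rd-smallest value
```

The conditional-format rules on the data cells then compare against these: "greater than or equal to" the 3rd-largest cell, and "less than or equal to" the 3rd-smallest cell.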
One caveat: you may get more than three smallest/largest values highlighted under certain conditions. For instance, if you have four 1's and "1" is your smallest value, all four would get highlighted. If you have two 1's and two 2's and they are your smallest values, all four would get highlighted. If you have 1, 2, 3, 3 as your smallest values, you would also get four highlighted. The same applies to the largest values. It doesn't appear that this would be a problem for your data. -
Function to know the value of the nth column of a query
Hi,
I want to create a function where the input will be some SQL statement (a select statement) and the output will be the value of the 2nd column (or, say, the nth column), knowing only the column position (say 2 or 3 or n).
I know this is possible through DBMS_SQL but do not know how.
Please let me know how to do this.
Regards
Surendra
I'm not sure I understand either, but you could read the following doc, especially example 8, which seems to be close to what you want to do:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_sql.htm#i997238
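Following that document, a hedged sketch of such a function (it assumes the nth select-list item can be fetched as a VARCHAR2, and it returns the value from the first row only):

```sql
CREATE OR REPLACE FUNCTION get_nth_column_value (
  p_query IN VARCHAR2,
  p_pos   IN PLS_INTEGER
) RETURN VARCHAR2
IS
  l_cursor INTEGER := DBMS_SQL.OPEN_CURSOR;
  l_value  VARCHAR2(4000);
  l_status INTEGER;
BEGIN
  DBMS_SQL.PARSE(l_cursor, p_query, DBMS_SQL.NATIVE);
  -- Define the nth select-list item as a VARCHAR2 buffer
  DBMS_SQL.DEFINE_COLUMN(l_cursor, p_pos, l_value, 4000);
  l_status := DBMS_SQL.EXECUTE(l_cursor);
  IF DBMS_SQL.FETCH_ROWS(l_cursor) > 0 THEN
    -- Copy the nth column of the fetched row into l_value
    DBMS_SQL.COLUMN_VALUE(l_cursor, p_pos, l_value);
  END IF;
  DBMS_SQL.CLOSE_CURSOR(l_cursor);
  RETURN l_value;
EXCEPTION
  WHEN OTHERS THEN
    IF DBMS_SQL.IS_OPEN(l_cursor) THEN
      DBMS_SQL.CLOSE_CURSOR(l_cursor);
    END IF;
    RAISE;
END;
```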
Nicolas. -
Bex Query which uses Dynamic columns to display actuals
Hi Bex experts,
I have a query issue/question.
I currently have a Bex query which shows me the planned values for each period, spanning 6 years into the future. My key figure columns are defined as follows:
Value type = '020'
Version = mandatory variable, entered at execution.
Posting period (FISCPER3) = These columns are fixed values using periods 1 to 12 for each column.
Fiscal year (0FISCYEAR) = Each column contains the SAP exit for the current year, using the offsets +1, +2, +3, +4 etc. when I define the future-year columns.
Currency = fixed 'USD'.
Fiscal year variant = fixed 'Z4'
The above works fine for plan data.
What I now want to include is:
Separate 'dynamic columns' to show only actuals, for the period range from period one to the previous period (current period minus 1). Each period should have its own column for actuals.
The dynamic actuals columns should be grouped together to the left of the plan columns.
Actuals are only for current year, so I will still use the SAP EXIT for current year in the column definition.
Example: If I am currently in period 10, the query should show me actuals from period 1 to period 9 in separate columns, then continue to show my plan value columns that are already in place.
How can I construct these actuals columns in my existing query? Screenshots would help if you have them.
Thanks, and maximum points will be allotted.
The way I have approached this you may not like, as it involves quite a bit of coding:
12 CKFs
each CKF adds up 2 RKFs
So 24 RKFs
example Column 6 CKF
Adds Column 6 RKF Actual and Column 6 RKF Plan
Column 6 RKF Actual contains Actual version + key figure + Period variable column 6 Actual
Column 6 RKF Plan contains Plan version + key figure + Period Variable column 6 Plan
Period variable column 6 Actual
is a CMOD variable which reads the entered date:
if period 6 is less than or equal to the period of the entered date,
then return period 6 into "Period variable column 6 Actual",
else put 0 into "Period variable column 6 Actual".
Period variable column 6 Plan
is a CMOD variable which reads the entered date:
if period 6 is less than or equal to the period of the entered date,
then return period 0 into "Period variable column 6 Plan",
else put 6 into "Period variable column 6 Plan".
Now what happens is that if you enter period 6 on your selection screen, all the Actual columns greater than 6 have period 0 put into their selection, so they return 0, and all the columns less than or equal to 6 return the values for their fiscal period (i.e. column 1 gets period 1).
In addition, all the Plan columns greater than 6 return the value of their own period, and those less than or equal to 6 return 0.
It's convoluted - but you get the idea - and yes, it works.
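The selection logic the two CMOD variables implement is easier to see written out; a sketch in Python (the function names are illustrative, this is not the CMOD code itself):

```python
def actual_period(column: int, entered_period: int) -> int:
    # "Period variable column N Actual": select the column's own
    # period when that period has already been reached, else period 0
    # (which holds no postings, so the column shows nothing).
    return column if column <= entered_period else 0


def plan_period(column: int, entered_period: int) -> int:
    # "Period variable column N Plan": the mirror image, so each
    # column shows either actuals or plan but never both.
    return 0 if column <= entered_period else column


# Entering period 6: columns 1-6 carry actuals, 7-12 carry plan.
actuals = [actual_period(c, 6) for c in range(1, 13)]
plans = [plan_period(c, 6) for c in range(1, 13)]
```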
There may be a better way to do it - and I am open to suggestions
(this does assume that NOTHING is posted to period 0 otherwise it won't work) -
How to retrieve value of one column to other in Apex using Javascript
Hi all,
Can anyone help me in solving this problem?
How do I send a value from one column to another column using JavaScript in Apex? I heard that we can use onChange().
My requirement is,
I have a VARCHAR2 column in a form where I enter a value, and the result needs to be sent to another (NUMBER) column when I press Apply Changes.
For example, if I enter a value such as 1/3 or 1/4 or 1/3*1/6,
the evaluated result should be entered in the other column.
Message was edited by:
Raman
Try something like:
html_GetElement('P1_ITEM2').value = eval(html_GetElement('P1_ITEM1').value); -
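The eval approach above works because eval interprets a string such as "1/3*1/6" as JavaScript arithmetic. A minimal sketch with the evaluation pulled into a reusable function (the item names are assumptions, and eval trusts its input, so validate the string in a real page):

```javascript
// Evaluate an arithmetic string such as "1/3" or "1/3*1/6".
// eval executes arbitrary JavaScript, so in a real page check the
// input first, e.g. with /^[0-9+\-*/(). ]+$/.test(expr).
function evaluateExpression(expr) {
  return eval(expr);
}

// Apex wire-up sketch (html_GetElement wraps getElementById):
// html_GetElement('P1_ITEM2').value =
//   evaluateExpression(html_GetElement('P1_ITEM1').value);
```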
Problem inserting value in CLOB column from an XML file using XSU
Hi,
When I try to insert a CLOB value into an Oracle9i database from an XML document using XSU, I get the exception below.
09:37:32,392 ERROR [STDERR] oracle.xml.sql.OracleXMLSQLException: 'java.sql.SQLException: ORA-03237: Initial Extent of specified size cannot be allocated
ORA-06512: at "SYS.DBMS_LOB", line 395
ORA-06512: at line 1
' encountered during processing ROW element 0. All prior XML row changes were rolled back. in the XML document.
All element tags in the XML doc are mapped to columns in the database. One of the table columns is a CLOB; that is the one that gives the above exception. Here is the XML...
ID is an autogenerated value.
<?xml version="1.0" ?>
<ROWSET>
<ROW num="1">
<ID></ID>
<SEQ>
GCATAGTTGTTATGAAGAAATGGAAGAAAAATGCACTCAAAGTTGGGCTGTCAGGCTGTCTGGGGCTGAATTCTGGTGTGACAGTGTGATGAAGCCATCTTTGAGCCTAAATTTGATAATGAGCCAGTCATGATCTGGTTGTGATTACTATAACAAGATTAAATCTGAATAAGAGAGCCACAACTTCTTTAAAGACAGATTGTCAAGTCATTACATGGAAGAGGGAGATTGCTCCTTTGTAAATCAGGCTGTCAGGCCAACTGAATGAAGGACGTCATTGTACAGTAACCTGATGAAGATCAGATCAACCGCTCACCTCGCCG
</SEQ>
</ROW>
</ROWSET>
Can anyone identify the problem and suggest a solution for this?
Thanks in advance..
Viji
Would you please specify the XDK version and database version?
-
Invalid column name in query string - using Format function
In my post just before this one the problem was solved for writing a query string using a date range. The rest of the query string includes the same date field (Call_Date) but formatted as 'MMM-YY'. I get an invalid column name error when I add this field to the query string. Here is the rest of the query string:
strSql = "SELECT Format(CALL_DATE,'mmm-yy'), " _
& "HOME_REGION FROM CCC2.CASE_EPRP " _
& "WHERE (HOME_REGION = 'NCR') AND " _
& "(CALL_DATE >= to_date( '1/1/2002', 'MM/DD/YYYY' )" _
& " AND CALL_DATE <= to_date( '2/28/2003', 'MM/DD/YYYY' ))"
In the Access Query tool I can include this field
Format(CALL_DATE,'mmm-yy')
and the query runs fine (I just need to make it dynamic using ADO). But in my ADO query string above, I get the invalid column name error. Is there a way I can include
Format(CALL_DATE,'mmm-yy')
in my ADO query string? I apologize for not being more familiar with Oracle SQL. Any help greatly appreciated.
Thanks again,
Rich
Thank you very much for your reply. I think I'm getting closer to the solution, but I got the error message
"date format not recognized"
when I added "to_char( call_date, 'mmm-yy' )" to the query string. I tried using all uppercase, but that did not make a difference. Do I need to use to_date inside the to_char, maybe?
to_char(to_date(call_date, 'mmm/yy'), 'mmm-yy')
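For what it's worth, CALL_DATE is already a DATE column, so TO_CHAR applies to it directly with no inner TO_DATE; giving the expression a column alias also avoids the original invalid-column-name problem when ADO refers to the result. A sketch of the whole statement under that assumption:

```sql
SELECT TO_CHAR(call_date, 'MON-YY') AS call_month,  -- alias avoids the
       home_region                                  -- invalid-name error
FROM   ccc2.case_eprp
WHERE  home_region = 'NCR'
AND    call_date >= TO_DATE('1/1/2002', 'MM/DD/YYYY')
AND    call_date <= TO_DATE('2/28/2003', 'MM/DD/YYYY');
```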
Thanks again for your help.
Rich -
Set Column width in query (not using SQL*Plus)
How can I Set Column width in query
I understand you can set column width using
column col1 FORMAT A5
select col1 from table1;
But this only works in SQL*Plus.
I want to be able to do this in a regular SQL query window (not in SQL*Plus), how can I do it.....
I am using a 'SQL window' in PL/SQL Developer IDE
and when I use this syntax it says:
ORA-00900: Invalid SQL statement
Any suggestions are appreciated...
thanks,
M.
Did you try the RPAD or LPAD functions? They pad the unfilled part of a string with the character you provide, on the right or left side depending on which function you use.
e.g.
SELECT RPAD('Smith', 10, ' ') Name FROM dual;
http://www.adp-gmbh.ch/ora/sql/rpad.html
Edited by: Zaafran Ahmed on Nov 10, 2010 11:50 AM -
Improving a simple select query, which uses all rows.
Hi All,
Please excuse me if the question is too silly. Below is my code
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
Elapsed: 00:00:00.07
SQL> show parameter optim
NAME TYPE VALUE
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL> explain plan for select SUM(decode(transaction_type,'D',txn_amount,0)) payments_reversals,
2 SUM(decode(transaction_type,'C',txn_amount,0)) payments,primary_card_no,statement_date
3 from credit_card_pymt_dtls group by primary_card_no,statement_date;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2801218574
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1912K| 56M| | 21466 (3)| 00:04:18 |
| 1 | SORT GROUP BY | | 1912K| 56M| 161M| 21466 (3)| 00:04:18 |
| 2 | TABLE ACCESS FULL| CREDIT_CARD_PYMT_DTLS | 1912K| 56M| | 4863 (3)| 00:00:59 |
9 rows selected.
SQL> select index_name,index_type
2 from all_indexes
3 where table_name = 'CREDIT_CARD_PYMT_DTLS';
INDEX_NAME INDEX_TYPE
INDX_TRANTYPE BITMAP
INDX_PCARD NORMAL
INDX_PSTATEMENT_DATE NORMAL
The query is using all the records in the CREDIT_CARD_PYMT_DTLS table. The transaction type will be either 'C' or 'D'.
CREDIT_CARD_PYMT_DTLS has 2 million rows and the query will output 1.5 million rows. Table statistics are up to date.
The query now takes almost 5 minutes. Is there any way to reduce the time?
Our DB server has 8 CPUs and 8 GB memory. Is that timing genuine?
Thanks in Advance.
Edited by: user11115924 on Apr 29, 2009 2:43 AM
All the columns used in the query are already indexed. (Of course, not only for this query.)
Hi All,
Thanks for the helps provided. Expecting it once more..
My actual query is as below
select primary_card_no,base_segment_number,atab.previous_balance,current_balance,intrest_amt_due_this_cycle,total_min_amt_due,total_credit_limit,
total_purchase_this_cycle,total_cash_trns_this_cycle,available_credit_limit,payments,utilization,payment_ratio,payments_reversals,cash_limit,
available_cash_limit, description
from
( select primary_card_no,DECODE(base_segment_number,NULL,primary_card_no,base_segment_number) base_segment_number,
SUM(previous_balance) previous_balance,SUM(current_balance) current_balance ,SUM(intrest_amt_due_this_cycle) intrest_amt_due_this_cycle,
SUM(total_min_amt_due) total_min_amt_due,SUM(total_credit_limit_all) total_credit_limit,
SUM(total_purchase_this_cycle) total_purchase_this_cycle,SUM(total_cash_trns_this_cycle) total_cash_trns_this_cycle,
SUM(available_credit_limit) available_credit_limit,SUM(payments) payments,
(SUM(NVL(current_balance,0)) / SUM(total_credit_limit_all)) * 100 utilization,
(SUM(NVL(payments,0)) / DECODE(SUM(previous_balance),0,NULL,SUM(previous_balance))) * 100 payment_ratio,
SUM(payments_reversals) payments_reversals,SUM(cash_limit) cash_limit,SUM(available_cash_limit) available_cash_limit
from
( select a.*,NVL(payments_reversals,0)payments_reversals ,NVL(payments,0) payments
from
( select primary_card_no,previous_balance,current_balance,intrest_amt_due_this_cycle,total_min_amt_due,total_purchase_this_cycle,
total_cash_trns_this_cycle,statement_date,available_credit_limit,cash_limit,available_cash_limit,
(case when statement_date <= TO_DATE('301108','ddmmyy') then NULLIF(total_credit_limit,0)
else NULLIF((select credit_limit
from ccm_dbf_chtxn_v0 t1
where t1.batch_id = '011208'
and SUBSTR(t1.card_number,4) = a.primary_card_no),0)
end) total_credit_limit_all
from
( select primary_card_no,previous_balance,current_balance,INTREST_AMT_DUE_THIS_CYCLE,
TOTAL_MIN_AMT_DUE,TOTAL_PURCHASE_THIS_CYCLE,TOTAL_CASH_TRNS_THIS_CYCLE,statement_date,
AVAILABLE_CREDIT_LIMIT,cash_limit,available_cash_limit,total_credit_limit
from credit_card_master_all@FGBAPPL_LINK
) a
where statement_date between ADD_MONTHS(TRUNC(SYSDATE,'mm'),-6) and TRUNC(SYSDATE,'mm')-1
) a,
( select SUM(decode(transaction_type,'D',txn_amount,0)) payments_reversals,
SUM(decode(transaction_type,'C',txn_amount,0)) payments,primary_card_no,TO_CHAR(statement_date,'MON-RRRR') sdate
from credit_card_pymt_dtls
group by primary_card_no,TO_CHAR(statement_date,'MON-RRRR')
) b
where TO_CHAR(a.statement_date,'MON-RRRR')= b.sdate(+)
and a.primary_card_no= b.primary_card_no(+)
) a,
( select SUBSTR(a.card_number,4) card_number,base_segment_number,TO_DATE(account_creation_date,'DDMMYYYY') account_creation_date,
a.batch_id, credit_limit credit_limit_current
from
( select *
from ccm_dbf_phtxn_v0
where batch_id= (SELECT to_char(MAX(TO_DATE(SUBSTR(BATCH_ID,1,6),'DDMMRR')),'DDMMRR') FROM CCM_MST_V0)
) a,
( select *
from ccm_dbf_chtxn_v0
where batch_id=(SELECT to_char(MAX(TO_DATE(SUBSTR(BATCH_ID,1,6),'DDMMRR')),'DDMMRR') FROM CCM_MST_V0)
) b
where a.card_number=b.card_number
and TO_NUMBER(ROUND(MONTHS_BETWEEN(SYSDATE,TO_DATE(account_creation_date,'DDMMYYYY')),2)) >=6
and a.company ='BNK'
) b
where a.primary_card_no = b.card_number
group by primary_card_no,base_segment_number) atab, card_summary_param btab
where utilization between utilization_low and utilization_high
and payment_ratio between payment_ratio_low and payment_ratio_high
and SIGN(atab.previous_balance) = btab.previous_balance
Where do I have to put the PARALLEL hint for maximum performance?
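As a general sketch (the degree of 4 is an arbitrary assumption): the PARALLEL hint goes in the SELECT of the query block whose table should be scanned in parallel, and it references the table's alias. On the simpler aggregate query from earlier in the thread, that looks like:

```sql
SELECT /*+ PARALLEL(d, 4) */
       SUM(DECODE(transaction_type, 'D', txn_amount, 0)) AS payments_reversals,
       SUM(DECODE(transaction_type, 'C', txn_amount, 0)) AS payments,
       primary_card_no,
       statement_date
FROM   credit_card_pymt_dtls d   -- hint names this alias
GROUP BY primary_card_no, statement_date;
```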
Sorry for asking blindly without doing any R&D. Time is not permitting that...
Edited by: user11115924 on Apr 29, 2009 5:09 AM
Sorry for the kiddy aliases.. Query is not written by me.. -
A column in a query which can be expanded
Hi guru,
I would like to build a query in which I can expand and collapse columns.
The first column will have the age (<= 20 [5, 10, 15, 20]); the other columns will be 5, 10, 15, 20. (These columns can be expanded or collapsed.)
How can I build a report which shows age 20 with the number of employees when it is collapsed, and each age category up to twenty when it is expanded?
Thanks for your help.
Gilo
Hi,
You can try to create a structure with the char and specific selections that you like. Right click the structure and choose Hierarchy Display. See here for details:
http://help.sap.com/saphelp_nw04/helpdata/en/4d/e2bebb41da1d42917100471b364efa/content.htm
Hope this helps... -
Wrong value in Result Row in a query which includes 1:N relationship SO/Inv
Dear all,
I have an InfoSet which includes sales order DSO, PO DSO and Invoice DSO. The purpose is to design a report to show 3rd party order flow.
Now I have hit an issue when one sales item contains multiple billing documents. Below is an example:
Sales doc S001 has an order qty of 600, and there are two invoices for this order: B001 for 200 and B002 for 400. Now in the report we see:
Sales Doc. / Invoice No. / Order Qty / Invoice Qty
S001 / B001 / 600 / 200
S001 / B002 / 600 / 400
Result Row / / 1200 / 600
We can see that the total order qty in the result row is 1200, which is wrong.
I am wondering if anybody has had such an experience and knows a method of solving this problem? Many thanks in advance!
Hi,
Normal aggregation calculates the result row as a summation. However, you can change this default behaviour by using exception aggregation on the "Order Qty" key figure.
You can find a detailed article with examples about Exception aggregation at:
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f0b8ed5b-1025-2d10-b193-839cfdf7362a
I hope this helps you.
Regards,
Maximiliano -
Fine tuning a query which uses the /*+ USE_CONCAT */ table hint
Some of the queries in our application use the /*+ USE_CONCAT */ table hint and take a very long time to execute in the Oracle database. When we run the query below against our 250-million-row production database, it takes approx. 3 minutes.
Because of this we are facing a performance issue in the application.
Below is the sample query :
SELECT /*+ USE_CONCAT */ * FROM DI_MATCH_KEY WHERE NORM_COUNTRY_CD = 'US'
AND ((( NORM_CONAME_KEY1 ='WILM I' OR NORM_CONAME_KEY2 = 'WILM I' OR NORM_CONAME_KEY23 = 'WILM I'
OR NORM_CONAME_KEYFIRST ='WILLIAM' ) AND NORM_STATE_PROVINCE = 'CA' ) OR NORM_ADDR_KEY2 = 'CALMN 12 3 OSAI')
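For comparison, the OR-expansion that USE_CONCAT forces can also be written out by hand as a UNION ALL, which makes the per-branch cost visible and sometimes helps the optimizer. A sketch (LNNVL keeps rows that match both branches from being returned twice):

```sql
-- Branch 1: the address-key predicate alone
SELECT *
FROM   di_match_key
WHERE  norm_country_cd = 'US'
AND    norm_addr_key2 = 'CALMN 12 3 OSAI'
UNION ALL
-- Branch 2: the name-key predicates, excluding branch-1 rows
SELECT *
FROM   di_match_key
WHERE  norm_country_cd = 'US'
AND    norm_state_province = 'CA'
AND    (norm_coname_key1 = 'WILM I'
        OR norm_coname_key2 = 'WILM I'
        OR norm_coname_key23 = 'WILM I'
        OR norm_coname_keyfirst = 'WILLIAM')
AND    LNNVL(norm_addr_key2 = 'CALMN 12 3 OSAI');
```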
Regarding the indexes: indexes already exist on the table for almost all of the column combinations used in the above SQL.
Your suggestions will be appreciated
Thanks,
Regards,
Krishna kumar
Hi,
Thanks for your valuable inputs.
As suggested, please find the explain plan and trace file details below.
TRACE FILE:
<pre>
TKPROF: Release 10.2.0.1.0 - Production on Thu Feb 14 11:11:59 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: implhr01_ora_26457.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT /*+ USE_CONCAT */ COUNT(*) FROM DI_MATCH_KEY WHERE NORM_COUNTRY_CD = 'US'
AND ((( NORM_CONAME_KEY1 ='WILM I' OR NORM_CONAME_KEY2 = 'WILM I' OR NORM_CONAME_KEY23 = 'WILM I'
OR NORM_CONAME_KEYFIRST ='WILLIAM' ) AND NORM_STATE_PROVINCE = 'CA' ) OR NORM_ADDR_KEY2 = 'CALMN 12 3 OSAI')
call count cpu elapsed disk query current rows
Parse 1 0.03 0.05 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 3.97 65.43 30633 64053 0 1
total 4 4.00 65.49 30633 64053 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 76
Rows Row Source Operation
1 SORT AGGREGATE (cr=64053 pr=30633 pw=0 time=65436584 us)
73914 CONCATENATION (cr=64053 pr=30633 pw=0 time=61623673 us)
0 TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=4 pr=2 pw=0 time=32617 us)
0 INDEX RANGE SCAN MKADDR_KEY2 (cr=4 pr=2 pw=0 time=32599 us)(object id 122583)
75 TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=80 pr=20 pw=0 time=1369427 us)
75 INDEX RANGE SCAN MK_CY_KEY1_ST_PROV (cr=5 pr=0 pw=0 time=2119 us)(object id 122666)
2 TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=81 pr=2 pw=0 time=26723 us)
77 INDEX RANGE SCAN MK_CY_KEY2_ST_PROV (cr=4 pr=0 pw=0 time=3641 us)(object id 122667)
2 TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=6 pr=0 pw=0 time=47 us)
2 INDEX RANGE SCAN MK_CY_KEY1_AGN (cr=4 pr=0 pw=0 time=28 us)(object id 122670)
73835 TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=63882 pr=30609 pw=0 time=61503773 us)
73905 INDEX RANGE SCAN MK_CY_KEY2_AGN (cr=266 pr=176 pw=0 time=148198 us)(object id 122745)
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 76
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.03 0.05 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 3.97 65.43 30633 64053 0 1
total 5 4.00 65.50 30633 64053 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.05 0.03 0 0 0 0
Execute 23 0.05 0.08 0 0 0 0
Fetch 98 0.01 0.01 0 137 0 76
total 130 0.11 0.13 0 137 0 76
Misses in library cache during parse: 9
Misses in library cache during execute: 9
2 user SQL statements in session.
23 internal SQL statements in session.
25 SQL statements in session.
Trace file: implhr01_ora_26457.trc
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
2 user SQL statements in trace file.
23 internal SQL statements in trace file.
25 SQL statements in trace file.
11 unique SQL statements in trace file.
561 lines in trace file.
91 elapsed seconds in trace file.
</pre>
EXPLAIN PLAN:
<pre>
ID OPERATION OBJECT_NAME CARDINALITY BYTES COST CPU_COST
0 SELECT STATEMENT 1 53 1886 15753172
1 SORT 1 53
2 CONCATENATION
3 TABLE ACCESS DI_MATCH_KEY 8 424 10 78334
4 INDEX MKADDR_KEY2 8 4 30086
5 TABLE ACCESS DI_MATCH_KEY 3 159 7 53574
6 INDEX MK_CY_KEY1_ST_PROV 3 4 29286
7 TABLE ACCESS DI_MATCH_KEY 3 159 7 53740
8 INDEX MK_CY_KEY2_ST_PROV 3 4 29286
9 TABLE ACCESS DI_MATCH_KEY 5 265 7 55941
10 INDEX MK_CY_KEY1_AGN 5 4 29686
11 TABLE ACCESS DI_MATCH_KEY 2192 116176 1855 15511582
12 INDEX MK_CY_KEY2_AGN 2228 12 524136
</pre>
Kindly help us regarding this.
Thanks,
Krishna Kumar. -
How to set aggregation rule (SUM) to a query which uses multiple tables
Hi,
I have a query and want to get the sum of a few columns in it. How can I achieve that when it joins multiple tables? I am posting the query below; please help me resolve this. I need the summation of the columns marked in bold. I am sorry to post such a big query. Thanks in advance.
SELECT DISTINCT
SAS.ACCOUNT_MONTH_NO,
SAS.BILL_TO_MAJOR_SALES_CHANNEL,
SAS.BUS_AREA_ID,
SAS.CUST_NAME,
SAS.PART_NO,
SAS.PART_DESC,
SAS.PRODUCT_CLASS_CODE,
SAS.SUPER_FAMILY_CODE,
*SAS.NET_SALES_AMT_COA_CURR,*
*SAS.GROSS_SALES_AMT_COA_CURR*,
*SAS.SHIPPED_QTY*,
SAS.SRCE_SYS_ID,
GWS.SRC_LOCATION,
GWS.PART_CF_PART_NUMBER,
GWS.ANALYST_COMMENTS,
NVL(GWS.CLAIM_QUANTITY,0) AS *CLAIM_QUANTITY*,
GWS.CUSTOMER_CLAIM_SUBMISSION_DATE,
GWS.CLAIM_TYPE,
GWS.CREDIT_MEMO_NO,
NVL(GWS.CREDIT_MEMO_AMT,0) AS *CREDIT_MEMO_AMT*,
GWS.TRANS_CREATED_BY,
GWS.COMPONENT_CODE,
GWS.DATE_OF_THE_FAILURE,
GWS.DATE_PART_IN_SERVICE,
GWS.PROBLEM_CODE,
NVL(GWS.TOT_AMT_REVIEWED_BY_CA,0) AS *TOT_AMT_REVIEWED_BY_CA*,
GWS.REGION
FROM
( SELECT
TO_CHAR(A.STATUS_DATE, 'YYYYMM') AS ACCOUNT_MONTH_NO,
A.LOCATION_ID SRC_LOCATION,
A.CF_PN PART_CF_PART_NUMBER,
A.ANALYST_COMMENTS,
A.CLAIM_QUANTITY,
A.CUST_CLAIM_SUBM_DATE CUSTOMER_CLAIM_SUBMISSION_DATE,
A.CLAIM_TYPE,
A.CREDIT_MEMO_NO,
A.CREDIT_MEMO_AMT,
A.CREATED_BY TRANS_CREATED_BY,
A.FAULT_CODE COMPONENT_CODE,
A.PART_FAILURE_DATE DATE_OF_THE_FAILURE,
A.PART_IN_SERVICE_DATE DATE_PART_IN_SERVICE,
A.FAULT_CODE PROBLEM_CODE,
A.TOT_AMT_REVIEWED_BY_CA,
A.PART_BUS_AREA_ID AS BUS_AREA_ID,
A.PART_SRC_SYS_ID,
C.CUST_NAME,
C.BILL_TO_MAJOR_SALES_CHANNEL,
P.PART_NO,
P.PART_DESC,
P.PRODUCT_CLASS_CODE,
L.REGION
FROM
EDWOWN.MEDW_BIS_DTL_FACT A,
EDWOWN.EDW_MV_DB_CUST_DIM C,
EDWOWN.EDW_BUSINESS_LOCATION_DIM L,
EDWOWN.EDW_V_ACTV_PART_DIM P
WHERE
A.PART_KEY = P.PART_KEY
AND A.CUSTOMER_KEY = C.CUSTOMER_KEY
AND A.LOCATION_KEY = L.LOCATION_KEY
AND A.PART_SRC_SYS_ID = 'SOMS'
AND A.PART_BUS_AREA_ID = 'USA'
AND C.BILL_TO_MAJOR_SALES_CHANNEL <> 'IN'
) GWS,
( SELECT
A.ACCOUNT_MONTH_NO,
A.BUS_AREA_ID,
A.NET_SALES_AMT_COA_CURR,
A.GROSS_SALES_AMT_COA_CURR,
A.SHIPPED_QTY,
B.BILL_TO_MAJOR_SALES_CHANNEL,
A.SRCE_SYS_ID,
B.CUST_NAME,
D.PART_NO,
D.PART_DESC,
D.PRODUCT_CLASS_CODE,
D.SUPER_FAMILY_CODE
FROM
SASOWN.SAS_V_CORP_SHIP_FACT A,
SASOWN.SAS_V_CORP_CUST_DIM B,
SASOWN.SAS_V_CORP_LOCN_DIM C,
SASOWN.SAS_V_CORP_PART_DIM D
WHERE
C.DIVISION_CODE = A.DIVISION_CODE
AND
B.B_HIERARCHY_KEY = A.B_HIERARCHY_KEY
AND
D.PART_NO = A.PART_NO
AND
A.SRCE_SYS_ID = 'SOMS'
AND
A.BUS_AREA_ID = 'USA'
AND
B.BILL_TO_MAJOR_SALES_CHANNEL <> 'IN'
) SAS
WHERE
SAS.ACCOUNT_MONTH_NO = GWS.ACCOUNT_MONTH_NO(+)
AND SAS.BILL_TO_MAJOR_SALES_CHANNEL = GWS.BILL_TO_MAJOR_SALES_CHANNEL(+)
AND SAS.BUS_AREA_ID = GWS.BUS_AREA_ID(+)
AND SAS.PRODUCT_CLASS_CODE = GWS.PRODUCT_CLASS_CODE(+);
Thanks in advance,
aswin
You get rid of the DISTINCT.
You put sum() around your starred items.
You put all the remaining columns in a GROUP BY clause.
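Applied to the query above, the pattern is (a sketch; sas_subquery and gws_subquery stand in for the two inline views of the original query, and only a few of the columns are shown - every non-summed select-list column must appear in the GROUP BY):

```sql
SELECT sas.account_month_no,
       sas.cust_name,
       sas.part_no,
       SUM(sas.net_sales_amt_coa_curr)   AS net_sales_amt_coa_curr,
       SUM(sas.gross_sales_amt_coa_curr) AS gross_sales_amt_coa_curr,
       SUM(sas.shipped_qty)              AS shipped_qty,
       SUM(NVL(gws.claim_quantity, 0))   AS claim_quantity
FROM   sas_subquery sas,
       gws_subquery gws
WHERE  sas.account_month_no = gws.account_month_no(+)
AND    sas.bill_to_major_sales_channel = gws.bill_to_major_sales_channel(+)
GROUP BY sas.account_month_no,
         sas.cust_name,
         sas.part_no;
```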
You hope that none of the other tables has more than one row that matches the SAS table, which would cause you to count that row more than once. -
Cancel the query which uses up the full transaction log file
Hi,
We have a reindexing job that runs every Sunday. During the last run the transaction log got full, and subsequent transactions against the database errored out with 'Transaction log is full'. I want to restrict the utilization of the log file; that is, when the reindexing job reaches a certain threshold of log file utilization, the job should automatically be cancelled. Is there any way to do this?
Hello,
Instead of putting a limit on the transaction log, it would be better to find out what is causing the high utilization. Even if you find that your log is growing because of some transaction, it would be a blunder to roll it back; that is a little easier to do for an index rebuild, but if you cancel some delete operation you will end up in a mess. Please don't create a program to delete or kill a running operation.
You can create a custom alert job for transaction log file growth. That would be good.
From 2008 onwards, index rebuild is fully logged, so it sometimes causes transaction log issues. To solve this, run the index rebuild only for specific or selective tables.
Another widely accepted option is Ola Hallengren's script for index rebuilds. I suggest you try this:
http://ola.hallengren.com/
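For the custom alert job, one way (a sketch, assuming SQL Server 2012 or later) is to poll the log-space DMV on a schedule and raise an alert, or stop the rebuild job, when utilization crosses your threshold:

```sql
-- Log usage of the current database; the same figures are available
-- on older versions via DBCC SQLPERF(LOGSPACE).
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_mb,
       used_log_space_in_percent           AS used_pct
FROM   sys.dm_db_log_space_usage;
```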