Help in improving Query Performance
Hi,
I would like to know if there is a way to avoid using so many ORs, as they are causing a performance issue in our application.
The value C.x1 is dynamic here. It may have values like 'yy', 'zz', 'xx', and the number of ORs depends on the number of different C.x1 values based on user selection.
Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
OR
Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
OR
Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
OR
Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
Suggestions, please.
regards,
Kar
Select A.x1 from Table1 A, Table2 B
where A.x2 = B.x2
AND ( A.y1 = ( select C.x1 from Table C )
   OR A.y1 = ( select C.x1 from Table C )
   OR A.y1 = ( select C.x1 from Table C )
   OR A.y1 = ( select C.x1 from Table C ) )

Why use several subqueries in your WHERE predicates when they are the same? Instead of

AND ( A.y1 = ( select C.x1 from Table C ) OR A.y1 = ( select C.x1 from Table C )
   OR A.y1 = ( select C.x1 from Table C ) OR A.y1 = ( select C.x1 from Table C ) )

you can have one subquery:

AND A.y1 = ( select C.x1 from Table C )
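To make the point concrete, here is a minimal sketch of the collapsed form, using the placeholder names from the question. Note that if the user selection can produce several C.x1 values, IN (rather than =) is the usual replacement for a chain of ORs:

-- One subquery instead of four OR'd copies of the same predicate:
Select A.x1
from Table1 A, Table2 B
where A.x2 = B.x2
and A.y1 = ( select C.x1 from Table C );

-- If the subquery can return several user-selected values ('xx', 'yy', 'zz', ...):
Select A.x1
from Table1 A, Table2 B
where A.x2 = B.x2
and A.y1 in ( select C.x1 from Table C );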
Similar Messages
-
Hi,
I am executing one query and it takes 40-45 minutes. Can anybody tell me where the issue is? I have an index on the SUBSCRIPTION table.
The query is spending its time in a nested loop. Can anybody please help improve the query performance?
Select count(unique individual_id)
from SUBSCRIPTION S ,SOURCE D WHERE S.ORDER_DOCUMENT_KEY_CD=D.FULFILLMENT_KEY_CD AND prod_abbr='TOH'
and to_char(source_start_dt,'YYMM')>='1010' and mke_mag_source_type_cd='D';
select count(*) from source;        -- 3,425,131
select count(*) from subscription;  -- 394,517,271
Below is the explain plan:
Plan
SELECT STATEMENT CHOOSE Cost: 219 Bytes: 38 Cardinality: 1
13 SORT GROUP BY Bytes: 38 Cardinality: 1
12 PX COORDINATOR
11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1
10 SORT GROUP BY Bytes: 38 Cardinality: 1
9 PX RECEIVE Bytes: 38 Cardinality: 1
8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1
7 SORT GROUP BY Bytes: 38 Cardinality: 1
6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284
5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902
2 PX BLOCK ITERATOR
1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
Please suggest.
Eliminate the hidden conversion from char to number.
I don't know the indexes/partitions on the TC table. And you?
drop table test;
create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
create index idx1 on test(datum);
analyze table test compute statistics;
explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 500 | 3500 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2330213601
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | INDEX FAST FULL SCAN| IDX1 | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!))
drop index idx1;
create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
analyze table test compute statistics;
explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 227046122
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 5 | | |
|* 2 | INDEX RANGE SCAN| IDX1 | 1 | 5 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>
20120516)
explain plan for select count(*) from test where datum > trunc(sysdate);
SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3467505462
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 7 (15)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 7 | | |
|* 2 | TABLE ACCESS FULL| TEST | 9999 | 69993 | 7 (15)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("DATUM">TRUNC(SYSDATE@!)) -
How to improve query performance using infoset
I created one InfoSet that includes 4 characteristics and 3 DSOs, which are all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows up in the BEx Analyzer at all. In that case I have to close the BEx Analyzer first and then open it again, after which it shows the real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!
Hi
An InfoSet itself doesn't hold any data, so that by itself helps performance.
also go through the below tips.
Finding the query runtime
Where to find the query runtime:
SAP Note 557870 - 'FAQ BW Query Performance'
SAP Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Use aggregates and compression.
Use fewer and less complex cell definitions where possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs where possible.
3. Avoid too many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
How to read ST03N datasets from DB
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It shows dimension vs. fact table size in percent. If you take the speed of queries on a cube as the cube's performance metric, measure query runtime.
3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
Open the Aggregates...and observe VALUATION and USAGE columns.
"---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
In usage column,we will come to know how far the aggregate has been used in query.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also your query performance can depend upon criteria and since you have given selection only on one infoprovider...just check if you are selecting huge amount of data in the report
Check for the query read mode in RSRT.(whether its A,X or H)..advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing the BW Statistics Business Content: you need to install it, feed it data, and analyze via its ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics execution and come back; you will get a STATUID. Copy this and check it in the table.
This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of the aggregate in that table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
How many ways can i improve query performance?
Hi All,
can any body help me
How many ways can I improve query performance in Oracle?
Thanks,
Narasimha
As many as you can think of!!!
-
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning it into filegroups, or splitting it into multiple smaller tables?
Hi bala197164,
First, I want to point out that both partitioning the table across filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's object (say, a table named userinfo that stores user information also has address_street, address_zip, and address_province columns; we can create a new table named Address and add a foreign key in the userinfo table referencing it). In this situation, by splitting the large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. The other situation is when the table's records can be grouped easily: for example, if there is a column named year that stores the product release year, we can partition the table across filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents:
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
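As a concrete illustration of the filegroup approach described above, here is a minimal T-SQL sketch. The table, column, filegroup, and partition names are hypothetical, and the filegroups FG2010 through FG2013 are assumed to already exist in the database:

-- Partition function: splits rows into ranges on an int year column
CREATE PARTITION FUNCTION pfYear (int)
    AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);

-- Partition scheme: maps each of the four ranges to its own filegroup
CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear TO (FG2010, FG2011, FG2012, FG2013);

-- The table is created on the scheme, partitioned by release year
CREATE TABLE dbo.ProductRelease
(
    product_id   int           NOT NULL,
    release_year int           NOT NULL,
    name         nvarchar(100) NOT NULL
) ON psYear (release_year);

Queries that filter on release_year can then touch only the relevant partition(s) instead of scanning the whole table.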
TechNet
Subscriber Support
If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
Allen Li
TechNet Community Support -
How to improve Query Performance
Hi Friends...
I want to improve query performance. I need the following things:
1. What is the process to find out the performance? Any transaction codes, and how do I use them?
2. How can I know whether the query is running well or badly, i.e. from a performance perspective?
3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
4. How do I improve the query performance? After I have done the needful to improve performance, I want to see the query execution time, i.e. whether it runs faster or not.
Eg 1. Need to create aggregates.
Solution: where can I create aggregates? I'm in the production system now, so where do I need to create them, i.e. in the Development, Quality, or Production system?
Do I need to make any changes in Development, given that I'm in the Production system?
So please tell me solution for my questions.
Thanks
Ganga
Message was edited by: Ganga N
Hi Ganga,
please refer to OSS Note 557870: Frequently asked questions on query performance
also refer to
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
This is the oss notes of FAQ on query performance
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (use Expert Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools is available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number across all InfoCubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
2. Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
3. Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells")
2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
3. Check if a user exit Usage is involved in the OLAP runtime?
4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exist
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
2. Check if frontend PC are within the recommendation (RAM, CPU MHz)
3. Check if the bandwidth for WAN connection is sufficient
REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
CHEERS
RAVI -
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the query below. I tried the following methods: merge instead of update, bulk collect / forall update, the ordered hint, and creating a temp table and updating the target table from it. None of the methods I used improved performance. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where ( x, y, z ) in
      ( select a.x, a.y, a.z
        from table1 a
        where (a.x, a.y, a.z) not in
              ( select b.x, b.y, b.z
                from table2 b
                where 'O' = b.defg ) )
and mnop = 'P'
and hijkl = 'UVW';

987981 wrote:
I was trying to improve the performance of the query below. I tried the following methods: merge instead of update, bulk collect / forall update, the ordered hint, and creating a temp table and updating the target table from it. None of the methods I used improved performance.
And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. Simple example. Let's say that the 2 million row find is 1ms/row and the 2 million row write is also 1ms/row. This means a 66 minute workload. Due to the number of rows, an increase in time/row either way, will potentially have 2 million fold impact.
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers and indexes need to be fired and maintained)? Both?
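For reference, a minimal sketch of the MERGE variant the poster says he tried, using the table and column names from the question (and the same assumption as above that the mnop and hijkl predicates belong to the target table); as the reply explains, whether any rewrite helps depends on first identifying where the time actually goes:

merge into targettable tt
using ( select a.x, a.y, a.z
        from table1 a
        where (a.x, a.y, a.z) not in
              ( select b.x, b.y, b.z
                from table2 b
                where 'O' = b.defg ) ) src
on (tt.x = src.x and tt.y = src.y and tt.z = src.z)
when matched then
    update set tt.mnop = 'G'
    where tt.mnop = 'P' and tt.hijkl = 'UVW';
-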
How to improve query performance built on a ODS
Hi,
I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
Is there any method to improve or optimize the performance of a query built on an ODS?
The ODS holds a huge volume of data, ~300 million records for 2 years.
Thanx in advance,
Guru.
Hi Raj,
Here are a few tips to help you improve your query performance.
Checklist for Query Performance
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the 'Read data during navigation and when expanding the hierarchy' option to avoid reading data for hierarchy nodes that are not expanded. Reserve the 'Read all data' mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order. -
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. What we have here is a table (a big one, but we'll need only a few fields) with some information about calls; it is called table1. There is also another one, with absolutely the same structure, which is empty, and we have to transfer the records from the first one into it.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
But here comes the problem with my code... The table has A LOT of records, and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
  CURSOR cur_c21 IS
    select * from table1
    where segmentID = 'C21'
    order by start_date_of_call; -- start_date_of_call holds the beginning of a specific part of the call (DATE)
  CURSOR cur_c22 IS
    select * from table1
    where segmentID = 'C22'
    order by start_date_of_call;
  CURSOR cur_c22_2 IS
    select * from table1
    where segmentID = 'C22'
    order by start_date_of_call;
  CURSOR cur_c23 IS
    select * from table1
    where segmentID = 'C23'
    order by start_date_of_call;
  v_temp_rec_c22 cur_c22%ROWTYPE;
  v_dur          table1.duration%TYPE; -- storage for the duration of the call (NUMBER)
BEGIN
  insert into table2
    select * from table1 where segmentID = 'C1'; -- inserting the calls which are less than 30 minutes long
  -- and here starts the mess
  FOR rec_c21 IN cur_c21 LOOP -- taking the first part of the call
    v_dur := rec_c21.duration; -- recording its duration
    FOR rec_c22 IN cur_c22 LOOP -- starting to check if there is a middle part for the call
      IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
         (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
      /* if the numbers are the same and the date difference is 30 minutes then
         we have a middle part and we start searching for the next middle. */
      THEN
        v_dur := v_dur + rec_c22.duration; -- updating the new duration
        v_temp_rec_c22 := rec_c22; -- keeping the current record in another variable because it is used for the next check
        FOR rec_c22_2 IN cur_c22_2 LOOP
          IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
             (rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
          /* logic is the same as before but comparing with the last value in v_temp_rec_c22.
             And because the data in the cursors is ordered by date in ascending order
             it's easy to search for further middle parts. */
          THEN
            v_dur := v_dur + rec_c22_2.duration;
            v_temp_rec_c22 := rec_c22_2;
          END IF;
        END LOOP;
      END IF;
      EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
                (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
      /* exiting the loop if we have at least one middle part.
         (I couldn't find a cleaner way to write this, like "exit when <the above IF is true>".) */
    END LOOP;
    FOR rec_c23 IN cur_c23 LOOP
      IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
          (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
      /* we should always have one last part, so we need this check.
         Without the "v_dur != rec_c21.duration" part it would execute the code inside
         only if we have no middle parts (yes, we can have these situations in calls
         longer than 30 and less than 60 minutes). */
      THEN
        v_dur := v_dur + rec_c23.duration;
        rec_c21.duration := v_dur; -- updating the duration
        rec_c21.segmentID := 'C1';
        INSERT INTO table2 VALUES rec_c21; -- inserting the whole call into table2
      END IF;
      EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
                 (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
      -- exit the loop when the last part has been found.
    END LOOP;
  END LOOP;
END;

I'm using Oracle 11g and version 1.5.5 of SQL Developer.
It's my first post here so hope this is the right sub-forum.
I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!

Atiel wrote:
Thanks for the suggestion but the thing is that segmentID must stay the same for all. The data in this field is just there to tell us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.

Well, that's not a problem. You just hard-code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>
Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows).
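A minimal sketch of that aggregation idea, reusing the hypothetical col1..col3 columns from the example above and simply taking the maximum of each extra column within the calling/called number group:

select 'C1' as segmentid
      ,start_date_of_call, duration, callingnumber, callednumber
      ,col1, col2, col3
from (
  select distinct
         min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
        ,sum(duration) over (partition by callingnumber, callednumber) as duration
        ,max(col1) over (partition by callingnumber, callednumber) as col1
        ,max(col2) over (partition by callingnumber, callednumber) as col2
        ,max(col3) over (partition by callingnumber, callednumber) as col3
        ,callingnumber
        ,callednumber
  from table1
)
-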
How to improve query performance of an ODS- with 320 million records
Issue:
The reports are giving time-outs during execution.
Scenario:
We have an ODS holding approximately 320 million records.
The reports are based on:
The ODS, and
InfoSets based on this ODS.
These reports are giving time-outs during execution.
Few facts about this ODS:
There are around 75 restricted and calculated key figures used in the query definition.
We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
This is in a BW 3.5 environment.
Few things we tried:
Secondary indices were created on the fields appearing in the selection screens of the reports. That has not worked.
The restriction/calculation logic in the query definition can be moved to the backend. Will that make a difference?
Question:
Can you suggest ways to improve the query performance of this ODS?
Your immediate response is highly appreciated. Thanks in advance.

Hey!
I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful. Why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
Is there a way to delete some data from the ODS?
Maybe you will upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
You could also try precalculation, as Sam says. This is possible with the reporting agent or Information Broadcasting. Then you have it in your cache. Make sure your cache is large enough. Maybe you can use a table or something.
Do you just need to run one report, or some special reports at a special time? Maybe you can make an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61)
Best regards,
Peter -
Steps to Improve Query Performance
Hi All,
I have a request from User to improve the Query performance for few of the sales reports. Please let me know the different steps I need to perform to improve it.
The data is coming from R/3 and the query is on a Multicube modeled upon three Cubes.
It takes a lot of time to open and refresh the report, and further to execute it. The data volume is not really huge, but it still takes time.
Tell me which areas I need to check to understand the issue, and how I should proceed to improve the performance.
It would be great help if you can help me with as much details as possible.
Thanks in advance
Pavan
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695 -
Help to improve expdp performance and compression
Hi,
We have an Oracle Standard Edition, which does not support the parallel and compression features of expdp.
The expdp dump file is around 90 GB and takes 45 minutes to create on the production server. The script then compresses the file with the gzip utility, and the compression takes 80 minutes.
Copying the compressed file from production to the staging server takes another 47 minutes.
We have automated the process, but expdp + compression + copy takes a long time (around 3 hours). On the staging server it takes more than 4 hours to create the staging DB.
Is there any way I can improve the performance of these 3 operations?
Can I compress while the file is being exported? I tried using pipes in Unix, and that doesn't work for expdp.
We don't want to use a network link.
Does expdp write the file sequentially? If so, can I start gzipping in parallel while the files are exported?
Also, I tried compressing with the gzip -1 option, but it increased the file size by 30% and consequently the copy time to the staging server.
Please help
Thanks,
Bharani J
Edited by: 973089 on Nov 27, 2012 9:40 AM
Edited by: 973089 on Nov 27, 2012 9:41 AM
Hi,
Why 'do not support parallel'?
I understand you don't want to use a database link; I had this problem here (I used expdp).
This is what I've done, in a script:
a full logical backup using expdp,
bzip2 to compress,
and a transfer to the destination machine.
It would have been far easier if I could have used the database link, but I couldn't;
however, I used parallel in the expdp command.
Hope you find a good solution. -
How to improve query performance when reporting on ods object?
Hi,
Can anybody tell me how to improve my query performance when reporting on an ODS object?
Thanks in advance,
Ravi Alakuntla.
Hi Ravi,
Check these links, which may cater to your requirement:
Re: performance issues of ODS
Which criteria to follow to pick InfoObj. as secondary index of ODS?
PDF on BW performance tuning,
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Mani. -
Removing Stats improves query performance
Hi,
Today I got a rare question regarding stats collection:
in some cases, removing existing stats will improve query performance.
What is the reason behind this?
After a rigorous search I have landed here to hear from you all.
Thanks in advance.
Vara
Hi,
Before 11gR2 and adaptive cursor sharing, skewed data could result in a wrong execution plan (histograms and bind variable peeking could not fix it).
Sometimes statistics cannot be fresh enough, for example with temporary tables, but also for regular tables which are massively loaded/modified, resulting in a poor execution plan.
Also, the optimizer might use guesses for complex predicates, which can have the same consequences.
When no statistics are available, dynamic sampling may give a more accurate estimation of the cardinalities than statistics.
https://blogs.oracle.com/optimizer/entry/dynamic_sampling_and_its_impact_on_the_optimizer
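For illustration, a minimal sketch of invoking dynamic sampling explicitly on one query via the standard hint (the table t, its columns, and sampling level 4 are all hypothetical; valid levels run from 0 to 10):

-- With no (or deleted) statistics on t, the hint makes the optimizer sample
-- the table at parse time instead of relying on stale or missing stats:
select /*+ dynamic_sampling(t 4) */ count(*)
from t
where complex_col1 = 'X'
and complex_col2 > 100;
-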
Needed help to improve the performance of a select query?
Hi,
I have been preparing a report which involves data fetched from 4 to 5 different tables, and calculations also have to be performed on some columns.
I planned to write a single cursor to populate one temp table. I have used INLINE VIEWs and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
--null "CR_TURNOVER",
--null "DR_TURNOVER",
null "DR_OD_DAYS",
(select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
(case when (p_applDate >= sca.tod_limit_start_date and
p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
sca.tod_limit else 0 end) dd
from getm_facility gf, sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
--sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'D'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "DR_INTEREST",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'C'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "CR_INTEREST",
(select sum(amt) from ictb_entries_history ieh
where ieh.brn = sca.branch_code
and ieh.acc = sca.cust_ac_no
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')) "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists (select 1 from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists (select 1 from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
(select maturity_date from ictm_acc ia
where ia.brn = sca.branch_code
and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
sca.ac_stat_block "BLOCKED_ACC", Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY", t
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca, sttm_customer sc
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and ( exists (select 1 from actb_daily_log adl
where adl.ac_no = sca.cust_ac_no
and adl.ac_branch = sca.branch_code
and adl.trn_dt = p_applDate
and adl.auth_stat = 'A')
or exists (select 1 from catm_amount_blocks cab
where cab.account = sca.cust_ac_no
and cab.branch = sca.branch_code
and cab.effective_date = p_applDate
and cab.auth_stat = 'A')
or exists (select 1 from ictm_td_closure_renew itcr
where itcr.acc = sca.cust_ac_no
and itcr.brn = sca.branch_code
and itcr.renewal_date = p_applDate)
or exists (select 1 from sttm_ac_stat_change sasc
where sasc.cust_ac_no = sca.cust_ac_no
and sasc.branch_code = sca.branch_code
and sasc.status_change_date = p_applDate
and sasc.auth_stat = 'A')
or exists (select 1 from cstb_acc_brn_trfr_log cabtl
where cabtl.branch_code = sca.branch_code
and cabtl.cust_ac_no = sca.cust_ac_no
and cabtl.process_status = 'S'
and cabtl.process_date = p_applDate)
or exists (select 1 from sttbs_provision_history sph
where sph.branch_code = sca.branch_code
and sph.cust_ac_no = sca.cust_ac_no
and sph.esn_date = p_applDate)
or exists (select 1 from sttms_cust_account_dormancy scad
where scad.branch_code = sca.branch_code
and scad.cust_ac_no = sca.cust_ac_no
and scad.dormancy_start_dt = p_applDate)
or sca.maker_dt_stamp = p_applDate
or sca.status_since = p_applDate
);
l_tb_acc_det ty_tb_acc_det_int;
l_brnrec cvpks_utils.rec_brnlcy;
l_acbr_lcy sttms_branch.branch_lcy%type;
l_lcy_amount actbs_daily_log.lcy_amount%type;
l_xrate number;
l_dt_rec sttm_dates%rowtype;
l_acc_rec sttm_cust_account%rowtype;
l_acc_stat_row ty_r_acc_stat;
Edited by: user13710379 on Jan 7, 2012 12:18 AM
I see it more like shown below (possibly with no inline selects).
Try to get rid of the remaining inline selects (left as an exercise ;) )
and rewrite the traditional joins as ANSI joins, as problems might arise using mixed syntax. I have to leave, so I don't have time to complete the query.
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
-- null "CR_TURNOVER",
-- null "DR_TURNOVER",
null "DR_OD_DAYS",
w.dd "OD_LIMIT",
-- sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
x.dr_int "DR_INTEREST",
x.cr_int "CR_INTEREST",
y.fee_amt "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
then 1
else 0
end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
then 1
else 0
end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists(select 1
from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate
or exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
) "RENEWED_OR_ROLLOVER",
m.maturity_date "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
-- sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY", t
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca,
sttm_customer sc,
(select sca.cust_ac_no,
sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
case when p_applDate >= sca.tod_limit_start_date
and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
then sca.tod_limit else 0
end
) dd
from sttm_cust_account sca,
getm_facility gf,
sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no
group by sca.cust_ac_no
) w,
(select acc,
brn,
sum(decode(drcr,'D',amt)) dr_int,
sum(decode(drcr,'C',amt)) cr_int
from ictb_entries_history ieh
where ent_dt between p_st_dt and p_ed_dt
and drcr in ('C','D')
and liqn = 'Y'
and entry_passed = 'Y'
and exists(select null
from ictm_pr_int ipi,
ictm_rule_frm irf
where ipi.rule = irf.rule_id
and ipi.product_code = ieh.prod
and irf.book_flag = 'B'
group by acc,brn
) x,
(select acc,
brn,
sum(amt) fee_amt
from ictb_entries_history ieh
where ieh.ent_dt between p_st_dt and p_ed_dt
and exists(select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C'
group by acc,brn
) y,
ictm_acc m,
(select sca.cust_ac_no,
sca.branch_code,
coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
nvl2(coalesce(t2.account,t2.account),'exists',null),
nvl2(coalesce(t3.acc,t3.brn),'exists',null),
nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
decode(sca.maker_dt_stamp,p_applDate,'exists'),
decode(sca.status_since,p_applDate,'exists')
) existence
from sttm_cust_account sca
left outer join
(select ac_no,ac_branch
from actb_daily_log
where trn_dt = p_applDate
and auth_stat = 'A'
) t1
on (sca.cust_ac_no = t1.ac_no
and sca.branch_code = t1.ac_branch
left outer join
(select account, branch
from catm_amount_blocks
where effective_date = p_applDate
and auth_stat = 'A'
) t2
on (sca.cust_ac_no = t2.account
and sca.branch_code = t2.branch
left outer join
(select acc,brn
from ictm_td_closure_renew itcr
where renewal_date = p_applDate
) t3
on (sca.cust_ac_no = t3.acc
and sca.branch_code = t3.brn
left outer join
(select cust_ac_no,branch_code
from sttm_ac_stat_change
where status_change_date = p_applDate
and auth_stat = 'A'
) t4
on (sca.cust_ac_no = t4.cust_ac_no
and sca.branch_code = t4.branch_code
left outer join
(select cust_ac_no,branch_code
from cstb_acc_brn_trfr_log
where process_date = p_applDate
and process_status = 'S'
) t5
on (sca.cust_ac_no = t5.cust_ac_no
and sca.branch_code = t5.branch_code
left outer join
(select cust_ac_no,branch_code
from sttbs_provision_history
where esn_date = p_applDate
) t6
on (sca.cust_ac_no = t6.cust_ac_no
and sca.branch_code = t6.branch_code
left outer join
(select cust_ac_no,branch_code
from sttms_cust_account_dormancy
where dormancy_start_dt = p_applDate
) t7
on (sca.cust_ac_no = t7.cust_ac_no
and sca.branch_code = t7.branch_code
) z
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and sca.cust_ac_no = w.cust_ac_no
and sca.cust_ac_no = x.acc
and sca.branch_code = x.brn
and sca.cust_ac_no = y.acc
and sca.branch_code = y.brn
and sca.cust_ac_no = m.acc
and sca.branch_code = m.brn
and sca.cust_ac_no = z.sca.cust_ac_no
and sca.branch_code = z.branch_code
and z.existence is not null
Regards
Etbin