Help to assess the performance impact of OAS and WLS integration.
If this is not the right forum, please direct me to the right one for the clarification below.
We have developed a small application using ADF technology, and it can be integrated with the existing EBS R12 application. The EBS R12 application will run in Oracle Application Server (OAS) and the ADF application will run in WebLogic Server (WLS). The customer implementing this would like to know the impacts below before they start.
1) When Oracle Application Server connects to WebLogic Server (WLS) to display the ADF application, what is the resource consumption on WLS for handling this ADF application? They would like to know the WLS resources used by it.
2) How does the network/data traffic impact the utilization of WebLogic Server and Oracle Application Server?
Could you please help us provide this feedback to the customer?
Ans: Is there a way to do load testing in our internal development instance to assess the resource consumption, CPU usage and the network traffic?
Thanks,
Sugumar.
Could someone help us?
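One way to start on the load-testing question above: drive the ADF page with a small concurrent client and record throughput, while watching CPU, memory and network on both the OAS and WLS hosts (via the WebLogic console or OS tools such as vmstat/netstat). A minimal sketch in Python; the request function is a placeholder you would replace with an HTTP GET of your ADF page URL (the URL shown in the comment is hypothetical):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request_fn, concurrency=10, total=100):
    """Call `request_fn` `total` times from a pool of `concurrency` workers
    and report elapsed wall time and requests per second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # Each worker performs one request; replace request_fn with e.g.
        # urllib.request.urlopen("http://wls-host:7001/your-adf-page")
        list(pool.map(lambda _: request_fn(), range(total)))
    elapsed = time.perf_counter() - start
    return {"requests": total, "seconds": elapsed, "rps": total / elapsed}
```

While this runs, sample resource usage on the WLS box to answer question 1) and watch interface traffic to answer question 2) empirically.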
Similar Messages
-
Need help in optimising the performance of a query
Need help in optimising the performance of a query. Below is the query, executed against TABLE_A, TABLE_B and TABLE_C with row counts of 10M, 10M and 42 (only) respectively; it takes around 5-7 minutes to return 40 rows:
SELECT DISTINCT a.T_ID_, a.FIRSTNAME, b.T_CODE, b.PRODUCT,
       CASE WHEN TRUNC(b.DATE) + 90 = TRUNC(SYSDATE) THEN -90
            WHEN TRUNC(b.DATE) + 30 = TRUNC(SYSDATE) THEN -30
            ELSE 0 END AS T_DATE
FROM TABLE_B b
INNER JOIN TABLE_A a ON (a.T_ID_ = b.T_ID_)
LEFT JOIN TABLE_C c ON b.PRODUCT = c.PRODUCT
WHERE b.STATUS = 'T'
  AND b.TYPE = 'ACTION'
  AND TRUNC(b.DATE) + 1 = TRUNC(SYSDATE)
  AND b.PRODUCT = 2;
Note: Indices on the join columns are available in the respective tables
Please let me know if there is any better way to write it.
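One detail worth checking in the query above: `TRUNC(b.DATE) + 1 = TRUNC(SYSDATE)` applies a function to the indexed column, which blocks a plain index range scan on it. Rewriting it as a half-open range over the raw column keeps the index usable. A small illustration using Python's sqlite3 as a stand-in for Oracle (the table, column names and data here are invented; dates stored as ISO strings):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_b (t_id INTEGER, call_date TEXT)")
conn.execute("CREATE INDEX ix_b_date ON table_b (call_date)")
conn.executemany("INSERT INTO table_b VALUES (?, ?)", [
    (1, "2011-08-17 09:15:00"),   # yesterday -> should match
    (2, "2011-08-16 12:00:00"),   # two days ago -> no match
    (3, "2011-08-17 23:59:59"),   # yesterday -> should match
])

today = date(2011, 8, 18)
# "TRUNC(b.DATE) + 1 = TRUNC(SYSDATE)" (the call was yesterday) rewritten as
# a half-open range, so the index on call_date stays usable:
lo = (today - timedelta(days=1)).isoformat()   # >= yesterday 00:00
hi = today.isoformat()                         # <  today     00:00
hits = conn.execute(
    "SELECT t_id FROM table_b WHERE call_date >= ? AND call_date < ?",
    (lo, hi)).fetchall()
```

In Oracle the equivalent predicate would be `b.DATE >= TRUNC(SYSDATE) - 1 AND b.DATE < TRUNC(SYSDATE)`.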
Edited by: 862944 on Aug 18, 2011 9:52 AM
862944 wrote:
[quoted query snipped -- same as above]
[When Your Query Takes Too Long|https://forums.oracle.com/forums/thread.jspa?messageID=1812597] -
Sorry, but I have a problem: my iPad has been stolen and I want help to find out where it is. Can I send the serial number or anything? I want help to locate my iPad. Thanks. My name is Osama Rezk, I'm from Egypt, my iCloud ID
<Email Edited by Host>
You will only be able to track your iPad if you have Find My iPhone active and the iPad is connected to a network.
Take a look at this link, http://support.apple.com/kb/PH2580 -
The performance effect of dbconsole and emagent!!
Hi, all.
How much is the performance effect of dbconsole and emagent?
Thanks and Regards.
Check the installation guide for your platform for the memory requirements of the EM components.
-
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the query below. I tried the following methods: MERGE instead of UPDATE, BULK COLLECT / FORALL updates, an ORDERED hint, and creating a temp table and updating the target table from it. None of these methods improved performance. The update touches 2 million records and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where ( x, y, z ) in
( select a.x, a.y, a.z
from table1 a
where (a.x, a.y, a.z) not in
( select b.x, b.y, b.z
from table2 b
where 'O' = b.defg ) )
and mnop = 'P'
and hijkl = 'UVW';
987981 wrote:
I was trying to improve the performance of the query below. I tried MERGE instead of UPDATE, BULK COLLECT / FORALL updates, an ORDERED hint, and updating the target table via a temp table. None of these methods improved performance.
And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
The failure to find a single faster method among the approaches you tried suggests that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
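As a side note (not raised in the original thread): one concrete correctness check worth doing before tuning further is the NULL behaviour of `NOT IN`. If the subquery can return NULL, `NOT IN` silently matches nothing, so a rewrite to `NOT EXISTS` is not just a performance change. Illustrated with Python's sqlite3 as a stand-in for Oracle:

```python
import sqlite3

c = sqlite3.connect(":memory:")
c.executescript("""
    CREATE TABLE t1 (x INTEGER);
    CREATE TABLE t2 (x INTEGER);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (2), (NULL);
""")
# NOT IN: the NULL in t2 makes every comparison UNKNOWN, so no row qualifies.
not_in = c.execute(
    "SELECT x FROM t1 WHERE x NOT IN (SELECT x FROM t2)").fetchall()
# NOT EXISTS: NULLs in t2 simply never match, so row 1 is returned.
not_exists = c.execute(
    "SELECT x FROM t1 WHERE NOT EXISTS "
    "(SELECT 1 FROM t2 WHERE t2.x = t1.x)").fetchall()
```

The same three-valued-logic rule applies in Oracle, so the two forms of the update predicate can return different row sets as well as different plans.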
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. A simple example: say that finding each of the 2 million rows costs 1 ms/row, and writing each of the 2 million rows also costs 1 ms/row. That is already a 66-minute workload. Because of the row count, any increase in the time per row is amplified 2-million-fold.
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to fire and indexes need to be maintained)? Both? -
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. We have a big table (we'll only need a few of its fields) with some information about calls, called table1. There is also another table, with exactly the same structure, which is empty, and we have to transfer the records from the first one into it.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
But here comes the problem with my code... The table has A LOT of records and this solution, despite the fact that it works (at least in the tests I've made so far), it's REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
CURSOR cur_c21 IS
select * from table1
where segmentID = 'C21'
order by start_date_of_call; -- start_date_of_call holds the start of a specific part of the call (DATE)
CURSOR cur_c22 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c22_2 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c23 IS
select * from table1
where segmentID = 'C23'
order by start_date_of_call;
v_temp_rec_c22 cur_c22%ROWTYPE;
v_dur table1.duration%TYPE; -- stores the running duration of the call (NUMBER)
BEGIN
insert into table2
select * from table1 where segmentID = 'C1'; -- insert the calls which are less than 30 minutes long
-- and here starts the mess
FOR rec_c21 IN cur_c21 LOOP -- take the first part of the call
v_dur := rec_c21.duration; -- record its duration
FOR rec_c22 IN cur_c22 LOOP -- check whether there is a middle part for the call
IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
/* if the numbers are the same and the date difference is 30 minutes,
then we have a middle part and we start searching for the next middle part */
THEN
v_dur := v_dur + rec_c22.duration; -- update the running duration
v_temp_rec_c22 := rec_c22; -- keep the current record in another variable; it is used for the next check
FOR rec_c22_2 IN cur_c22_2 LOOP
IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
(rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
/* the logic is the same as before, but comparing with the last value in v_temp_rec_c22.
Because the data in the cursors is ordered by date in ascending order, it's easy to search for more middle parts. */
THEN
v_dur := v_dur + rec_c22_2.duration;
v_temp_rec_c22 := rec_c22_2;
END IF;
END LOOP;
END IF;
EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
/* exit the loop if we have at least one middle part
(I couldn't find a cleaner way to write this, like "exit when (the above IF is true)") */
END LOOP;
FOR rec_c23 IN cur_c23 LOOP
IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
/* we should always have one last part, so we need this check.
If we didn't have the "v_dur != rec_c21.duration" part, the code inside would execute only if we had no middle parts
(yes, we can have these situations in calls longer than 30 and shorter than 60 minutes). */
THEN
v_dur := v_dur + rec_c23.duration;
rec_c21.duration := v_dur; -- update the duration
rec_c21.segmentID := 'C1';
INSERT INTO table2 VALUES rec_c21; -- insert the whole call into table2
END IF;
EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
-- exit the loop when the last part has been found
END LOOP;
END LOOP;
END;
I'm using Oracle 11g and version 1.5.5 of SQL Developer.
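For comparison, the stitching rule described above (first part C21, middle parts C22 every 30 minutes, final part C23) can be done in a single ordered pass instead of nested cursor loops, which is roughly O(n) instead of O(n²). A sketch in Python with a deliberately simplified record layout (segment id, start time, duration in seconds, calling number, called number); the real table's 120 columns are reduced for illustration:

```python
from datetime import datetime, timedelta

SEG = timedelta(minutes=30)

def stitch(segments):
    """Merge C21/C22/C23 parts (same caller/callee, parts 30 minutes apart)
    into single whole-call records; C1 records pass through unchanged.
    `segments` must be sorted by start time."""
    calls = []
    open_calls = {}  # (caller, callee) -> [call_start, total_duration, expected_next_start]
    for seg_id, start, dur, caller, callee in segments:
        key = (caller, callee)
        if seg_id == "C1":
            calls.append((start, dur, caller, callee))
        elif seg_id == "C21":
            open_calls[key] = [start, dur, start + SEG]
        else:  # "C22" middle part or "C23" final part
            call = open_calls.get(key)
            if call is not None and call[2] == start:
                call[1] += dur               # accumulate duration
                call[2] = start + SEG        # where the next part should begin
                if seg_id == "C23":          # final part closes the call
                    calls.append((call[0], call[1], caller, callee))
                    del open_calls[key]
    return calls
```

This assumes at most one open call chain per caller/callee pair at a time; if overlapping calls between the same numbers are possible, the key would need the call start as well.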
It's my first post here so hope this is the right sub-forum.
I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!
Atiel wrote:
Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
Well that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>
Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows). -
Need help in analyzing the performance aspects of compounding
Hi all,
I am analyzing the performance aspects of compounding.
Can anyone guide me on how to go about it? When I displayed this table, it showed some timestamps and a validity period for the queries. I am having some difficulty understanding the table details.
Can you please guide me regarding this?
Also, can anyone help me with how to analyze the OLAP processor performance for the queries that use compounding? -
Required help in improving the performance
Hi, I am very new to Java. I am working with an API where the records are processed in a for loop, and it takes time: almost 35 minutes to process 10k records. I have incorporated it in my APEX application, and when multiple users use it at the same time the performance drops further, to almost an hour. With the help of online tutors I was able to incorporate oracle.sql.ARRAY, but could not improve the performance.
My first requirement: is there any way I can process the records in parallel batches? If not, how do I increase the performance? I was told that enabling setautoindex and setautobuffer can increase performance, but I could not get that to work. Can anyone help me on this?
Hi
I apologize for not adding the process in the initial phase
The task for me to pass the records from my table to api, and update the results given by the api, the steps involved are
1) I have created type of strarray and have assigned the same to rec1,rec2 in my stored procedure
2) rec1 is the input details which consist of batch_id unique identifier by batch,row_id unique identifier for the batch and the contact address information.
3) rec2 is the output for rec1, where i will get the batch_id,row_id, and formatted address in output form
4) I will capture the output in a temp table and update these results to the input table
5) With this stored procedure, I am not able to allow parallel transactions, i.e. multiple users
6) As the records are processed row by row, it consumes time
Here is the code, Please let me know if you need more information on this.
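Independent of the Java specifics below, the row-by-row round trip is usually the bottleneck in this kind of job. Splitting the records into batches and processing the batches from a small worker pool is one common pattern; a sketch in Python, where `worker` is a placeholder for the per-batch address-formatting API call:

```python
from concurrent.futures import ThreadPoolExecutor

def process_in_batches(records, batch_size, worker, max_workers=4):
    """Split `records` into batches of `batch_size` and run `worker`
    (the stand-in for the per-batch API call) on each batch from a pool
    of `max_workers` threads. Results keep input order."""
    batches = [records[i:i + batch_size]
               for i in range(0, len(records), batch_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_batch = list(pool.map(worker, batches))  # map preserves batch order
    return [row for batch in per_batch for row in batch]
```

The same shape carries over to Java (an ExecutorService over sublists); whether it helps depends on the API call being safe to invoke concurrently.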
PROCESS_INT (REC_IN, REC_OUT); which will call the following process
public static int process(oracle.sql.ARRAY rec_in, oracle.sql.ARRAY[] rec_out) {
    // If everything has been initialized then we want to write some data
    // to the socket we have opened a connection to
    if (m_clientSocket != null && m_out != null && m_in != null) {
        try {
            String[] record = (String[]) rec_in.getArray();
            for (int i = 0; i < 9; i++) {
                if (record[i] != null)
                    m_out.println(record[i]);
                else
                    m_out.println("");
            }
            m_out.flush();
            // Read the result
            for (int i = 0; i < 14; i++) {
                record[i] = m_in.readLine();
            }
            Connection conn = new OracleDriver().defaultConnection();
            ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor(rec_in.getSQLTypeName(), conn);
            rec_out[0] = new ARRAY(descriptor, conn, record);
        } catch (UnknownHostException e) {
            System.err.println("Unable to connect to lqtListener: " + e);
            return -1;
        } catch (IOException e) {
            System.err.println("IOException in process: " + e);
            return -2;
        } catch (SQLException e) {
            System.err.println("SQLException in process: " + e);
            return -4;
        }
    } else {
        return -3;
    }
    return 0;
} -
When I plug a LaCie external hard drive into my new Mac mini (OS 10.9.1) to work with Time Machine, it drastically affects the performance of the internet and email; turn off Time Machine and unplug the hard drive, and internet and email go back to normal. Why?
Maybe:
http://www.macobserver.com/tmo/article/usb-3.0-hard-drives-can-cause-wi-fi-interference -
I need your help to release the pin from my bb and the release of this reported sim cards.
I am writing from Venezuela. I bought a BlackBerry phone on Amazon, and when I went to the phone company Movistar I found the phone is reported stolen, so I cannot use any of the BlackBerry functions or plan. They say I have to contact the provider in the United States to have it released. This has me very upset: how is it possible that they sold a stolen phone, when I trusted this company for my daughter's Christmas gift, and now the phone is of no use to me? I need your help to release it. Please point me to any procedure to follow if this problem can be solved; Amazon tells me they cannot help, and I got this email to contact them.
I trust that you can help me, please answer this e-mail as soon as possible.
Atte,
Maria -
*Shocked* by the performance of Canon DPP, and DPP workflow with Aperture
I love Aperture. My brother mercilessly hounded me for two years, and when Aperture 2 came out, I gave it a shot. Aperture 3, despite my nightmare conversion story, has been a dream come true . . . until I discovered sharpening.
In my quest to get sharper photos, I've toyed with image stabilization, tripods, higher shutter speeds, steadying the camera, and depth of field, and even bought several professional lenses. My photos STILL did not look as sharp as those I saw in galleries and online. But wait . . . my JPEG files from my sporting events did . . .
I read that RAW files are not sharp, and sharpening is applied to JPEGs on the camera. But why is Aperture and my MBP not able to sharpen photos well using any one of the three sharpening sliders or the sharpening tool? I was then led to DPP, kicking and screaming. What I discovered was truly amazing.
Forget about the personal opinions with warmth and contrast between Aperture, ACR, and DPP. DPP is the unquestioned leader in producing sharp photos from RAW images. You drag the slider . . . it's sharp. It's even sharper than the photos I've spent 20 minutes sharpening in CS5 with sharpening masks, sharpening tool, etc. The DPP tool JUST WORKS. Even noise with high ISO is MUCH improved. High noise still can use an expensive tool to correct, but still MUCH better than Aperture.
Until Canon reveals their secrets to Apple and Adobe for RAW processing, I need to figure out a way to use DPP for RAW processing.
For those that use DPP for RAW processing, how to you work it into your workflow? I want Aperture to be a one-stop shop, but I don't want to store the original RAW, the DPP-edited RAW, and potentially a TIFF for additional editing and noise reduction?
Do you sort in Aperture first? Do you convert in DPP first? How do you maintain file integrity, and at the same time, minimize disk space usage?
If you no longer use DPP, please tell me why, and how you've worked around it?
All I can say is, either I've been in the weeds all this time, or your skills with sharpening are better than most.
A couple of questions:
1-What do you use for Sharpening and Edges under "RAW Fine Tuning"? You thankfully shared your settings for Edge Sharpen (^s).
2-What camera and RAW format are you using (this may help me fine-tune my preferences). I've got a 7D and primarily shoot MRAW. (Not the best for a couple reasons, but I don't need or want the large file sizes.)
To be sure, default sharpening in Aperture is pretty bad. And I have played with sharpening going on 40 hours now over several months. I could not get a good result.
Your documenting the exact settings and sharpening tool is what helped me get past whatever I was doing before. Maybe I was thrown by the higher default contrast in DPP. I'm now able to produce a better result in Aperture than in DPP, or even than my laboriously-sharpened photos in Photoshop. There are some tradeoffs in each, but I didn't want to use DPP as part of my workflow. Now that I've used it more, I'm convinced I don't need to!
And the definition setting is very useful. It's the only mainstream adjustment I've never really used. -
Need help in establishing the connection between BO federator and Oracle DB
Hi Experts,
I am working on BO federator and now I need to establish the connection between BO federator and the oracle db to use this as the source for the federator,
I know we need to download the respective oracle DB connect .jar file from the oracle web site and place it under the drivers section(folder) of the BO Federator.
Now I need to understand what is/are the next steps I need to follow and what are the values for the following parameters which are required during establishing the connectivity while creation of data sources from federator end.
Host Name:
Port:
SID:
Schema:
Please share the details, any help will be highly appreciated.
Thanks with regards,
Vinay
Hi,
Please find below details.
Host Name: Server Name or IP Address (Oracle)
Port: (oracle Server Port)
SID:SID is a unique name for an Oracle database instance. -> To switch between Oracle databases, users must specify the desired SID <-. The SID is included in the CONNECT DATA parts of the connect descriptors in a TNSNAMES.ORA file
Schema: no need to enter(Blank)
Thanks,
Amit -
Help to boost the performance of my proxy server
Out of my personal interest, I am developing a proxy server in java for enterprises.
I've made the design as such the user's request would be given to the server through the proxy software and the response would hit the user's browsers through the proxy server.
User - > Proxy software - > Server
Server -> Proxy software -> User
I've designed the software in java and it is working
fine with HTTP and HTTPS requests. The problem I am worried about is this:
for each user request I am creating a thread to serve it. So if 10000 users access the proxy server concurrently,
I fear my proxy server would consume all the resources on the machine where the proxy software is installed, because I'm using threads for serving the request and response.
Is there any alternative solution for this in java?
Somebody insisted I use Java NIO. I'm confused. I need a solution to keep my proxy server clear of performance issues. I want my proxy server to be the first proxy server entirely written in Java with good performance, suitable even for large organisations (like Sun Java Web Proxy Server, which is written in C).
How could I boost the performance? I want the users to have no sense of accessing the remote server through a proxy; it should feel like accessing the web server without one. There should be no performance lag. As fast as C. I need to do this in Java. Please help.
I think having a thread per request is fine.
Maybe I got it wrong, but I thought the point in using NIO with sockets was to get rid of the 1 thread per request combo?
Correct. A server which has one thread per client doesn't scale well.
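To make the NIO point concrete: the event-driven model multiplexes many connections onto one thread with a selector, instead of dedicating a thread per connection. A minimal sketch using Python's `selectors` module as an analogue (the uppercasing stands in for real proxy forwarding); Java's equivalent is `java.nio.channels.Selector` with non-blocking SocketChannels:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def serve_ready(limit=100):
    """Handle up to `limit` ready events on registered sockets with a single
    thread: read what is available and echo it back uppercased."""
    handled = 0
    while handled < limit:
        events = sel.select(timeout=0.1)
        if not events:
            break                    # nothing ready; a real server keeps looping
        for key, _ in events:
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data.upper())  # stand-in for forwarding upstream
            else:                                  # peer closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()
            handled += 1
    return handled
```

A middle ground, if a full event loop is too big a rewrite, is a bounded thread pool: 10000 connections served by, say, 200 pooled threads rather than 10000 dedicated ones.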
Kaj -
Needed help to improve the performance of a select query?
Hi,
I have been preparing a report that fetches data from 4 or 5 different tables, and calculations have to be performed on some columns as well.
I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
--null "CR_TURNOVER",
--null "DR_TURNOVER",
null "DR_OD_DAYS",
(select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
(case when (p_applDate >= sca.tod_limit_start_date and
p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
sca.tod_limit else 0 end) dd
from getm_facility gf, sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
--sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'D'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "DR_INTEREST",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'C'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "CR_INTEREST",
(select sum(amt) from ictb_entries_history ieh
where ieh.brn = sca.branch_code
and ieh.acc = sca.cust_ac_no
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')) "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists (select 1 from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists (select 1 from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
(select maturity_date from ictm_acc ia
where ia.brn = sca.branch_code
and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
sca.ac_stat_block "BLOCKED_ACC", -- Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca, sttm_customer sc
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and ( exists (select 1 from actb_daily_log adl
where adl.ac_no = sca.cust_ac_no
and adl.ac_branch = sca.branch_code
and adl.trn_dt = p_applDate
and adl.auth_stat = 'A')
or exists (select 1 from catm_amount_blocks cab
where cab.account = sca.cust_ac_no
and cab.branch = sca.branch_code
and cab.effective_date = p_applDate
and cab.auth_stat = 'A')
or exists (select 1 from ictm_td_closure_renew itcr
where itcr.acc = sca.cust_ac_no
and itcr.brn = sca.branch_code
and itcr.renewal_date = p_applDate)
or exists (select 1 from sttm_ac_stat_change sasc
where sasc.cust_ac_no = sca.cust_ac_no
and sasc.branch_code = sca.branch_code
and sasc.status_change_date = p_applDate
and sasc.auth_stat = 'A')
or exists (select 1 from cstb_acc_brn_trfr_log cabtl
where cabtl.branch_code = sca.branch_code
and cabtl.cust_ac_no = sca.cust_ac_no
and cabtl.process_status = 'S'
and cabtl.process_date = p_applDate)
or exists (select 1 from sttbs_provision_history sph
where sph.branch_code = sca.branch_code
and sph.cust_ac_no = sca.cust_ac_no
and sph.esn_date = p_applDate)
or exists (select 1 from sttms_cust_account_dormancy scad
where scad.branch_code = sca.branch_code
and scad.cust_ac_no = sca.cust_ac_no
and scad.dormancy_start_dt = p_applDate)
or sca.maker_dt_stamp = p_applDate
or sca.status_since = p_applDate)
l_tb_acc_det ty_tb_acc_det_int;
l_brnrec cvpks_utils.rec_brnlcy;
l_acbr_lcy sttms_branch.branch_lcy%type;
l_lcy_amount actbs_daily_log.lcy_amount%type;
l_xrate number;
l_dt_rec sttm_dates%rowtype;
l_acc_rec sttm_cust_account%rowtype;
l_acc_stat_row ty_r_acc_stat;
Edited by: user13710379 on Jan 7, 2012 12:18 AM
I see it more like shown below (possibly with no inline selects).
Try to get rid of the remaining inline selects (left as an exercise ;) ) and rewrite the traditional joins as ANSI joins, as problems might arise when mixing the two syntaxes. I have to leave, so I don't have time to complete the query.
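The core move in the rewrite below is turning a chain of OR'ed EXISTS predicates into left outer joins plus an "existence is not null" filter. A minimal sketch of that equivalence on toy tables (all table and column names here are made up for illustration), using Python's sqlite3 in place of an Oracle instance:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE accounts  (ac_no TEXT);
    CREATE TABLE daily_log (ac_no TEXT);  -- activity source 1
    CREATE TABLE blocks    (ac_no TEXT);  -- activity source 2
    INSERT INTO accounts  VALUES ('A'), ('B'), ('C');
    INSERT INTO daily_log VALUES ('A');
    INSERT INTO blocks    VALUES ('B');
""")

# Original shape: a chain of OR'ed EXISTS subqueries.
exists_sql = """
    SELECT a.ac_no FROM accounts a
    WHERE EXISTS (SELECT 1 FROM daily_log d WHERE d.ac_no = a.ac_no)
       OR EXISTS (SELECT 1 FROM blocks    b WHERE b.ac_no = a.ac_no)
    ORDER BY a.ac_no
"""

# Rewritten shape: left joins plus an "any match exists" filter.
# DISTINCT guards against row multiplication when a 1:N match occurs.
join_sql = """
    SELECT DISTINCT a.ac_no
    FROM accounts a
    LEFT JOIN daily_log d ON d.ac_no = a.ac_no
    LEFT JOIN blocks    b ON b.ac_no = a.ac_no
    WHERE COALESCE(d.ac_no, b.ac_no) IS NOT NULL
    ORDER BY a.ac_no
"""

print(cur.execute(exists_sql).fetchall())  # [('A',), ('B',)]
print(cur.execute(join_sql).fetchall())    # [('A',), ('B',)]
```

Both forms return accounts A and B and skip C; the join form gives the optimizer a hash/merge-join option, which is the motivation for the rewrite.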
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
-- null "CR_TURNOVER",
-- null "DR_TURNOVER",
null "DR_OD_DAYS",
w.dd "OD_LIMIT",
-- sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
x.dr_int "DR_INTEREST",
x.cr_int "CR_INTEREST",
y.fee_amt "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)
then 1
else 0
end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1
else 0
end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists(select 1
from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)
) "RENEWED_OR_ROLLOVER",
m.maturity_date "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
-- sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca,
sttm_customer sc,
(select sca.cust_ac_no,
(sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
case when p_applDate >= sca.tod_limit_start_date
and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
then sca.tod_limit else 0
end
) dd
from sttm_cust_account sca,
getm_facility gf,
sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and scal.cust_ac_no = sca.cust_ac_no
group by sca.cust_ac_no
) w,
(select acc,
brn,
sum(decode(drcr,'D',amt)) dr_int,
sum(decode(drcr,'C',amt)) cr_int
from ictb_entries_history ieh
where ent_dt between p_st_dt and p_ed_dt
and drcr in ('C','D')
and liqn = 'Y'
and entry_passed = 'Y'
and exists(select null
from ictm_pr_int ipi,
ictm_rule_frm irf
where ipi.rule = irf.rule_id
and ipi.product_code = ieh.prod
and irf.book_flag = 'B')
group by acc,brn
) x,
(select acc,
brn,
sum(amt) fee_amt
from ictb_entries_history ieh
where ieh.ent_dt between p_st_dt and p_ed_dt
and exists(select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')
group by acc,brn
) y,
ictm_acc m,
(select sca.cust_ac_no,
sca.branch_code,
coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
nvl2(coalesce(t2.account,t2.branch),'exists',null),
nvl2(coalesce(t3.acc,t3.brn),'exists',null),
nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
decode(sca.maker_dt_stamp,p_applDate,'exists'),
decode(sca.status_since,p_applDate,'exists')
) existence
from sttm_cust_account sca
left outer join
(select ac_no,ac_branch
from actb_daily_log
where trn_dt = p_applDate
and auth_stat = 'A'
) t1
on (sca.cust_ac_no = t1.ac_no
and sca.branch_code = t1.ac_branch)
left outer join
(select account,branch
from catm_amount_blocks
where effective_date = p_applDate
and auth_stat = 'A'
) t2
on (sca.cust_ac_no = t2.account
and sca.branch_code = t2.branch)
left outer join
(select acc,brn
from ictm_td_closure_renew itcr
where renewal_date = p_applDate
) t3
on (sca.cust_ac_no = t3.acc
and sca.branch_code = t3.brn)
left outer join
(select cust_ac_no,branch_code
from sttm_ac_stat_change
where status_change_date = p_applDate
and auth_stat = 'A'
) t4
on (sca.cust_ac_no = t4.cust_ac_no
and sca.branch_code = t4.branch_code)
left outer join
(select cust_ac_no,branch_code
from cstb_acc_brn_trfr_log
where process_date = p_applDate
and process_status = 'S'
) t5
on (sca.cust_ac_no = t5.cust_ac_no
and sca.branch_code = t5.branch_code)
left outer join
(select cust_ac_no,branch_code
from sttbs_provision_history
where esn_date = p_applDate
) t6
on (sca.cust_ac_no = t6.cust_ac_no
and sca.branch_code = t6.branch_code)
left outer join
(select cust_ac_no,branch_code
from sttms_cust_account_dormancy
where dormancy_start_dt = p_applDate
) t7
on (sca.cust_ac_no = t7.cust_ac_no
and sca.branch_code = t7.branch_code)
) z
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and sca.cust_ac_no = w.cust_ac_no
and sca.cust_ac_no = x.acc
and sca.branch_code = x.brn
and sca.cust_ac_no = y.acc
and sca.branch_code = y.brn
and sca.cust_ac_no = m.acc
and sca.branch_code = m.brn
and sca.cust_ac_no = z.cust_ac_no
and sca.branch_code = z.branch_code
and z.existence is not null
Regards
Etbin
-
Can anyone help me understand the difference between an application server and a web server?
I tried reading about application servers and web servers, but they appear to me to be the same. Please help me differentiate them. Thanks :-)
An application server hosts business logic components for an application. A web server is an application which accepts HTTP requests.
An application server may come packaged with a web server.
A web server is a very simple process. It's an HTTP daemon that listens for incoming requests over the HTTP protocol on a specified port, usually 80. For simple, static web pages the web server has the built-in logic to serve them, but for a more complex operation (say, reading some records from a database and displaying them) it routes the URL to a component like the servlet engine.
An application server is a much broader term. For example, the servlet may need to invoke certain business logic components like beans (EJBs) or ActiveX DLLs. The server that hosts these components is the application server.
Hope you are clear now.
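The split described above can be sketched with Python's standard library: the handler serves one static page directly (the plain web-server duty) and routes every other path to a business-logic component, which stands in for what an application server would host. The paths and the stub function are made up for the example:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def business_logic():
    # Stand-in for a hosted component (a servlet or bean reading a
    # database and returning records); purely illustrative.
    return "records: 1, 2, 3"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/static":
            body = "<html>static page</html>"  # built-in static serving
        else:
            body = business_logic()            # URL routed to app logic
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the example quiet

# To actually serve requests (port 8080 avoids needing root for port 80):
# HTTPServer(("", 8080), Handler).serve_forever()
```

In a real deployment the "else" branch is where the web server hands off to the servlet engine or EJB container rather than calling a local function.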