SQL Query execution with LISTAGG
Hello,
I am trying to get some data using LISTAGG functionality.
I have the query below, which works fine against the table, but not against the rowsource that is based on the same table.
SELECT COL1, LISTAGG(COL3, ',') WITHIN GROUP (ORDER BY COL2) AS fieldName FROM IOP_Sample_RS GROUP BY COL1
It gives a parsing error:
line 1:22: expecting "FROM", found '('
(col 22 is the '(' of the LISTAGG function.)
One more question:
Do UNION and MINUS work with RSQL in IOP?
Thanks,
Sumant Chhunchha.
Edited by: Sumant on Jun 15, 2011 5:04 AM
LISTAGG, UNION, and MINUS are not supported in RSQL. Instead, you can massage your SQL query to load the data into IOP and then use a simple RSQL query.
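In standard Oracle SQL the rejected query is valid; the workaround is to do the string aggregation outside RSQL. A minimal sketch of the same aggregation, using SQLite's group_concat as a stand-in for LISTAGG (the table and column names are the ones from the question; the sample rows are invented):

```python
import sqlite3

# In-memory table standing in for IOP_Sample_RS (assumed schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IOP_Sample_RS (COL1 TEXT, COL2 INTEGER, COL3 TEXT)")
conn.executemany(
    "INSERT INTO IOP_Sample_RS VALUES (?, ?, ?)",
    [("A", 1, "x"), ("A", 2, "y"), ("B", 1, "z")],
)

# group_concat plays the role of Oracle's
# LISTAGG(COL3, ',') WITHIN GROUP (ORDER BY COL2); the ORDER BY in the
# subquery pre-sorts the rows that feed the aggregate (note: SQLite does
# not formally guarantee aggregate input order, unlike WITHIN GROUP).
rows = conn.execute(
    """
    SELECT COL1, group_concat(COL3, ',') AS fieldName
    FROM (SELECT COL1, COL2, COL3 FROM IOP_Sample_RS ORDER BY COL1, COL2)
    GROUP BY COL1
    """
).fetchall()
print(rows)  # e.g. [('A', 'x,y'), ('B', 'z')]
```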
Similar Messages
-
How to format numbers when using sql:query along with c:forEach JSTL tags
Is there any way to format the numeric values returned from the database when using the <sql:query> and <c:forEach> tags?
Here is my JSP code:
<sql:query..../>
<c:forEach var="row" items="${queryResults.rows}">
<tr>
<td><c:out value="${row.COL1}" /></td>
<td><c:out value="${row.COL2}" /></td>
</tr>
</c:forEach>
COL1 values are numeric without any formatting, e.g. 1000, 10000, 1000000, etc.
How can I format them as 1,000, 10,000, 100,000, etc.?
It is polite to mention what your answer was. These posts are not just here for you to ask questions, but to be used as a resource for other people to find answers. Saying "I solved it" with no details helps no one.
I presume you discovered the JSTL <fmt:formatNumber> tag? -
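For reference, <fmt:formatNumber> delegates to java.text.NumberFormat, which applies locale grouping separators. The effect it has on the numbers from the question can be sketched with Python's format mini-language (Python shown only as a quick illustration of the grouping behaviour, not of the JSTL API):

```python
# Grouping separators, like <fmt:formatNumber value="${row.COL1}"/>
# would produce for the numbers in the question (en-US locale).
values = [1000, 10000, 1000000]
formatted = [f"{v:,}" for v in values]
print(formatted)  # ['1,000', '10,000', '1,000,000']
```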
Excel ADODB SQL query execution taking hours when manipulating Excel tables
Hello All
I have 28,000 records with 8 columns in a sheet. When I open the sheet as an ADODB database and copy it to a new Excel file using the code below, it executes in less than a minute:
Set Tables_conn_obj = New ADODB.Connection
Tables_conn_str = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & Table_Filename & ";Extended Properties=""Excel 12.0;ReadOnly=False;HDR=Yes;IMEX=1"""
Tables_conn_obj.Open Tables_conn_str
First_Temp_sqlqry = "Select * INTO [Excel 12.0;DATABASE=C:\Prod Validation\Database\Second Acat Table.xlsb].[Sheet1] from [first - Table$];"
Tables_conn_obj.Execute First_Temp_sqlqry
But when I change the query to manipulate one column of the current table based on another table in the same workbook, and then try to copy the results to another Excel file, it takes more than an hour. Why does it take this much time when both queries return the same number of rows and columns? I have spent almost a week and still cannot resolve this issue.
I also tried CopyFromRecordset, GetRows(), GetString(), and looping over each recordset field; all of them take the same amount of time. Why is there such a huge difference in execution time?
Important note: without the INTO clause, even the query below executes in a few seconds.
select ( ''''''manipulating first column based on other table data''''''''''''''
iif(
[Second - Table$].[Policy Agent] = (select max([ACAT$].[new_Agent_number]) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] > '2014-10-01') ) , (select max([ACAT$].[Old_Agent_number]) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[new_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] > '2014-10-01')) ,
iif( [Second - Table$].[Policy Agent] = (select max([ACAT$].[Old_Agent_number]) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$]where [ACA T$].[Old_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] <= '2014-10-01') ), (select max([ACAT$].[new_Agent_number]) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent] and [ACAT$].[ACAT_EffectiveDate] = ( select MAX([ACAT$].[ACAT_EffectiveDate] ) from [ACAT$] where [ACAT$].[Old_Agent_number] = [Second - Table$].[Policy Agent]and [ACAT$].[ACAT_EffectiveDate] <= '2014-10-01')) ,
[Second - Table$].[Policy Agent] ))) as [Policy Agent],
''''''summing up all other columns''''''''''''''
(iif(isnull(sum([Second - Table$].[Auto BW-Line Of Business Detail])),0,sum([Second - Table$].[Auto BW-Line Of Business Detail]))) as [Auto BW-Line Of Business Detail],(iif(isnull(sum([Second - Table$].[Auto Farmers])),0,sum([Second - Table$].[Auto Farmers]))) as [Auto Farmers],(iif(isnull(sum([Second - Table$].[MCA])),0,sum([Second - Table$].[MCA]))) as [MCA],(iif(isnull(sum([Second - Table$].[CEA])),0,sum([Second - Table$].[CEA]))) as [CEA],(iif(isnull(sum([Second - Table$].[Commercial P&C])),0,sum([Second - Table$].[Commercial P&C]))) as [Commercial P&C],(iif(isnull(sum([Second - Table$].[Comm WC])),0,sum([Second - Table$].[Comm WC]))) as [Comm WC],(iif(isnull(sum([Second - Table$].[Fire Farmers])),0,sum([Second - Table$].[Fire Farmers]))) as [Fire Farmers],(iif(isnull(sum([Second - Table$].[Flood])),0,sum([Second - Table$].[Flood]))) as [Flood],(iif(isnull(sum([Second - Table$].[Kraft Lake])),0,sum([Second - Table$].[Kraft Lake]))) as [Kraft Lake],(iif(isnull(sum([Second - Table$].[Life])),0,sum([Second - Table$].[Life]))) as [Life],(iif(isnull(sum([Second - Table$].[Foremost])),0,sum([Second - Table$].[Foremost]))) as [Foremost],(iif(isnull(sum([Second - Table$].[Umbrella])),0,sum([Second - Table$].[Umbrella]))) as [Umbrella],(iif(isnull(sum([Second - Table$].[MCNA])),0,sum([Second - Table$].[MCNA]))) as [MCNA]
INTO [Excel 12.0;DATABASE=C:\Prod Validation\Database\Second Acat Table.xlsb].[Sheet1]
from [Second - Table$] group by [Second - Table$].[Policy Agent];
Hi Fei,
Thank you so much for the reply. I just executed the same SQL above without the INTO clause and assigned the result to an ADODB recordset, as below. If the time difference were due to the SQL query itself, then the statement below should also run for hours; instead it executes in seconds. But copying the recordset to Excel again takes hours. I tried CopyFromRecordset, GetRows(), GetString(), and looping over each recordset field, and all of them take the same amount of time. Please let me know why there is such a delay for this small amount of data.
I even tried casting all columns to double or string in the SQL, and the execution time did not improve.
First_Temp_Recordset.Open sql_qry, Tables_conn_obj, adOpenStatic, adLockOptimistic ''' OR: Set First_Temp_Recordset = Tables_conn_obj.Execute(sql_qry) -
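One thing worth noting about the query above: each Jet/ACE expression of the form iif(isnull(sum(col)), 0, sum(col)) names the aggregate twice, and the same NULL-to-zero intent is expressed more directly with COALESCE(SUM(col), 0) (whether ACE SQL accepts COALESCE is an assumption; NZ() is the Access-specific spelling). A minimal sketch of the equivalence, with an invented table and sqlite3 as the stand-in engine:

```python
import sqlite3

# Invented stand-in for [Second - Table$]: agents with a nullable measure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policy (agent TEXT, flood REAL)")
conn.executemany("INSERT INTO policy VALUES (?, ?)",
                 [("A1", 10.0), ("A1", None), ("A2", None)])

# COALESCE(SUM(col), 0) does what iif(isnull(sum(col)), 0, sum(col))
# does, with the aggregate written (and evaluated) once per column.
rows = conn.execute(
    "SELECT agent, COALESCE(SUM(flood), 0) FROM policy"
    " GROUP BY agent ORDER BY agent"
).fetchall()
print(rows)  # [('A1', 10.0), ('A2', 0)]
```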
SQL query execution in DB02 hangs if record set is more than 50000
Hi,
We are facing a report performance issue. The report is using a native SQL query.
There are custom views created at the database level for pricing/material and stock, and the native SQL query is written against these views. The report takes around 15 minutes to run in the background.
We are trying to analyse the native SQL query through DB02. I tried fetching records from one particular custom view to work out whether it is an indexing issue or something else. When I use TOP 35000, the select query runs fine with that dataset or anything smaller. If I increase it to 40000, the system doesn't show anything in the SQL output, and above 100,000 (one lakh) records the system gives a timeout.
A count on this view gives some 1,000,000 (ten lakh) records, which I don't feel is so huge that a query, and a native SQL one at that, should take this much time.
Any help on this will be highly appreciated.
Regards
Madhu
What do you expect from that poor information?
Do you change data, or only select?
If you use SAP and ABAP, then you should also use Open SQL.
Otherwise it is possible to run the SQL Trace with native SQL; it is in any case only native SQL that the trace sees.
Use a package size and it will probably work fine.
Siegfried -
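The "package size" advice maps to fetching the result set in fixed-size chunks instead of materializing it all at once, which is what ABAP's SELECT ... PACKAGE SIZE does. A generic sketch of the idea with Python's DB-API fetchmany (sqlite3 used purely as a stand-in database; table and sizes are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(100000)])

# Process the result set in packages of 10,000 rows: memory stays
# bounded regardless of the total row count, the same idea as
# ABAP's SELECT ... PACKAGE SIZE.
cur = conn.execute("SELECT id FROM big")
total = 0
while True:
    package = cur.fetchmany(10000)
    if not package:
        break
    total += len(package)  # replace with real per-package processing
print(total)  # 100000
```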
SQL query slow with call to function
I have a SQL query that returns in less than a second or two with a function selected in-line in the FROM clause of the statement. As soon as I select that returned value in the main SQL statement, the statement takes anywhere from 2 to 5 minutes to return. Here is a simplified sample from the statement.
This statement returns in a second or two:
select A.pk_id
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
If I add this function call in, the statement performs extremely poorly:
select A.pk_id,
II.ytd_unx
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
Here is the function that is called:
create or replace FUNCTION calc_ytd_class_abs2 (p_fk_stu_schedule in varchar2,
p_legality in varchar2) return number IS
l_days_absent number := 0;
CURSOR get_class_abs IS
select (select nvl(max(D.days_absent), 0)
from cut_code D
where D.pk_id = C.fk_cut_code
and (D.legality = p_legality
or p_legality = '%')) days_absent
from stu_schedule_detail B, stu_class_attendance C
where B.fk_stu_schedule = p_fk_stu_schedule
and C.fk_stu_schedule_detail = B.pk_id;
BEGIN
FOR x in get_class_abs LOOP
l_days_absent := l_days_absent + x.days_absent;
END LOOP;
return (l_days_absent);
END calc_ytd_class_abs2;
The query returns anywhere from 6,000 to 32,000 rows. For each of those rows a parameter is passed to 4 different functions to get YTD totals. When I call the functions in the in-line view but do not select from them in the main SQL, the report (this is Application Express 4.0 interactive reports, just an FYI) runs fast; it comes back in a few seconds. But when I select from the in-line view to display those YTD totals, the report runs extremely slowly. I know there are articles about context switching and how mixing SQL with PL/SQL performs poorly, so I tried a pipelined table function where the functions for the YTD totals populate the columns of the pipelined table and I select from it in the SQL query in the interactive report. That seemed to perform a little worse, from what I can tell.
Thanks for any help you can offer. -
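The context-switch cost described above comes from invoking a PL/SQL function once per row; the usual cure is to express the year-to-date total as a set-level aggregate joined into the main query, so the whole total is computed in one SQL pass. A minimal sketch of that transformation, with simplified stand-ins for the poster's tables and sqlite3 as the engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stu_schedule (pk_id INTEGER PRIMARY KEY);
CREATE TABLE attendance (fk_stu_schedule INTEGER, days_absent REAL);
INSERT INTO stu_schedule VALUES (1), (2);
INSERT INTO attendance VALUES (1, 0.5), (1, 1.0), (2, 2.0);
""")

# Instead of calling calc_ytd_class_abs2(pk_id) once per schedule row
# (one SQL-to-PL/SQL context switch each), aggregate every total in a
# single pass and outer-join it back to the driving table.
rows = conn.execute("""
    SELECT s.pk_id, COALESCE(a.ytd, 0) AS ytd_unx
    FROM stu_schedule s
    LEFT JOIN (SELECT fk_stu_schedule, SUM(days_absent) AS ytd
               FROM attendance GROUP BY fk_stu_schedule) a
      ON a.fk_stu_schedule = s.pk_id
    ORDER BY s.pk_id
""").fetchall()
print(rows)  # [(1, 1.5), (2, 2.0)]
```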
Problem in executeQuery (SQL query string containing a ')
for example:
String strName = "";
// here strName is the search key value.
String strQueryString = "SELECT * FROM Authors WHERE name='" + strName + "'";
ResultSet rs = st.executeQuery(strQueryString);
If strName's value is "yijun_lee", that returns all rows whose name column is "yijun_lee", via the SQL query SELECT * FROM Authors WHERE name='yijun_lee'.
But if strName's value is "yijun ' lee", the value contains a ' and the query fails!
How do I handle this? Thanks very much!
You could parse strName and insert another ' for each '.
A concrete example would be SELECT * FROM Authors WHERE name='yijun '' lee'
HTH -
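Doubling quotes by hand works, but bind parameters sidestep escaping entirely, which is what java.sql.PreparedStatement does for the original poster's case. The same idea sketched in Python's sqlite3 (table and data taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Authors (name TEXT)")
conn.execute("INSERT INTO Authors VALUES (?)", ("yijun ' lee",))

# The ? placeholder is filled in by the driver, so the embedded quote
# in the search key needs no escaping at all (and no SQL injection risk).
rows = conn.execute(
    "SELECT * FROM Authors WHERE name = ?", ("yijun ' lee",)
).fetchall()
print(rows)  # [("yijun ' lee",)]
```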
SQL Query Result with Random Sorting
Hi Experts,
My Oracle Version : Oracle9i
I have three tables which are given below,
Table Name: check_team
team_id team_code
100 A
101 B
102 C
103 D
Table Name: check_product
product_id product_code
1 XXX
2 XYZ
Table Name: check_team_products
tprod_id tprod_team_id tprod_product_id
1 100 1
2 100 2
3 101 1
4 101 2
5 102 1
6 102 2
7 103 1
8 103 2
Required Output First Time:
team_id team_code product_id product_code
100 A 1 XXX
101 B 2 XYZ
102 A 1 XXX
103 B 2 XYZ
Required Output Second Time:
team_id team_code product_id product_code
100 B 2 XYZ
101 A 1 XXX
102 B 2 XYZ
103 A 1 XXX
I need the result as the required output specified above, and the result has to be random too. Can someone help me write a SQL query that returns results like that?
Added Oracle version.
So, is it something like this you want?
SQL> ed
Wrote file afiedt.buf
1 with check_team as (select 100 as team_id, 'A' as team_code from dual union all
2 select 101, 'B' from dual union all
3 select 102, 'C' from dual union all
4 select 103, 'D' from dual)
5 ,check_product as (select 1 as product_id, 'XXX' as product_code from dual union all
6 select 2, 'XYZ' from dual)
7 ,check_team_products as (select 1 as tprod_id, 100 as tprod_team_id, 1 as tprod_product_id from dual union all
8 select 2, 100, 2 from dual union all
9 select 3, 101, 1 from dual union all
10 select 4, 101, 2 from dual union all
11 select 5, 102, 1 from dual union all
12 select 6, 102, 2 from dual union all
13 select 7, 103, 1 from dual union all
14 select 8, 103, 2 from dual)
15 --
16 -- end of test data
17 --
18 select team_id, team_code, product_id, product_code
19 from (
20 select t.team_id, t.team_code, p.product_id, p.product_code
21 ,row_number() over (partition by team_id order by dbms_random.random()) as rn
22 from check_team t join check_team_products tp on (tp.tprod_team_id = t.team_id)
23 join check_product p on (p.product_id = tp.tprod_product_id)
24 )
25* where rn = 1
SQL> /
TEAM_ID T PRODUCT_ID PRO
100 A 2 XYZ
101 B 1 XXX
102 C 2 XYZ
103 D 1 XXX
SQL> /
TEAM_ID T PRODUCT_ID PRO
100 A 2 XYZ
101 B 1 XXX
102 C 2 XYZ
103 D 1 XXX
SQL> /
TEAM_ID T PRODUCT_ID PRO
100 A 1 XXX
101 B 2 XYZ
102 C 1 XXX
103 D 1 XXX -
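The row_number()/dbms_random technique above carries over to any engine with window functions. A sketch of the same "one random row per group" pattern with sqlite3 (requires SQLite 3.25+ for window functions; random() substitutes for dbms_random.random, and only the link table is reproduced):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE check_team_products (team_id INTEGER, product_id INTEGER);
INSERT INTO check_team_products VALUES
  (100, 1), (100, 2), (101, 1), (101, 2);
""")

# Number each team's rows in random order and keep rank 1, exactly as
# in the Oracle answer above.
rows = conn.execute("""
    SELECT team_id, product_id FROM (
        SELECT team_id, product_id,
               row_number() OVER (PARTITION BY team_id
                                  ORDER BY random()) AS rn
        FROM check_team_products
    ) WHERE rn = 1 ORDER BY team_id
""").fetchall()
print(rows)  # one row per team; the product varies from run to run
```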
SQL query result with HTML Data in output
Hello,
I have a SQL table; in one column I store HTML data. I need to query the table and get the HTML data from the columns that contain 'HREF'. The output shows as a grid in SQL Server Management Studio; however, when I export it to Excel, the HTML data does not get copied correctly, since there are HTML tags etc.
How can I export the report correctly from SQL?
Hello,
The HTML data is stored in a column with datatype nvarchar(max). Sample data from the column is shown below; it is stored with formatting and rendered as-is on the web page. The business wants to generate a quick report so that they can see the pages that have links displayed. I can do that by querying the columns that have 'HREF' in the text.
Can I get the exact HREF values using just a SQL query? There can be more than one link on a page.
Also, if I just want to copy the whole column and paste it into Excel, how can I do that? If I copy the data below and paste it, it does not get copied into one cell; it spreads across multiple cells, so the report does not make any sense.
<br />
<table border="0" cellpadding="0" cellspacing="0" style="width: 431pt; border-collapse: collapse;" width="574">
<tbody>
<tr height="19" style="height: 14.25pt; ">
<td height="19" style="border: 0px blue; width: 431pt; height: 14.25pt; background-color: transparent;" width="574"><a href="https:"><u><font color="#0066cc" face="Calibri">ax </font></u></a></td>
</tr>
</tbody>
<colgroup>
<col style="width: 431pt; " width="574" />
</colgroup>
</table> -
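Pulling the exact HREF values out of the stored markup is much easier outside SQL, in whatever language drives the report. A sketch with Python's standard html.parser, applied to a fragment shaped like the sample above (the URL is invented, since the sample's href is truncated):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect the value of every href attribute on <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.hrefs.append(value)

# Stand-in for one nvarchar(max) column value from the table
# (hypothetical URL -- the sample data's href is truncated).
column_value = '<td><a href="https://example.com/ax"><u>ax</u></a></td>'
parser = HrefCollector()
parser.feed(column_value)
print(parser.hrefs)  # ['https://example.com/ax']
```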
Exact time of SQL query execution.
Hi,
Is there any way to get the exact time of execution of particular SQL query.
Oracle Version : 10.2.0.4.0
OS : Sun OS.
Thx,
Gowin.
In general it's pretty hard.
Look at V$SQLSTAT.ELAPSED_TIME and DBA_HIST_SQLSTAT.ELAPSED_TIME_TOTAL/DELTA (the DBA_HIST views require a Diagnostics Pack license).
It will give you accurate results for a non-parallel query. For parallel queries you'll get a total time spent by all slaves.
Also you can enable tracing either on database or session level and analyze the trace files generated.
Edited by: Max Seleznev on Nov 4, 2011 12:06 PM -
Clustering of SQL query execution times
In doing some query execution experiments I have noted a curious (to me, anyhow) clustering of execution times around two distinct points. Across about 100 tests each running 1000 queries using (pseudo-)randomly generated IDs the following pattern emerges. The queries were run from Java using all combinations of pooled/non-pooled and thin/oci driver combinations:
100 *
90 *
R 80 *
u 70 *
n 60 *
s 50 *
40 * *
30 * *
20 * * * *
10 * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time (ms)
About half of the total execution times cluster strongly around a given (short) time value, with a smaller but broader cluster at a significantly slower mark and zero intermediate values. The last point is the one I find most curious.
What I would have expected is something like this:
100
90
R 80
u 70
n 60
s 50
40 *
30 * * *
20 * * * * * *
10 * * * * * * * * * *
0 100 200 300 400 500 600 700 800 900 1000 1100 1200
Time (ms)
The variables I have tentatively discounted thus far:
-query differences (single query used)
-connection differences (using single pooled connection)
-garbage collection (collection spikes independent of query execution times)
-amount of data returned in bytes (single varchar2 returned and size is independent of execution time)
-driver differences (thin and oci compared, overall times differ but pattern of clustering remains)
-differences between Statement and PreparedStatement usage (both show same pattern)
I know this is a rather open-ended question, but does the described pattern seem familiar or spark any thoughts?
DB-side file I/O?
Thread time-slicing variations (client or DB-side)?
FWIW, the DB is 9.2.0.3 DB and the clients are running on WinXP with Java 5.0 and 9i drivers.
Thanks and regards,
M
Further context:
Are your queries only SELECT queries?
Yes, the same SELECT query is used for all tests. The only variable is the bind variable used to identify the primary key of the selection set (i.e. SELECT a.* from a, b, c where a.x = b.x and b.y = c.y and c.pk = ?), where all PKs and FKs are indexed.
Do the queries always use the same tables, the same where clauses?
Yes, the same tables are always invoked. The where clauses are identical with the exception of the single bind variable, as described above.
Do your queries always use bind variables?
A single bind variable is used in all invocations, as described above.
Are your queries running in single-user mode or multi-user mode (do you use SELECT FOR UPDATE)?
We are not using SELECT FOR UPDATE.
Did something else run on the database, or on the server hosting the database, at the same time?
I have not eliminated the idea, but the test has been repeated roughly 100 times over the course of a week and at different times of day with the same pattern emerging. I suppose it is not out of the question that a resource-hogging process is running consistently and constantly on the DB-side box.
Thanks for the input,
M
Hi,
We are facing a database performance issue while running overnight batches.
I generated tkprof output for that batch and found some SQL queries with high elapsed times. Could anyone please let me know what the issue is? It would also be a great help if anyone could suggest what needs to be done to tune these SQL queries so as to get better response times.
Waiting for your reply.
Effected SQL List:
INSERT INTO INVTRNEE (TRANS_SESSION, TRANS_SEQUENCE, TRANS_ORG_CHILD,
TRANS_PRD_CHILD, TRANS_TRN_CODE, TRANS_TYPE_CODE, TRANS_DATE, INV_MRPT_CODE,
INV_DRPT_CODE, TRANS_CURR_CODE, PROC_SOURCE, TRANS_REF, TRANS_REF2,
TRANS_QTY, TRANS_RETL, TRANS_COST, TRANS_VAT, TRANS_POS_EXT_TOTAL,
INNER_PK_TECH_KEY, TRANS_INNERS, TRANS_EACHES, TRANS_UOM, TRANS_WEIGHT,
TRANS_WEIGHT_UOM )
VALUES
(:B22 , :B1 , :B2 , :B3 , :B4 , :B5 , :B21 , :B6 , :B7 , :B8 , :B20 , :B19 ,
NULL, :B9 , :B10 , :B11 , 0.0, :B12 , :B13 , :B14 , :B15 , :B16 , :B17 ,
:B18 )
call count cpu elapsed disk query current rows
Parse 722 0.09 0.04 0 0 0 0
Execute 1060 7.96 83.01 11442 21598 88401 149973
Fetch 0 0.00 0.00 0 0 0 0
total 1782 8.05 83.06 11442 21598 88401 149973
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
UPDATE /*+ ROWID(TRFDTLEE) */TRFDTLEE SET TRF_STATUS = :B2
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
Parse 635 0.03 0.01 0 0 0 0
Execute 49902 14.48 271.25 41803 80704 355837 49902
Fetch 0 0.00 0.00 0 0 0 0
total 50537 14.51 271.27 41803 80704 355837 49902
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
DECLARE
var_trans_session invtrnee.trans_session%TYPE;
BEGIN
-- ADDED BY SHANKAR ON 08/29/97
-- GET THE NEXT AVAILABLE TRANS_SESSION
bastkey('trans_session',0,var_trans_session,'T');
-- MAS001
uk_trfbapuo_auto(var_trans_session,'UPLOAD','T',300);
-- MAS001 end
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 24191.23 24028.57 8172196 10533885 187888 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 24191.23 24028.57 8172196 10533885 187888 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
SELECT INNER_PK_TECH_KEY
FROM
PRDPCDEE WHERE PRD_LVL_CHILD = :B1 AND LOOSE_PACK_FLAG = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 56081 1.90 2.03 0 0 0 0
Fetch 56081 11.07 458.58 53792 246017 0 56081
total 112163 12.98 460.61 53792 246017 0 56081
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
******************
First off, be aware of the assumptions I'm making. The SQL you presented above strongly suggests (to me at least) that you have cursor FOR loops. If that's the case, you need to review what their purpose is and look to convert them into single-statement DML commands. For example, if you have something like this:
DECLARE
  ln_Count     NUMBER;
  ln_SomeValue NUMBER;
BEGIN
  FOR lcr_Row IN (SELECT pk_id, col1, col2 FROM some_table)
  LOOP
    SELECT COUNT(*)
      INTO ln_Count
      FROM target_table
     WHERE pk_id = lcr_Row.pk_id;
    IF ln_Count = 0 THEN
      SELECT some_value
        INTO ln_SomeValue
        FROM some_other_table
       WHERE pk_id = lcr_Row.col1;
      INSERT INTO target_table
        (pk_id, some_other_value, col2)
      VALUES
        (lcr_Row.col1, ln_SomeValue, lcr_Row.col2);
    ELSE
      UPDATE target_table
         SET some_other_value = ln_SomeValue
       WHERE pk_id = lcr_Row.col1;
    END IF;
  END LOOP;
END;
it could be rewritten as:
BEGIN
  MERGE INTO target_table b
  USING (SELECT a.pk_id,
                a.col2,
                c.some_value
           FROM some_table a,
                some_other_table c
          WHERE c.pk_id = a.col1) e
     ON (b.pk_id = e.pk_id)
   WHEN MATCHED THEN
     UPDATE SET b.some_other_value = e.some_value
   WHEN NOT MATCHED THEN
     INSERT (b.pk_id, b.col2, b.some_other_value)
     VALUES (e.pk_id, e.col2, e.some_value);
END;
It's going to take a bit of analysis and work, but the fastest and most scalable way to process data is to use SQL rather than PL/SQL. PL/SQL data processing, i.e. cursor loops, should be an option of last resort.
HTH
David -
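The MERGE pattern David shows has equivalents in most engines; in SQLite, for instance, the same row-at-a-time IF/INSERT/UPDATE collapses into INSERT ... ON CONFLICT DO UPDATE (an "upsert", SQLite 3.24+). A sketch with the hypothetical target_table from the answer above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE target_table (pk_id INTEGER PRIMARY KEY, some_other_value TEXT);
INSERT INTO target_table VALUES (1, 'old');
""")

# One statement replaces the cursor loop: insert new keys, update
# existing ones -- the SQLite spelling of MERGE's MATCHED/NOT MATCHED.
conn.executemany("""
    INSERT INTO target_table (pk_id, some_other_value)
    VALUES (?, ?)
    ON CONFLICT(pk_id) DO UPDATE
        SET some_other_value = excluded.some_other_value
""", [(1, 'updated'), (2, 'new')])

rows = conn.execute("SELECT * FROM target_table ORDER BY pk_id").fetchall()
print(rows)  # [(1, 'updated'), (2, 'new')]
```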
ORA-06502 on 'SQL query' report with sort in 'Report Attributes'
Hi All,
We get the following error if we set sorting on a column in a 'Report' based on a 'SQL query'. If we remove the sort, the error disappears:
failed to parse SQL query:
ORA-06502: PL/SQL: numeric or value error: NULL index table key value
Any suggestions?
Erik
Erik,
Thanks, this explains it. By specifying the request as part of your URL you run into the recently uncovered issue. The request you're setting is REMFROMLIST and ADD2LIST. You probably either have links that include those requests or you have branches where you specify them. Either way, in order to get reports sorting to work, you'll have to make sure that the request strings are not part of your URL. This is a work-around and the upcoming HTML DB patch release will solve this issue.
One way of avoiding this is to have computations on the previous pages that set a napplication level or page level item to the REMFROMLIST and ADD2LIST values and then you can use those items for your conditions that are currently evaluating those strings.
Hope this helps and sorry for the inconvenience,
Marc -
Can you help me print multiple rows resulting from a SQL query using JSP?
Map the ResultSet to a collection of DTOs, use JSTL's <c:forEach> tag to iterate through it inside a JSP page, and use the HTML table, tr, and td elements to present the data in a table.
-
SQL query problem with sorting
Hi,
I have a question regarding a SQL query. Right now I am getting results like this if I use this query:
select ID, Name, Desc, Priority from emp order by Priority;
Priority is a varchar field. I don't want to change the Priority field and cannot add a new column to the table, because I don't have permission to do that.
ID Name Desc Priority
=============================================
234 paul paul desc Highest
3452 mike mike desc High
4342 smith smith desc Low
6565 kelly kelly desc Low
9878 nate nate desc Medium
3223 deb deb desc High
============================================
I need a query to get the results like that.
ID Name Desc Priority
=============================================
234 paul paul desc Highest
3452 mike mike desc High
3223 deb deb desc High
9878 nate nate desc Medium
4342 smith smith desc Low
6565 kelly kelly desc Low
============================================
If any one knows about this one, please let me know.
Thanks,
Bala
You are aware that there are differences in the SQL implementation between SQL Server and Oracle? You could try something like this, if there's an INSTR function:
ORDER BY INSTR('Highest,High,Medium,Low,', Priority || ',')
You may have to change the "Priority || ','" to "Priority + ','" if string concatenation is done differently in SQL Server. Don't know about the ('), maybe you need (").
C.
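The INSTR trick works wherever the function exists (Oracle and SQLite have INSTR; SQL Server spells it CHARINDEX with the arguments swapped). A sketch against the question's sample data, with sqlite3 as the engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ID INTEGER, Name TEXT, Priority TEXT);
INSERT INTO emp VALUES
  (234, 'paul', 'Highest'), (3452, 'mike', 'High'),
  (4342, 'smith', 'Low'),   (9878, 'nate', 'Medium');
""")

# The position of each priority inside the master string is the sort
# key; the trailing comma keeps 'High' from matching inside 'Highest'.
rows = conn.execute("""
    SELECT Name, Priority FROM emp
    ORDER BY instr('Highest,High,Medium,Low,', Priority || ',')
""").fetchall()
print(rows)
# [('paul', 'Highest'), ('mike', 'High'), ('nate', 'Medium'), ('smith', 'Low')]
```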