Trying to optimize this simple query
Hi,
I am trying to optimize this simple query, but the two methods I have tried actually make things worse.
The original query is:
SELECT customer_number, customer_name
FROM bsc_pdt_account_mv
where rownum <= 100
AND Upper(customer_name) like '%SP%'
AND customer_id IN (
SELECT cust_id FROM bsc_pdt_assoc_sales_force_mv
WHERE area_identifier IN (
SELECT area_identifier FROM bsc_pdt_assoc_sales_force_mv
WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'))
This query returns the first 100 rows in 88 seconds, and the rows come back distinct (I don't know why they are distinct).
My first attempt was to try to use table joins instead of the IN conditions:
SELECT
distinct -- A: I need to use distinct now
customer_number, customer_name
FROM bsc_pdt_account_mv pdt,
bsc_pdt_assoc_sales_force_mv asf,
(SELECT distinct area_identifier FROM bsc_pdt_assoc_sales_force_mv
WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'
) area
where
area.area_identifier = asf.area_identifier
AND asf.cust_id = pdt.customer_id
AND Upper(customer_name) like '%SP%'
AND rownum <= 100 -- B: strange when I comment this out
order by 1
I don't understand two things about this query. First issue: I now need the DISTINCT because the result set is no longer distinct by default. Second issue (very strange): with the rownum condition (<= 100) I get two rows in 1.5 seconds. If I remove the condition, I get 354 rows (the whole result set) in 326 seconds.
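A note on both points: ROWNUM is assigned as the join produces rows, before DISTINCT and ORDER BY are applied, so rownum <= 100 keeps the first 100 raw join rows (which collapse to 2 after DISTINCT) and lets Oracle stop early; without it, the whole join must be computed. The usual top-N pattern pushes the ordering into an inline view and filters outside it. A sketch using the same tables (the area filter is abbreviated, not omitted intentionally):

```sql
SELECT *
FROM (
  SELECT DISTINCT customer_number, customer_name
  FROM bsc_pdt_account_mv pdt,
       bsc_pdt_assoc_sales_force_mv asf
  WHERE asf.cust_id = pdt.customer_id
    AND UPPER(customer_name) LIKE '%SP%'
    -- ... area filter as in the query above ...
  ORDER BY 1
)
WHERE ROWNUM <= 100;
```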
My second attempt was to use EXISTS instead of IN:
SELECT
customer_number, customer_name
FROM bsc_pdt_account_mv pdt
where Upper(customer_name) like '%SP%'
AND rownum <= 100
AND EXISTS
(select 1 from
bsc_pdt_assoc_sales_force_mv asf,
(SELECT distinct area_identifier FROM bsc_pdt_assoc_sales_force_mv
WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'
) area
where
area.area_identifier = asf.area_identifier
AND asf.cust_id = pdt.customer_id )
This query returns a similar distinct result set as the original one but takes pretty much the same time (87 seconds).
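For what it's worth, the join to the area inline view can be folded into the EXISTS itself, leaving only one level of subquery. A sketch built from the same tables and columns (whether it helps depends on the available indexes):

```sql
SELECT customer_number, customer_name
FROM bsc_pdt_account_mv pdt
WHERE UPPER(customer_name) LIKE '%SP%'
  AND ROWNUM <= 100
  AND EXISTS (
    SELECT 1
    FROM bsc_pdt_assoc_sales_force_mv asf
    WHERE asf.cust_id = pdt.customer_id
      AND asf.area_identifier IN (
        SELECT area_identifier
        FROM bsc_pdt_assoc_sales_force_mv
        WHERE ad_identifier = '90004918'
           OR rm_identifier = '90004918'
           OR tm_identifier = '90004918'));
```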
The query below hangs when run in TOAD or PL/SQL Developer. I noticed that no rows are returned from the inner table for this condition.
SELECT customer_number, customer_name
FROM
bsc_pdt_account_mv pdt_account
where rownum <= 100
AND exists (
SELECT pdt_sales_force.cust_id
FROM bsc_pdt_assoc_sales_force_mv pdt_sales_force
WHERE pdt_account.customer_id = pdt_sales_force.cust_id
AND (pdt_sales_force.rm_identifier = '90007761' or pdt_sales_force.tm_identifier = '90007761') )
ORDER BY customer_name
-- No rows returned by this query
SELECT pdt_sales_force.cust_id
FROM bsc_pdt_assoc_sales_force_mv pdt_sales_force
WHERE pdt_sales_force.rm_identifier = '90007761' or pdt_sales_force.tm_identifier = '90007761'
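If the inner query returns no rows, the EXISTS can never be satisfied, so the outer query still has to probe the subquery for every candidate row before concluding there is nothing to return; without supporting indexes each probe is a full scan. A sketch of indexes that would make the probes cheap (the index names are made up here, and some may already exist):

```sql
CREATE INDEX asf_cust_id_ix ON bsc_pdt_assoc_sales_force_mv (cust_id);
CREATE INDEX asf_rm_id_ix   ON bsc_pdt_assoc_sales_force_mv (rm_identifier);
CREATE INDEX asf_tm_id_ix   ON bsc_pdt_assoc_sales_force_mv (tm_identifier);
```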
Similar Messages
-
This simple query takes 2 hrs. How to improve it??
This is a simple query, yet it takes 2 hours to run. The tables have over 100,000 rows.
SELECT
TO_CHAR(a.ARR_FLIGHT_DATE,'DD/MM/YYYY') ARR_FLIGHT_DATE
FROM
BC_T_ARRIVALS a, BC_M_FLIGHTS f
WHERE
a.ARR_FLT_SEQ_NO = f.FLT_SEQ_NO AND
f.FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT') AND TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') >= TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD')
AND TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') <= TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
UNION
SELECT
TO_CHAR(d.DEP_FLIGHT_DATE,'DD/MM/YYYY') DEP_FLIGHT_DATE
FROM
BC_T_DEPARTURES d, BC_M_FLIGHTS f
WHERE
d.DEP_FLT_SEQ_NO = f.FLT_SEQ_NO AND
f.FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT') AND TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') >= TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD')
AND TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') <= TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
As I see it, this query will not let the DB engine use any indexes, since expressions are applied to the columns in the WHERE clause. Am I correct?
How can we improve the performance of this query?
Maybe (do you really need to convert the dates to chars? That might prevent index use ...)
select f.FLT_SEQ_NO,
TO_CHAR(d.DEP_FLIGHT_DATE,'DD/MM/YYYY') DEP_FLIGHT_DATE,
TO_CHAR(a.ARR_FLIGHT_DATE,'DD/MM/YYYY') ARR_FLIGHT_DATE
from (select FLT_SEQ_NO,
FLT_LOC_CODE
from BC_M_FLIGHTS
where FLT_LOC_CODE = PK_BC_R_LOCATIONS.FN_SEL_LOC_CODE('BANDARANAYAKE INTERNATIONAL AIRPORT')
) f,
BC_T_ARRIVALS a,
BC_T_DEPARTURES d
where f.FLT_SEQ_NO = a.ARR_FLT_SEQ_NO
and f.FLT_SEQ_NO = d.DEP_FLT_SEQ_NO
and (TO_CHAR(a.ARR_FLIGHT_DATE,'YYYY/MM/DD') between TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD') and TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
or TO_CHAR(d.DEP_FLIGHT_DATE,'YYYY/MM/DD') between TO_CHAR(:P_FROM_DATE,'YYYY/MM/DD') and TO_CHAR(:P_TO_DATE,'YYYY/MM/DD')
)
Regards
Etbin
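A further thought: if ARR_FLIGHT_DATE and DEP_FLIGHT_DATE are DATE columns, the range test needs no TO_CHAR on the column side at all, which keeps any index on those columns usable. A sketch (TRUNC on the binds is only needed if they can carry a time component):

```sql
WHERE a.ARR_FLIGHT_DATE >= TRUNC(:P_FROM_DATE)
  AND a.ARR_FLIGHT_DATE <  TRUNC(:P_TO_DATE) + 1
```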
Edited by: Etbin on 2.3.2012 18:44
select column list altered -
How can I optimize this SQL query
I need your help: I want to know how I can optimize this query.
SELECT
"F42119"."SDLITM" as "Code1",
"F42119"."SDAITM" as "Code2",
"F42119"."SDDSC1" as "Product",
"F42119"."SDMCU" as "Bodega",
Sum("F42119"."SDSOQS" / 10000) as "Number",
Sum("F42119"."SDUPRC" / 10000) as "preciou",
Sum("F42119"."SDAEXP" / 100) as "Value",
Sum("F42119"."SDUNCS" / 10000) as "CostoU",
Sum("F42119"."SDECST" / 100) as "Cost",
"F4101"."IMSRP1" as "Division",
"F4101"."IMSRP2" as "classification",
"F4101"."IMSRP8" as "Brand",
"F4101"."IMSRP9" as "Aroma",
"F4101"."IMSRP0" as "Presentation",
"F42119"."SDDOC" as "Type",
"F42119"."SDDCT" as "Document",
"F42119"."SDUOM" as "Unit",
"F42119"."SDCRCD" as "currency",
"F0101"."ABAN8" as "ABAN8",
"F0101"."ABALPH" as "Customer",
"F0006"."MCRP22" as "Establishment"
from "PRODDTA"."F0101" "F0101",
"PRODDTA"."F42119" "F42119",
"PRODDTA"."F4101" "F4101",
"PRODDTA"."F0006" "F0006"
where "F42119"."SDAN8" = "F0101"."ABAN8"
and "F0006"."MCMCU" = "F42119"."SDMCU"
and "F4101"."IMITM" = "F42119"."SDITM"
and "F42119"."SDDCT" in ('RI', 'RM', 'RN')
and CAST(EXTRACT(MONTH FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in (:Month)
and CAST(EXTRACT(YEAR FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in (:Year)
and trim("F0006"."MCRP22") = :Establishment
and trim("F4101"."IMSRP1") = :Division
Group By "F42119"."SDLITM",
"F42119"."SDAITM",
"F42119"."SDDSC1",
"F4101"."IMSRP1",
"F42119"."SDDOC",
"F42119"."SDDCT",
"F42119"."SDUOM",
"F42119"."SDCRCD",
"F0101"."ABAN8",
"F0101"."ABALPH",
"F4101"."IMSRP2",
"F4101"."IMSRP8",
"F4101"."IMSRP9",
"F4101"."IMSRP0",
"F42119"."SDMCU",
"F0006"."MCRP22"
I appreciate the help you can give me.
It seems to me that part of fixing it could be how you join the tables.
Instead of the humongous where clause, put the applicable conditions on the join.
You have
from "PRODDTA"."F0101" "F0101",
"PRODDTA"."F42119" "F42119",
"PRODDTA"."F4101" "F4101",
"PRODDTA"."F0006" "F0006"
where "F42119"."SDAN8" = "F0101"."ABAN8"
and "F0006"."MCMCU" = "F42119"."SDMCU"
and "F4101"."IMITM" = "F42119"."SDITM"
and "F42119"."SDDCT" in ('RI', 'RM', 'RN')
and CAST(EXTRACT(MONTH FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in (:Month)
and CAST(EXTRACT(YEAR FROM TO_DATE(substr((to_date('01-01-' || to_char(round(1900 + (CAST("F42119"."SDDGL" as int) / 1000))), 'DD-MM-YYYY') + substr(to_char(CAST("F42119"."SDDGL" as int)), 4, 3) - 1), 1, 10))) AS INT) in (:Year)
and trim("F0006"."MCRP22") = :Establishment
and trim("F4101"."IMSRP1") = :Division
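A side note before rewriting the joins: the two predicates that extract month and year from SDDGL must evaluate that whole expression for every row, so no index on SDDGL can ever be used. If SDDGL is the usual JDE Julian date (CYYDDD), the same filter can be expressed as a plain range on the raw column; :julian_from and :julian_to are hypothetical binds holding precomputed CYYDDD bounds:

```sql
-- e.g. for March 2012: :julian_from = 112061, :julian_to = 112091
and "F42119"."SDDGL" BETWEEN :julian_from AND :julian_to
```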
INSTEAD try something like
from "PRODDTA"."F42119" "F42119"
JOIN "PRODDTA"."F0101" "F0101" ON "F42119"."SDAN8" = "F0101"."ABAN8"
JOIN "PRODDTA"."F4101" "F4101" ON "F4101"."IMITM" = "F42119"."SDITM"
JOIN "PRODDTA"."F0006" "F0006" ON "F0006"."MCMCU" = "F42119"."SDMCU"
Not sure exactly how you need things joined, but above is the basic idea. Remove criteria for joining the tables from the WHERE clause and put them
in the join statements. That might clean things up and make it more efficient. -
Can't get this simple query!
Hi Guys,
There is this simple requirement of writing a query which will select most of the columns from a table but grouped on 3 columns from same table.
Table Str:
co11 col2 col3 col4 col5 col6 col7 col8 col9 col10
Required :
Group By: Col9, col10
Columns to be selected : co11 col2 col3 col4 col5 col6 col7 col8
I know there is something simple that I am missing.
any help will be appreciated.
Thanks!
Hi,
This produces the output you requested for the data you posted:
SELECT MIN (col1)
, MIN (col2)
, MIN (col3)
, MIN (col4)
, MIN (col5)
, MIN (col6)
, MIN (col7)
, MIN (col8)
, col9
, col10
FROM str
GROUP BY col9
, col10
;
So does this:
WITH got_rnum AS
(
SELECT str.*
, ROW_NUMBER () OVER ( PARTITION BY col9
, col10
ORDER BY col1
, col2
, col3
, col4
, col5
, col6
, col7
, col8
) AS rnum
FROM str
)
SELECT col1
, col2
, col3
, col4
, col5
, col6
, col7
, col8
, col9
, col10
FROM got_rnum
WHERE rnum = 1
;
With the sample data you posted, the two queries produce the same results.
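Concretely, suppose one group contains the two rows (col1, col2) = (1, 9) and (2, 8). The MIN version takes each column's minimum independently and can return a combination that exists in no actual row, while the ROW_NUMBER version keeps one whole row intact:

```sql
SELECT MIN(col1), MIN(col2) FROM str GROUP BY col9, col10;
-- returns 1, 8  (not an actual row)

SELECT col1, col2
FROM (SELECT col1, col2,
             ROW_NUMBER() OVER (PARTITION BY col9, col10
                                ORDER BY col1, col2) AS rnum
      FROM str)
WHERE rnum = 1;
-- returns 1, 9  (the whole first row)
```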
With some other data, the two queries will produce different results. -
What is wrong with this simple query
Hi,
I am writing a simple piece of code just to get the maximum value from a database table.
The query is
ResultSet rs = stm.executeQuery("SELECT MAX(column_name) FROM Database_table");
It seems to be a simple one, but I am getting the message:
column not found
Please answer soon.
Well, it depends on how your ResultSet is retrieving the results. If you retrieve by column name, then that's your problem. You need to do something like this:
ResultSet rs = stm.executeQuery("SELECT MAX(column_name) AS myColumnName FROM Database_table");
String myResult = rs.getString("myColumnName");
Using MAX, COUNT, etc. will return your result with a mangled or no actual column name to retrieve from. Optionally, you can solve your problem by retrieving the column by position:
rs.getString(1);
Michael Bishop -
Why is this simple query failing?
Select T0.[docentry], charindex('-', T0.[U_I_LongDesc])
from [dbo].[RDR1] T0
Gives "Must specify table to select from".
This works fine (without charindex):
Select T0.[docentry], T0.[U_I_LongDesc]
from [dbo].[RDR1] T0
Also the original query works fine in MS SQL Server Management Studio.
What behind-the-scenes garbage is SAP doing now (like adding "FOR BROWSE" to every select)?
Thanks Gordon. You're right, it does work on system fields. After some further digging, it appears the problem must be that all alphanumeric UDFs are created as nvarchar(max) in SQL Server, regardless of the length you specify.
This seems to be a bug in SAP. The CHARINDEX query above fails on all UDFs.
I defined U_I_LongDesc as Alphanumeric (100) in SAP. Here's what I see defined in SQL Server Management Studio:
Dscription (nvarchar(100), null) /* SAP field with correct length */
U_I_LongDesc (nvarchar(max), null) /* UDF. Gets set to max for all Alphanumeric fields */ -
What is wrong with this simple query?
I am using 10g XE.
Below is the query, which is not working.
Whenever I execute it, a pop-up window comes up
asking me to enter bind variables. What shall I do?
Here is a screenshot of the issue:
http://potupaul.webs.com/at.html
VARIABLE g_message VARCHAR2(30)
BEGIN
:g_message := 'My PL/SQL Block Works';
END;
PRINT g_message
Edited by: user4501184 on May 18, 2010 12:42 AM
sqlplus "system/sm@test"
SQL*Plus: Release 10.2.0.2.0 - Production on Tue May 18 12:45:05 2010
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> VARIABLE g_message VARCHAR2(30)
SQL> BEGIN
2 :g_message := 'My PL/SQL Block Works';
3 END;
4 /
PL/SQL procedure successfully completed.
SQL> PRINT g_message;
G_MESSAGE
My PL/SQL Block Works
SQL> -
Hi,
I'm using Oracle 10g r2.
I have this simple query that seems to take too much time to execute :
DECLARE
nb_mesures INTEGER;
min_day DATE;
max_day DATE;
BEGIN
SELECT
COUNT(meas_id),
MIN(meas_day),
MAX(meas_day)
INTO
nb_mesures,
min_day,
max_day
FROM
geodetic_measurements gm
INNER JOIN
operation_measurements om
ON gm.meas_id = om.ogm_meas_id
WHERE ogm_op_id = 0;
htp.p(nb_mesures||' measurements from '||min_day||' to '||max_day);
END;
- Tables (about 11,000 records for the "Operations" table, and 800,000 for the 2 others):
"Operation_measurements" is the table that links the other two (it holds the two keys).
SQL> DESCRIBE OPERATIONS
Nom NULL Type
OP_ID NOT NULL NUMBER(7)
OP_PARENT_OP_ID NUMBER(7)
OP_RESPONSIBLE NOT NULL VARCHAR2(10)
OP_DESCRIPT VARCHAR2(80)
OP_VEDA_NAME NOT NULL VARCHAR2(10)
OP_BEGIN NOT NULL DATE
OP_END DATE
OP_INSERT_DATE DATE
OP_LAST_UPDATE DATE
OP_INSERT_BY VARCHAR2(50)
OP_UPDATE_BY VARCHAR2(50)
SQL> DESCRIBE OPERATION_MEASUREMENTS
Nom NULL Type
OGM_MEAS_ID NOT NULL NUMBER(7)
OGM_OP_ID NOT NULL NUMBER(6)
OGM_INSERT_DATE DATE
OGM_LAST_UPDATE DATE
OGM_INSERT_BY VARCHAR2(50)
OGM_UPDATE_BY VARCHAR2(50)
SQL> DESCRIBE GEODETIC_MEASUREMENTS
Nom NULL Type
MEAS_ID NOT NULL NUMBER(7)
MEAS_TYPE NOT NULL VARCHAR2(2)
MEAS_TEAM NOT NULL VARCHAR2(10)
MEAS_DAY NOT NULL DATE
MEAS_OBJ_ID NOT NULL NUMBER(6)
MEAS_STATUS VARCHAR2(1)
MEAS_COMMENT VARCHAR2(150)
MEAS_DIRECTION VARCHAR2(1)
MEAS_DIST_MODE VARCHAR2(2)
MEAS_SPAT_ID NOT NULL NUMBER(7)
MEAS_INST_ID NUMBER(7)
MEAS_DECALAGE NUMBER(8,5)
MEAS_INST_HEIGHT NUMBER(8,5)
MEAS_READING NOT NULL NUMBER(11,5)
MEAS_CORRECT_READING NUMBER(11,5)
MEAS_HUMID_TEMP NUMBER(4,1)
MEAS_DRY_TEMP NUMBER(4,1)
MEAS_PRESSURE NUMBER(4)
MEAS_HUMIDITY NUMBER(2)
MEAS_CONSTANT NUMBER(8,5)
MEAS_ROLE VARCHAR2(1)
MEAS_INSERT_DATE DATE
MEAS_LAST_UPDATE DATE
MEAS_INSERT_BY VARCHAR2(50)
MEAS_UPDATE_BY VARCHAR2(50)
MEAS_TILT_MODE VARCHAR2(4000)
- Explain plan (I'm not familiar with explain plans...):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | 19 | 256 (10)| 00:00:02 |
| 1 | SORT AGGREGATE | | 1 | 19 | | |
| 2 | NESTED LOOPS | | 75 | 1425 | 256 (10)| 00:00:02 |
|* 3 | TABLE ACCESS FULL | OPERATION_MEASUREMENTS | 75 | 600 | 90 (27)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| GEODETIC_MEASUREMENTS | 1 | 11 | 3 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | MEAS_PK_2 | 1 | | 2 (50)| 00:00:01 |
--------------------------------------------------------------------------------------------------------
How can I optimize this query?
Thanks.
Yann.
Looks like you are missing an FK-index on the middle table, for the FK going to OPERATIONS.
Currently this:
WHERE ogm_op_id = 0;
is computed via a full table scan followed by a filter operation. Assuming OGM_OP_ID is rather selective, an index on OGM_OP_ID could do the trick here. -
Dear Experts,
Not able to execute this simple query:
Select T1.JobID , T1.BudgetValue,T1.ActualValue FROM [dbo].[Enprise_JobCost_ActualBudgetView] T1 WHERE T1.TransType = '[%0]'
Regards
Hello,
View - A view, in simple terms, is a virtual table defined as a subset of one or more tables. It can be used to retrieve data from the tables, or to insert, update, or delete from them. The results of using a view are not permanently stored in the database.
Stored Procedure - A stored procedure is a group of SQL statements which can be stored in the database and shared over the network with different users.
http://www.geekinterview.com/question_details/65914
Better make a UDT for your requirement.
Thanks
Manvendra Singh Niranjan -
consider this situation,
Two or more ProductIDs will be acquired by the same CustomerID, by the same ShipVia, on the same day of the week of the shipped date. I want a simple query for this.
my tables are from northwind:
[orders] = OrderID, CustomerID, EmployeeID, OrderDate, RequiredDate, ShippedDate, ShipVia, Freight, ShipName, ShipAddress, ShipCity, ShipRegion, ShipPostalCode, ShipCountry.
[orders details] = OrderID, ProductID, UnitPrice, Quantity, Discount.
I tried the query below, but it is not exact; it gives the wrong result.
select pd.CustomerID,pd.ProductID, pd.no_of_time_purchased, sd.ShipVia, sd.same_ship_count, shipped_day from
(select ProductID,o.CustomerID,COUNT(productid) as no_of_time_purchased
from orders o join [Order Details] od on o.OrderID=od.OrderID group by ProductID,o.CustomerID
having count(od.ProductID) >1) pd
join
(select OrderID,customerid, shipvia, count(shipvia)as same_ship_count, DATENAME(DW,ShippedDate)as shipped_day from orders
group by customerid, ShipVia, ShippedDate having COUNT(ShipVia) > 1 ) sd
on sd.CustomerID=pd.CustomerID
Hi,
I think I have a solution that will at least give you a clue how to go about it. I have simplified the tables you mentioned and created them as temporary tables on my side, with some fake data to test with. I have included the generation of these temporary tables for your review.
In my example I have included:
1. A customer which has purchased the same product on the same day, using the same ship 3 times,
2. Another example the same as the first but the third purchase was on a different day
3. Another example the same as the first but the third purchase was a different product
4. Another example the same as the first but the third purchase was using a different "ShipVia".
You should be able to see that by grouping on all of the columns that you wish to return, you should not need to perform any subselects.
Please let me know if I have missed any requirements.
Hope this helps:
CREATE TABLE #ORDERS
(
OrderID INT,
CustomerID INT,
OrderDate DATETIME,
ShipVia VARCHAR(5)
)
CREATE TABLE #ORDERS_DETAILS
(
OrderID INT,
ProductID INT
)
INSERT INTO #ORDERS
VALUES
(1, 1, GETDATE(), 'ABC'),
(2, 1, GETDATE(), 'ABC'),
(3, 1, GETDATE(), 'ABC'),
(4, 2, GETDATE() - 4, 'DEF'),
(5, 2, GETDATE() - 4, 'DEF'),
(6, 2, GETDATE() - 5, 'DEF'),
(7, 3, GETDATE() - 10, 'GHI'),
(8, 3, GETDATE() - 10, 'GHI'),
(9, 3, GETDATE() - 10, 'GHI'),
(10, 4, GETDATE() - 10, 'JKL'),
(11, 4, GETDATE() - 10, 'JKL'),
(12, 4, GETDATE() - 10, 'MNO')
INSERT INTO #ORDERS_DETAILS
VALUES
(1, 1),
(2, 1),
(3, 1),
(4, 2),
(5, 2),
(6, 2),
(7, 3),
(8, 3),
(9, 4),
(10, 5),
(11, 5),
(12, 5)
SELECT * FROM #ORDERS
SELECT * FROM #ORDERS_DETAILS
SELECT
O.CustomerID,
OD.ProductID,
O.ShipVia,
COUNT(O.ShipVia),
DATENAME(DW, O.OrderDate) AS [Shipped Day]
FROM #ORDERS O
JOIN #ORDERS_DETAILS OD ON O.orderID = OD.OrderID
GROUP BY OD.ProductID, O.CustomerID, O.ShipVia, DATENAME(DW, O.OrderDate) HAVING COUNT(OD.ProductID) > 1
DROP TABLE #ORDERS
DROP TABLE #ORDERS_DETAILS -
Can anyone tell me how I can optimize this query...
Can anyone tell me how I can optimize this query:
Select Distinct eopersona.numident From rscompeten , rscompet , rscv , eopersona , rscurso , rseduca , rsexplab , rsinteres
Where ( ( LOWER(rscompeten.nombre) LIKE '%caracas%' AND ( rscompeten.id = rscompet.idcompeten ) AND ( rscv.id = rscompet.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rscurso.nombre) LIKE '%caracas%' AND ( rscv.id = rscurso.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rscurso.lugar) LIKE '%caracas%' AND ( rscv.id = rscurso.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rseduca.univinst) LIKE '%caracas%' AND ( rscv.id = rseduca.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rsexplab.nombempre) LIKE '%caracas%' AND ( rscv.id = rsexplab.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rsinteres.descrip) LIKE '%caracas%' AND ( rscv.id = rsinteres.idcv ) AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(rscv.cargoasp) LIKE '%caracas%' AND ( eopersona.id = rscv.idpersona ) )
OR ( LOWER(eopersona.ciudad) LIKE '%caracas%' AND ( eopersona.id = rscv.idpersona ) ) )
PLEASE, IF YOU FIND SOMETHING WRONG, HELP ME. This query takes approximately 10 minutes, and the database is really small (only 200 records in each table).
You are querying eight tables, but each of your OR predicates only restricts 3 or 4 of them. That means the remaining 4 or 5 tables generate Cartesian products. (N.B. the Cartesian product of 5 tables with 200 rows each yields 200^5 = 320,000,000,000 rows.) Then you casually hide this behind DISTINCT.
A simple restatement of your requirements looks like this:
Select eopersona.numident
From rscompeten,
rscompet,
rscv,
eopersona
Where LOWER (rscompeten.nombre) LIKE '%caracas%'
AND rscompeten.id = rscompet.idcompeten
AND rscv.id = rscompet.idcv
AND eopersona.id = rscv.idpersona
UNION
Select eopersona.numident
From rscurso ,
rscv,
eopersona
Where LOWER (rscurso.nombre) LIKE '%caracas%'
AND rscv.id = rscurso.idcv
AND eopersona.id = rscv.idpersona
UNION
Select eopersona.numident
From rscurso ,
rscv,
eopersona
Where LOWER (rscurso.lugar) LIKE '%caracas%'
AND rscv.id = rscurso.idcv
AND eopersona.id = rscv.idpersona
UNION
...
From there you can eliminate redundancies as desired, but I imagine that the above will perform admirably with the data volumes you describe. -
Trying to optimize the following update query
I am trying to update a table based on the values of another table, in the following manner. I have two tables, A and B. Say A has 300,000 records and B has 100,000, a subset of the 300,000 in A; every record in B also exists in A. Both tables have two columns of interest (status and my_status). Currently, for all the records in A, these two values are 0 and null respectively. In B, the same records have different values for the two columns. Table A needs to be updated with the values currently in B. I have the following query, which I am hesitant to use since the explain plan shows a very high cost and a full table scan of B.
update A a
set (status,my_status) = (select b.status, b.my_status from B b, A a where a.id = b.id)
where a.date >= '01-JAN-2003' and a.cd = 'FD'.
As the above query shows, the where condition in the outer part (where a.date >= '01-JAN-2003' and a.cd = 'FD') ensures that only those records present in B are updated in A. Is there any way to join at the outer part where I can just specify A.id = B.id rather than having two conditions? Or is there any other route that would optimize this sort of a query?
Hi,
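One other route worth sketching is MERGE, which expresses the whole update in a single statement and lets the optimizer choose a join method instead of running a subquery per row. This sketch reuses the table and column names from your post as-is:

```sql
MERGE INTO a
USING b
ON (a.id = b.id)
WHEN MATCHED THEN UPDATE
  SET a.status    = b.status,
      a.my_status = b.my_status
  WHERE a.date >= DATE '2003-01-01'
    AND a.cd = 'FD';
```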
Be sure to put unique constraints on A.ID and B.ID before running the UPDATE statement, like this:
ALTER TABLE a ADD CONSTRAINT a_uk1 UNIQUE (id);
ALTER TABLE b ADD CONSTRAINT b_uk1 UNIQUE (id);
Then, remember to gather stats, like so:
BEGIN
FOR x IN (SELECT table_name
FROM user_tables
WHERE table_name IN ('A', 'B')) LOOP
DBMS_STATS.gather_table_stats (ownname => USER
, tabname => x.table_name
, partname => NULL
, estimate_percent => DBMS_STATS.auto_sample_size
, block_sample => FALSE
, method_opt => 'FOR ALL INDEXED COLUMNS SIZE 254'
, degree => NULL
, granularity => 'ALL'
, cascade => TRUE
, no_invalidate => FALSE
);
END LOOP;
END;
/
Good luck! -
Error when trying to use this query in report region
Hi ,
I am getting "1 error has occurred
Query cannot be parsed within the Builder. If you believe your query is syntactically correct, check the ''generic columns'' checkbox below the region source to proceed without parsing. ORA-00933: SQL command not properly ended"
while trying to use this query in reports region .
Pls help.
Thanks ,
Madhuri
declare
x varchar2(32000);
begin
x := q'!select (first_name||' '|| last_name)a ,
count(distinct(session_id)),manager_name
from cappap_log,
MIS_CDR_HR_EMPLOYEES_MV
where DECODE(instr(upper(userid),'@ORACLE.COM',1),0,upper(userid)||'@ORACLE.COM',upper(userid)) = upper(email_address)!';
if :P1_ALL = 'N' then
x:= x||q'!and initcap(first_name ||' '|| last_name)=:P1_USERNAME!';
else
x:= x||q'!and initcap(first_name ||' '|| last_name)like '%'|| :P1_USERNAME||'%'!';
end if;
if :P1_APP_NAME = '%' then
x:= x||q'! and flow_id like '%'!';
else
x:= x||' and flow_id = :P1_APP_NAME';
end if;
x:= x||q'! group by first_name||' '|| last_name , manager_name!';
return x;
end;
Hi, I am actually stuck here. Can you please let me know which among these is the higher version?
1) Final Release 3.50
Version 3500.3.016
2) Final Release 3.50
Version (Revision 481)
Because it is working fine in the 1st one, whereas it's throwing that error pop-up in the 2nd one (as soon as we select the Change query global definition option). -
How to optimize this select statement? It's a simple select...
How can I optimize this select statement? The table has about 1 million records,
and this simple select statement takes a lot of time and does not finish.
SELECT guid
stcts
INTO TABLE gt_corcts
FROM /sapsll/corcts
FOR ALL ENTRIES IN gt_mege
WHERE /sapsll/corcts~stcts = gt_mege-ctsex
AND /sapsll/corcts~guid_pobj = gt_sagmeld-guid_pobj.
regards
Arora
Hi Arora,
Using PACKAGE SIZE is very simple, and you can avoid the timeout as well as the memory problem. Sometimes, if you have too many records in the internal table, you will get a short dump called TSV_TNEW_PAGE_ALLOC_FAILED.
Below is the sample code.
DATA p_size TYPE i VALUE 50000.
SELECT field1 field2 field3
INTO TABLE itab1 PACKAGE SIZE p_size
FROM dtab
WHERE <condition>.
* Other logic or process on the internal table itab1
FREE itab1.
ENDSELECT.
Here the only problem is you have to put the ENDSELECT.
How it works:
In the first select it will select 50000 records ( or the p_size you gave). That will be in the internal table itab1.
In the second pass it will clear the 50000 records already there and append the next 50000 records from the database table.
So care should be taken to do all the logic and processing within the SELECT ... ENDSELECT.
Some ABAP standards may not allow you to use select-endselect. But this is the best way to handle huge data without short dumps and memory related problems.
I am using this approach. My data is much bigger than yours: on average, at least 5 million records per select.
Good luck and hope this help you.
Regards,
Kasthuri Rangan Srinivasan -
Hi
I am trying to write a SQL query that will return the date a timesheet was submitted and the date/time it was approved; can anyone guide me on this?
I basically need the name of the person who submitted, the date/time it was submitted for approval, then the person who approved it and the date/time it was approved.
Thanks
Ruby
Ruby,
you can start with the HXC_TIMECARD_SUMMARY table for submitter details. But for approver details, I think you need the WF tables to get the data for item type HXCEMP.