How to run query by quarter
Hi all,
I would like to run this query based on the user's selection from the interface. For example, if the user selects year 2011 and quarter 1, then this code will be like this:
AND (B.DATELET >= TO_DATE ('2011-01-01', 'YYYY-MM-DD')
AND B.DATELET <= TO_DATE ('2011-03-31', 'YYYY-MM-DD'))
When the user selects year 2011 and quarter 2, then this code will be like this:
AND (B.DATELET >= TO_DATE ('2011-04-01', 'YYYY-MM-DD')
AND B.DATELET <= TO_DATE ('2011-06-30', 'YYYY-MM-DD'))
Is it possible to change my code here, or do I need to code this in PL/SQL? Any clue will be appreciated.
SELECT DISTINCT
COUNT (P.CPROJNUM) ProjectAwarded,
SUM (MIN (c.calcbtot)) AwardedDollarAmount,
SUM (COUNT (C.VENDOR)) NumberOfBidders,
(SUM (COUNT (C.VENDOR)) / (COUNT (P.CPROJNUM))) AverageNumberOfBidder
FROM BIDDERS C,
BIDLET B,
LETPROP L,
PROPOSAL P
WHERE C.LETTING = B.LETTING
AND P.CONTID = L.LCONTID
AND C.LETTING = L.LETTING
AND C.CALL = L.CALL
AND l.lcontid IN
(SELECT lcontid
FROM letprop c, PROPOSAL d
WHERE datestat IS NOT NULL
AND UPPER (letstat) <> 'R'
AND UPPER (letstat) <> 'B'
AND TRIM (UPPER (TIMELET)) = TRIM ('9:30 A.M.')
AND c.LCONTID = d.CONTID)
AND (B.DATELET >= TO_DATE ('2011-04-01', 'YYYY-MM-DD')
AND B.DATELET <= TO_DATE ('2011-06-30', 'YYYY-MM-DD'))
GROUP BY l.call
Hi,
If :p_year and :p_quarter are NUMBER variables, containing the year and quarter you want, then you can do something like this:
WITH got_report_start_date AS
(
SELECT ADD_MONTHS ( DATE '2000-01-01'
, (12 * (:p_year - 2000)) +
( 3 * (:p_quarter - 1))
) AS report_start_date
FROM dual
) -- Everything above is NEW
SELECT DISTINCT
COUNT (P.CPROJNUM) ProjectAwarded,
SUM (MIN (c.calcbtot)) AwardedDollarAmount,
SUM (COUNT (C.VENDOR)) NumberOfBidders,
(SUM (COUNT (C.VENDOR)) / (COUNT (P.CPROJNUM))) AverageNumberOfBidder
FROM BIDDERS C,
BIDLET B,
LETPROP L,
PROPOSAL P
, got_report_start_date r -- NEW
WHERE C.LETTING = B.LETTING
AND P.CONTID = L.LCONTID
AND C.LETTING = L.LETTING
AND C.CALL = L.CALL
AND l.lcontid IN
(SELECT lcontid
FROM letprop c, PROPOSAL d
WHERE datestat IS NOT NULL
AND UPPER (letstat) <> 'R'
AND UPPER (letstat) <> 'B'
AND TRIM (UPPER (TIMELET)) = TRIM ('9:30 A.M.')
AND c.LCONTID = d.CONTID)
AND B.DATELET >= r.report_start_date -- CHANGED
AND B.DATELET < ADD_MONTHS (r.report_start_date, 3) -- CHANGED
GROUP BY l.call
Look for comments that say NEW or CHANGED; the rest is just what you posted.
Note that you should use < (not <=) when testing the end of the date range, and the date after the < sign is the first date NOT to be included in the results.
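If you want to sanity-check that ADD_MONTHS arithmetic outside the database, here is a small Python sketch (the function name quarter_range is mine, not part of the query) that mirrors the same computation, returning the quarter start and the first date NOT to be included:

```python
from datetime import date

def quarter_range(year, quarter):
    """Mirror ADD_MONTHS(DATE '2000-01-01', 12*(year-2000) + 3*(quarter-1)):
    return (start_date, end_date_exclusive) for a calendar quarter."""
    start_month = 3 * (quarter - 1) + 1      # 1, 4, 7 or 10
    start = date(year, start_month, 1)
    end_month = start_month + 3              # first month of the NEXT quarter
    if end_month > 12:
        end = date(year + 1, end_month - 12, 1)
    else:
        end = date(year, end_month, 1)
    return start, end

start, end = quarter_range(2011, 2)
print(start, end)  # 2011-04-01 2011-07-01
```

Using the exclusive end date avoids the "last day of the quarter" arithmetic entirely, which is exactly what the < comparison in the SQL does.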
Similar Messages
-
How to run query in parallel to improve performance
I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables, so even though my query is the same, I need to run it on 12 tables instead of 1. I want to run all 12 queries in parallel instead of one by one, collapse the datasets returned, and return the result to the caller. How can I do this in ALDSP?
To be specific, I will call below operation to get data:
declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)*
src0:SOA_1MIN_POOL_METRIC(),
src1:SOA_1MIN_POOL_METRIC(),
src2:SOA_1MIN_POOL_METRIC(),
src3:SOA_1MIN_POOL_METRIC(),
src4:SOA_1MIN_POOL_METRIC(),
src5:SOA_1MIN_POOL_METRIC(),
src6:SOA_1MIN_POOL_METRIC(),
src7:SOA_1MIN_POOL_METRIC(),
src8:SOA_1MIN_POOL_METRIC(),
src9:SOA_1MIN_POOL_METRIC(),
src10:SOA_1MIN_POOL_METRIC(),
src11:SOA_1MIN_POOL_METRIC()
This method acts as a proxy; it aggregates data from 12 data tables:
src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
The data source of each table is different (src0, src1, etc.). How can I run these queries in parallel to improve performance? Thanks, Mike.
The async function works; from the log, I can see the queries are executed in parallel.
But the behavior is confusing: with the same input, sometimes it gives me the right result, and sometimes (especially when there are a few other applications running on the machine) it throws the exception below:
java.lang.IllegalStateException
at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
at RDSWS.getMetricAggNumber(RDSWS.jws:240)
at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
Below is my code example. First I get data from all 12 queries, each query enclosed in the fn-bea:async function; finally, I do a GROUP BY aggregation over the whole data set. Is it possible that the exception occurs because some threads have not returned data yet, but the aggregation has already started?
The metricName, serviceName, opName, and $soaDbRequest are simply passed from operation parameters.
let $METRIC_RESULT :=
fn-bea:async(
for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
where
$SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
and $SOA_POOL_METRIC/CAL_CUBE_ID ge fn-bea:fence($soaDbRequest/ns16:StartTime)
and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
return
$SOA_POOL_METRIC
),
fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
where
$SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
and $SOA_POOL_METRIC/CAL_CUBE_ID ge fn-bea:fence($soaDbRequest/ns16:StartTime)
and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
return
$SOA_POOL_METRIC
... //12 similar queries
for $Metric_data in $METRIC_RESULT
group $Metric_data as $Metric_data_Group
by $Metric_data/ROLE_TYPE as $role_type_id
return
<ns0:RawMetric>
<ns0:endTime?></ns0:endTime>
<ns0:target?>{$role_type_id}</ns0:target>
<ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
<ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
<ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
<ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
</ns0:RawMetric>
Could you tell me why the result is unstable? Thanks! -
Hi, I have a complex SELECT query with aggregate functions. The results of the query update two tables. This query is causing very high resource consumption.
[select campaignid, adgroupid, advid, count(*), sum(cost) as 'cost',
sum(billed) as 'billed', sum(unbilled) as 'unbilled',
avg(avgpos) as 'avgpos'
from (select campaignid, adgroupid, advid, count(*), sum(cost) as 'cost',
sum(billed) as 'billed', sum(unbilled) as 'unbilled',
avg(avgpos) as 'avgpos'
from test1
group by 1,2,3
union all
select campaignid, adgroupid, advid, count(*), sum(cost) as 'cost',
sum(billed) as 'billed', sum(unbilled) as 'unbilled',
avg(avgpos) as 'avgpos'
from test2
group by 1,2,3
what is wrong with it?
What exactly is your question? Is it related to SQL Developer?
The SQL you attached has a few syntax problems, so it wouldn't run for me even if I had your tables and data. The syntax is at least fixed in this version, if that is the problem:
select campaignid,
adgroupid,
advid,
cnt,
cost,
billed,
unbilled,
avgpos
from (select campaignid,
adgroupid,
advid,
count (*) cnt,
sum (cost) cost,
sum (billed) billed,
sum (unbilled) unbilled,
avg (avgpos) avgpos
from test1
group by campaignid, adgroupid, advid
union all
select campaignid,
adgroupid,
advid,
count (*) cnt,
sum (cost) cost,
sum (billed) billed,
sum (unbilled) unbilled,
avg (avgpos) avgpos
from test2
group by campaignid, adgroupid, advid); -
Dear Expert,
It is easy to write a query in SAP, but I have to run many queries at the same time. Is it possible?
eg
select name,sal from emp
select sum(sal) from emp
I want two different outputs in the same window. Is it possible with the query generator?
Thanks
Hi Kevin,
I believe that your purpose is to get the total amount of the field Sal at the end of your query. If that's the case, then you can simply CTRL+click the column title and the total will be displayed in the bottom bar.
Anyway, if you still need it as the last row, then you can use UNION like:
select name,sal from emp
union all
select '' as name, sum(sal) as sal from emp
Hope it helps.
Cheers,
Marini -
Once the aggregated cube how to run the query
Hai,
I have a cube with a lot of data, so I used aggregation.
After that, how do I run a query from the aggregated cube? Whenever I go to RRMX, it shows the cube as not aggregated.
Once the cube is aggregated, where is it stored?
Please let me know.
InfoCube aggregates are <b>separate database tables</b>.
Aggregates are more summarized versions of the base InfoCube. There is an aggregate fact table, e.g. /BIC/E1##### ===> /BIC/E100027. If you don't automatically compress your aggregates, there would also be an F fact table, /BIC/F100027.
There are aggregate dimension tables that are also created, e.g. /BIC/D1000271. If a dimension for the aggregate is the same as the base InfoCube, then there is no aggregate dimension table for that dimension and the queries will use that dimension table from the base cube.
As long as the aggregate is active, BW will automatically use it instead of the base cube, as long as the aggregate contains all the characteristics necessary to satisfy the query.
You can verify the aggregate's usage by looking at info in table RSDDSTAT - it will show the Aggregate number if used (will not show aggregate usage for queries on a MultiProvider if you are on a more recent Svc Pack).
You can also run the query thru RSRT, using the Exec & Debug option - check the "Display aggregate found" option and it will display what aggregate(s) it found and which one(s) it used. -
Hello Gurus,
Could you send me how to run a query in the background? There is a report which uses 2-3 years of old data and takes 4-5 hrs to run, so we decided to run this query as soon as the ODS is loaded with data every day. This will solve the cache issue, as the query will already exist in the cache.
I know we can do it with broadcaster or running webtemplates..... but step by step process will be a great help.
Thanks a lot.
Chris
Dear, follow this link:
http://help.sap.com/saphelp_nw2004s/helpdata/en/a4/1be541f321c717e10000000a155106/frameset.htm
Regards
venu -
How to change the explain plan for currently running query?
Hi All,
I am using Oracle enterprise 9i edition. I have a query which is framed dynamically and is running in the database. I noticed a table with 31,147,758 rows in the query which has no indexes and is taking more time to process. I tried to create an index on that table. I know the query is already running with a FULL table scan. Is it possible to change the explain plan for the currently running query to consider the INDEX?
[code]
SELECT /*+ USE_HASH (c,e,b,a) */
d.att_fcc extrt_prod_dim_id,
d.att_fcc compr_prod_dim_id,
a.glbl_uniq_id glbl_uniq_id,
to_date(c.dit_code,'RRRRMMDD')STRT_DT,
(to_date(c.dit_code,'RRRRMMDD')+150)END_DT,
a.pat_nbr pat_id,
a.rxer_id rxer_id,
e.rxer_geog_id rxer_geog_id,
a.pharmy_id pharmy_id,
a.pscr_pack_id pscr_pack_id,
a.dspnsd_pack_id dspnsd_pack_id,
DENSE_RANK () OVER (PARTITION BY a.pat_nbr ORDER BY c.dit_code) daterank,
COUNT( DISTINCT d.att_fcc ) OVER (PARTITION BY a.pat_nbr, c.dit_code) event_cnt,
DENSE_RANK () OVER (PARTITION BY a.pat_nbr,
d.att_fcc
ORDER BY c.dit_code) prodrank,
DENSE_RANK () OVER (PARTITION BY a.pat_nbr,
d.att_fcc
ORDER BY c.dit_code DESC) stoprank
FROM
pd_dimitems c,
pd_pack_attribs d ,
lrx_tmp_rxer_geog e,
lrx_tmp_pat_daterank p,
lrx_tmp_valid_fact_link a
WHERE c.dit_id = a.tm_id
AND e.rxer_id = a.rxer_id
AND a.glbl_uniq_id = p.glbl_uniq_id
AND p.daterank > 1
AND a.pscr_pack_id = d.att_dit_id
[/code]
The table lrx_tmp_pat_daterank has those 31,147,758 rows. So I am wondering how to make the query use the newly created index on the table?
Why do you think using indexes will improve the performance of the query? How many rows is this query returning? The optimizer might choose a full table scan when it finds that an index plan would not be useful. Why are you using the /*+ USE_HASH (c,e,b,a) */ hint? This hint will force Oracle to use a full table scan instead of the index. Try removing it and see if the plan changes.
Regards, -
How to run the query in screen painter
I am using patch 36 in Business One, so please give information about how to run the query in Screen Painter.
Regards,
Sandip Adhav
Hope you have reached the Screen Painter interface:
1. Click 'Add Grid' on the tool bar.
2. Go to the 'Collections' tab in the 'Properties' window.
3. Choose 'Data Tables' from the drop-down list.
4. Click 'New', found at the bottom of the Properties window (same window).
5. You'll find the place to insert your query.
6. You can rename the table from 'DT_0'.
7. Choose the type as 'Query'.
8. Clear the content from the 'Query' box.
9. Enter your query there. Don't forget to click 'SET'.
10. Go to the Preview option on the tool bar.
Now your query will be displayed in table format.
Note: First try with a simple query before going for the linking option.
Regards,
Dhana. -
How to run a search query for a particular folder in KM related to portal
Hi,
Can anyone tell me the steps for how to run a search query for a particular folder in Knowledge Management related to the portal?
Answers will be rewarded.
Thanks in advance.
KN
Edited by: KN on Mar 18, 2008 6:33 AM
OK, you may not require coding,
but you do require configuration.
You should first make a Search Option Set.
Link: [Search Option set|http://help.sap.com/saphelp_nw04/helpdata/en/cc/f4e77ddef1244380b06fee5f8b892a/frameset.htm]
Then you need to duplicate the KM command named 'Search From Here'
and customize it to include the Search Option Set you have created.
Link: [Search from here|http://help.sap.com/saphelp_nw04/helpdata/en/2a/4ff640365d8566e10000000a1550b0/frameset.htm]
Then add this command to the layout.
Regards
BP -
How to run an update query on results from a spreadsheet
Hey there,
I am new to this kinda thing.
I received a spreadsheet that has 2 tabs: one called SQL Results, which has a ton of data, and one called SQL Statement, which has a SELECT statement in the first cell.
I was told to run an update query using the spreadsheet, and was given this:
= CONCATENATE("Update CARDMEMBERISSUE set CURRSTATUSCD = 'ACT', DATELASTMAINT = sysdate where AGREENBR = ",A2," and MEMBERNBR = ",B2," and ISSUENBR = ",C2,";")
= CONCATENATE("Insert into CARDMEMBERISSUEHIST (AGREENBR, MEMBERNBR, ISSUENBR, EFFDATETIME, CARDSTATCD, STATREASON, DATELASTMAINT, DATESENT) values (",A2,",",B2,",",C2,",sysdate,'ACT',null,sysdate,null);")
I am not sure what to do or how to run this.
This is the what the lines in the spreadsheet look like, including column header, A1 is blank.
A B C etc
AGREE NBR MEMBERNBR ISSUE NBR CURRSTATUSCD PREFIX CARD NBR AGREETYPCD OWNER PERS NBR EXT CARD ISSUE DATE
2 12 1 44 ISS g 22 22 19/10/2011
The =CONCATENATE bits are Excel formulae. Assuming they are correctly written, they will generate a set of individual SQL statements. The first CONCATENATE will generate a set of UPDATE statements against the CardMemberIssue table, and the second will generate a set of INSERT statements for the CardMemberIssueHist table.
You should be able to just paste the generated statements into whatever tool you are using to run sql to execute them. Before you do that though, make sure that you issue:
alter session set cursor_sharing=force;
before pasting anything in.
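If the spreadsheet route feels fragile, the same statement generation can be scripted. Below is a rough Python sketch under the assumption that each row carries the three key columns from the spreadsheet; the helper name make_statements is mine:

```python
def make_statements(row):
    """Build the UPDATE and INSERT statements the Excel formulae produce,
    from a dict with AGREENBR, MEMBERNBR and ISSUENBR keys."""
    upd = ("Update CARDMEMBERISSUE set CURRSTATUSCD = 'ACT', "
           "DATELASTMAINT = sysdate "
           f"where AGREENBR = {row['AGREENBR']} "
           f"and MEMBERNBR = {row['MEMBERNBR']} "
           f"and ISSUENBR = {row['ISSUENBR']};")
    ins = ("Insert into CARDMEMBERISSUEHIST (AGREENBR, MEMBERNBR, ISSUENBR, "
           "EFFDATETIME, CARDSTATCD, STATREASON, DATELASTMAINT, DATESENT) "
           f"values ({row['AGREENBR']},{row['MEMBERNBR']},{row['ISSUENBR']},"
           "sysdate,'ACT',null,sysdate,null);")
    return upd, ins

# Sample row matching the spreadsheet columns A, B, C
upd, ins = make_statements({"AGREENBR": 2, "MEMBERNBR": 12, "ISSUENBR": 1})
print(upd)
print(ins)
```

In real use you would read the rows from the SQL Results tab (e.g. via the csv module) and, ideally, switch to bind variables instead of concatenated literals, which would also remove the need for cursor_sharing=force.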
john -
How to save a document programatically when u click on "Run Query" button
Hi all,
I am new to BO Web Intelligence java SDK. I am using Java Report Panel to edit and save the document.
(As far as I know, the default behavior of the Java Report Panel is that once you have edited the document query and run it, the report is saved into the repository only when you click the 'Save' icon.)
What my doubt is ?
Once I have edited the query in the Java query panel, when I click on the "Run Query" button the following operations need to be performed:
1) The document has to be saved in the specified repository folder.
2) The SQL for the document has to be captured.
How do I do the above operations programmatically? Can anybody help me, please? Thanks in advance.
Thanks & Regards,
Madan Kumar
You would have to modify the Java Report Panel to be able to do this. There is no documentation on how to make modifications to the Java Report Panel, and it is beyond the scope of what assistance support would be able to provide you. There may be people in the community who have done this before and can assist you.
-
How to measure query run time and monitor performance
Hai All,
A simple question: how do I measure query run time and monitor performance? I want to see parameters like how long the query took to execute, how much space it took, etc.
Thank you.
Hi,
Some ways:
1. Use transaction ST03, expert mode.
2. Tables RSDDSTAT*.
3. Install BW Statistics (technical content).
There are docs on this; also see the BI Knowledge Performance Center.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
Business Intelligence Performance Tuning [original link is broken]
also take a look
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/31b6b490-0201-0010-e4b6-a1523327025e
Prakash's weblog on this topic..
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
OSS notes:
557870 'FAQ BW Query Performance'
and 567746 'Composite note BW 3.x performance Query and Web'. -
How to get query execution time without running...?
Hi,
I have a requirement as follows: I have 3 SQL statements, and I need to execute only the one whose execution time is the least.
Can anyone help me: how do I get a query's execution time without running it and without using explain plan?
Thanks,
Rajesh
Kim Berg Hansen wrote:
But you have ruled out explain plan for some reason, so I cannot help you.
The OP might get some answers if the query has been executed before (though only since the last instance restart). Check the V$SQL dynamic performance view for SQL_TEXT = your query. Then ROUND(ELAPSED_TIME / EXECUTIONS / 1000000) will give you the average elapsed time.
SY.
Edited by: Solomon Yakobson on Apr 3, 2012 8:44 AM -
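To make the arithmetic in that expression explicit (assuming ELAPSED_TIME is reported in microseconds, as it is in V$SQL), here is a trivial Python sketch; the function name is mine:

```python
def avg_elapsed_seconds(elapsed_time_us, executions):
    """ROUND(ELAPSED_TIME / EXECUTIONS / 1000000): average elapsed
    seconds per execution for a cursor in V$SQL."""
    if executions == 0:
        return None  # parsed but never executed; avoid division by zero
    return round(elapsed_time_us / executions / 1_000_000)

print(avg_elapsed_seconds(9_000_000, 3))  # 3
```

Guarding against EXECUTIONS = 0 matters in the real query too, since cursors that were parsed but never run would otherwise raise a divide-by-zero error.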
How fast a query can run on dedicated server?
I have a server on which we have installed several clients' database instances. Recently there were several performance issues for one of the clients while running a few reports, at which time I heard that it was because their instance uses a shared server, and that if we make it dedicated the performance will improve. How do you find out whether an instance running on a particular box uses a dedicated server connection or a shared server connection? What is the definition of those in simple layman's terms? How do you change a shared connection to a dedicated connection and vice versa? Does it depend on memory/CPU size, etc.? Does changing to a dedicated server help performance, and if so, by how much? Any help is highly appreciated.
user5846372 wrote:
I have a server on which we have installed several clients' database instances. Recently there were several performance issues for one of the clients while running a few reports, at which time I heard that it was because their instance uses a shared server and that if we make it dedicated the performance will improve.
I start to reach for the old lead pipe when I hear stuff like that; it is seemingly based on hearsay and rumour, and not a single shred of evidence is provided.
I will tell "+those people+" who claim your performance problems are due to shared server to put up or shut up.
There is very little performance difference (in the vast majority of cases) between shared and dedicated server. The biggest difference is that with a shared server you talk to the process servicing your SQL via a dispatcher process and then via a virtual circuit. With a dedicated server, you talk directly to the process servicing you.
Both shared and dedicated server run the SAME CBO code, the SAME SQL engine... so how can a query be faster in one and not the other? And if that were the case, if there were a performance difference, surely Oracle would recommend not using the "bad" one? Or even better, Oracle would discontinue and deprecate the "bad" one?
There is no difference in whether a query is executed by shared or dedicated server. The same CBO does the same execution plan and creates the same cursor. What is different is how the communication and interaction works with the client session/process. What is different is where UGA (User Global Area) memory resides. In other words, very technical execution environment differences - not differences to HOW the query is parsed and HOW the query is executed. Which then begs the question how can the query be faster in one and not the other?
But shared server and dedicated server aside: what is the "+First and Fundamental Rule Of Software Engineering+"?
It is.. "{color:blue}*UNDERSTAND THE PROBLEM*{color}"
If you do not know WHAT the problem is, WHY the problem is, how on earth can you hope to address the problem and resolve it?
Thus my "unhappiness" of those that make performance resolution claims by sucking their thumbs, repeating what they may have read somewhere, and not back it up with evidence and fact - like those who told you that your problem is using shared server.
What can be a problem is that you may not have enough dispatchers. Or insufficient shared servers in the shared server pool. Or being too tight with an SGA that now also needs to cater to the shared server sessions' UGA memory. Or shared servers being used (incorrectly and explicitly by client software) for running long and slow queries (thus tying up that shared server servicing a single session, making it non-sharable and unable to service others). Etc. Etc.
There's a whole wackload of potential issues... and many of them are solved in different ways. Simply swapping shared server for dedicated servers and expecting a performance improvement, as those hearsayers you deal with are claiming...? That is not just ignorance; it borders dangerously on stupidity. (Dedicated servers can also kill performance when incorrectly used.)
How to run two query in the same preparedStatement
Hi all,
Please tell me how to run two queries with the same PreparedStatement object.
In my module I have to run two queries, and I don't want to write two methods for running two different queries.
I just want to call this method for both of my queries.
methodName(long param)
Connection conn=null;
PreparedStatement pstmt=null;
//////////////// some coding
Please Help !
Thanks in advance
Amitindia
public void foo() {
    Connection connection = null;
    PreparedStatement stmt = null;
    try {
        connection = ...get from pool...;
        stmt = connection.prepareStatement("...");
        ...fetch results...;
        stmt.close();
        stmt = null; // So the finally statement works if there is an exception
        stmt = connection.prepareStatement("...");
        ...fetch results...;
    } finally {
        if (stmt != null) ...close it...;
        if (connection != null) ...return it to pool...;
    }
}
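For what it's worth, the same idea, one method that executes whichever statement it is given, can be shown runnable in Python with sqlite3 (the table and data here are made up for the demo):

```python
import sqlite3

def run_query(conn, sql, params=()):
    """Run any parameterized statement and return all rows,
    closing the cursor whether or not the fetch succeeds."""
    cur = conn.cursor()
    try:
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        cur.close()

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (name text, sal integer)")
conn.executemany("insert into emp values (?, ?)",
                 [("a", 100), ("b", 200)])

# Two different queries through the same method
rows = run_query(conn, "select name, sal from emp where sal > ?", (150,))
total = run_query(conn, "select sum(sal) from emp")
print(rows, total)  # [('b', 200)] [(300,)]
```

The point is the same as in the Java sketch: the method takes the SQL as a parameter, so two different queries reuse one code path.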