Poor response time - totalling a column over n rows
I have a table with a cost column on my custom OAF page.
When I query for a configuration it returns many rows, so I have set the table's default rows to 10, and a Next button lets me page through the rest.
I have enabled totalling on the cost column.
A Total row appears showing the sum of the costs for the current page only (the 10 visible rows). When I click Next from the drop-down, I see the total for the second page's costs.
Ex:
table has 17 rows and
page 1 :
total row at the end saying 1000.00
page 2 :
total = 1500.00
I want to display the total cost by summing the costs for all rows returned by the query (say 300 items) across all pages; in the above case, 2500.00.
I thought of a way to do it:
I added a sumVO with query "select sum(...) from table" .
Added a new region in my page , added a messageStyleText based on the sumVO, and pulled the total cost in.
It shows the right result, but my problem is performance.
It is very slow, even though I am using the same query as for the table, just summing the cost column.
Can I avoid writing the sum query and do it programmatically in OAF?
Thanks in advance.
Even if you use a programmatic approach, what do you think the program will do?
It has to fetch all the rows into the middle tier and sum them in a loop, which will not solve your problem.
First find the reason for the slow performance using the Trace option, and fix the query.
If you are not able to fix it, try a materialized view for the summation query.
To take a SQL trace for an OAF page, refer to this link: http://prasanna-adf.blogspot.com/2009/01/sql-trace.html
--Prasanna
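For reference, the single-aggregate-query approach the poster took is sound in itself: one SUM over the whole result set returns the grand total in a single round trip, independent of how the table region pages the rows. A minimal sketch of the idea, using sqlite3 as a stand-in for the Oracle schema (the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config_items (item_id INTEGER PRIMARY KEY, cost REAL)")
# 17 rows, as in the example above: two pages of 10 totalling 1000.00 and 1500.00
conn.executemany("INSERT INTO config_items (cost) VALUES (?)",
                 [(100.0,)] * 10 + [(200.0,)] * 6 + [(300.0,)])

# Per-page sums: what the table region's totalling row shows (10 rows at a time)
page1 = conn.execute(
    "SELECT SUM(cost) FROM (SELECT cost FROM config_items ORDER BY item_id LIMIT 10)"
).fetchone()[0]
page2 = conn.execute(
    "SELECT SUM(cost) FROM (SELECT cost FROM config_items ORDER BY item_id LIMIT 10 OFFSET 10)"
).fetchone()[0]

# Grand total: one aggregate query over the whole result set, regardless of paging
total = conn.execute("SELECT SUM(cost) FROM config_items").fetchone()[0]
print(page1, page2, total)  # 1000.0 1500.0 2500.0
```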
Similar Messages
-
The issue:
Two servers setup with a BO cluster (BoxA and BoxB).
Content switch setup to round-robin users to BoxA or BoxB. (E.g.: first
user request is sent to BoxA. Second user request is sent to BoxB, etc.).
Sticky bit is turned on.
When BoxA is running the CMS server, all requests sent to BoxA run
perfectly.
Requests sent to BoxB have poor response time. Our thought is that the
request gets sent to BoxB and then has to talk to BoxA for processing (since
this is the primary CMS server in charge). The communication between the
two machines is causing a significant lag time.
Please advise on the best way to setup the server and the content switch
to alleviate this issue.
Hi Ray,
can you please check the priority order of the NICs on both your clusters?
My Network Places->Properties->Advanced->Advanced Settings
Make sure that the NICs carrying the BOBJ cluster's network traffic get the highest priority. In your case, choose one subnet you want to route your traffic through and set the highest priority, on both nodes, for the NICs connected to the chosen subnet.
If you have to change the order of the NICs then please restart the BO services on both nodes if possible.
Regards,
Stratos
Edited by: Efstratios Karaivazoglou on Jun 9, 2009 7:20 PM -
How to determine count for the number of rows
I'd appreciate it if any of you could think of a way of determining the count of rows in the subquery without having to run another query.
SELECT *
FROM (SELECT ROWNUM rn, rlp_id, rlp_notes, cad_pid, status, jurisdiction_id, s.state_abbr, rlp_address, rlp_route_id, rlp_route_section, psma_version
      FROM ipod.relevant_land_parcels r, state s
      WHERE s.state_pid = r.state_pid(+) AND rlp_route_id = 'SM1' AND status = 'CURRENT')
WHERE rn > 200 AND rn < 216
And I want to import this into a .NET / C# environment.
Something like this?
SQL> select * from emp;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 17/12/1980 800,00 20
7499 ALLEN SALESMAN 7698 20/02/1981 1600,00 300,00 30
7521 WARD SALESMAN 7698 22/02/1981 1250,00 500,00 30
7566 JONES MANAGER 7839 02/04/1981 2975,00 20
7654 MARTIN SALESMAN 7698 28/09/1981 1250,00 1400,00 30
7698 BLAKE MANAGER 7839 01/05/1981 2850,00 30
7782 CLARK MANAGER 7839 09/06/1981 2450,00 10
7788 SCOTT ANALYST 7566 19/04/1987 3000,00 20
7839 KING PRESIDENT 17/11/1981 5000,00 10
7844 TURNER SALESMAN 7698 08/09/1981 1500,00 0,00 30
7876 ADAMS CLERK 7788 23/05/1987 1100,00 20
7900 JAMES CLERK 7698 03/12/1981 950,00 30
7902 FORD ANALYST 7566 03/12/1981 3000,00 20
7934 MILLER CLERK 7782 23/01/1982 1300,00 10
14 rows selected
SQL>
SQL> select max(rw) from
2 (
3 select empno , row_number () over (order by empno) rw from emp
4 where job='CLERK'
5 )
6 /
MAX(RW)
4
Greetings...
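An alternative to taking MAX over ROW_NUMBER is COUNT(*) OVER (), which attaches the total row count of the filtered set to every row, so the paged rows and the count come back in one statement. A sketch of the idea, with sqlite3 (3.25+ for window functions) standing in for Oracle and a toy table invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (rlp_id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO parcels (status) VALUES (?)",
                 [("CURRENT",)] * 25 + [("HISTORIC",)] * 5)

# COUNT(*) OVER () puts the total count of the filtered set on every row,
# so the same statement both pages (via rn) and reports the overall count.
rows = conn.execute("""
    SELECT rn, rlp_id, total
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY rlp_id) AS rn,
                 rlp_id,
                 COUNT(*) OVER () AS total
          FROM parcels
          WHERE status = 'CURRENT')
    WHERE rn > 10 AND rn <= 20
""").fetchall()
print(len(rows), rows[0][2])  # 10 25  (10 rows on this page, 25 matching in all)
```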
Sim -
Read Database Column According to Number of Rows
Hi guys,
I am currently stuck at the problem of retrieving data from my database (MS Access) according to the number of rows it has at any one time (without having to know how many rows there are going to be while programming this part).
Firstly, let me introduce how my program works. I am working on an automated food ordering system: after the customer has selected his/her food, information such as the food name, price and quantity is written to an MS Access table (e.g., a table named "Orderingtable"). In my case, each ordered food item occupies one row of the table; in other words, if one customer orders 3 different food items, 3 rows are added to the table.
I would then like to retrieve the "Quantity" value for each order from the database and sum the quantities to count the total number of orders in the table at any point in time. This sum will then be shown on the Front Panel to tell the customer how many pending orders are ahead of his. That way, he can back out if the number of orders ahead of him would make the wait too long.
However, I do not know how many rows "Orderingtable" will have at any one time, because it accumulates rows until it is explicitly cleared. So I cannot predict the number of rows when I program the part that sums the quantity of every row.
Is there a way to retrieve the "Quantity" column without knowing the number of rows in the database, so that I can count the total number of current orders just by summing the quantity of each row?
I do not wish to "hardcode" my program by limiting the table to, say, 50 rows at any one time.
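The total the poster wants can come from one aggregate query: SUM works over however many rows the table currently holds, so the program never needs to know the row count. A sketch with sqlite3 standing in for the Access table (column names follow the post, the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orderingtable (FoodName TEXT, FoodPrice REAL, Quantity INTEGER)")
# One row per ordered food item, as described in the post
conn.executemany("INSERT INTO Orderingtable VALUES (?, ?, ?)",
                 [("Noodles", 4.50, 2), ("Rice", 3.00, 1), ("Soup", 2.50, 3)])

# SUM aggregates over whatever rows exist right now; no row count is needed
pending = conn.execute("SELECT SUM(Quantity) FROM Orderingtable").fetchone()[0]
print(pending)  # 6
```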
Attached below is what my "Orderingtable" database table looks like; its "Quantity" column will be used to count the total number of orders shown on the Front Panel of my LabVIEW program.
I hope you guys are able to help me!
Thank you so much in advance.
Cheers,
Miu
Attachments:
database table.JPG 320 KB
Front Panel.JPG 78 KB
>>I just want to copy everything from 1 table to another
Absolutely. But as far as I know, there is no "move" command. So you will need to do this in two separate operations: first copy, and then delete.
To copy data: SELECT * INTO target_table FROM source_table
NOTE: By specifying * you are copying all the columns, and the columns must match up. If your source and target tables do not match, you'll need to replace * with a list of columns. See here for more info.
To delete the data in the original table: DELETE * FROM source_table
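The two statements above can be run back-to-back, and doing them inside one transaction also closes the window in which a concurrent insert could be lost between the copy and the delete. A minimal sketch using sqlite3 (whose dialect uses INSERT INTO ... SELECT rather than Access's SELECT ... INTO; table names are the placeholders from the reply):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_table (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE target_table (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO source_table (payload) VALUES ('a'), ('b'), ('c');
""")

with conn:  # one transaction: copy then delete, committed atomically
    conn.execute("INSERT INTO target_table SELECT * FROM source_table")
    conn.execute("DELETE FROM source_table")

moved = conn.execute("SELECT COUNT(*) FROM target_table").fetchone()[0]
left = conn.execute("SELECT COUNT(*) FROM source_table").fetchone()[0]
print(moved, left)  # 3 0
```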
NOTE: the DELETE will remove the entire contents of the source table, which may not be ideal. Depending on your application, it could leave you open to data loss: what if someone manages to insert data into source_table in between your copy and delete operations? That data would be lost (you'd delete it). This is only a concern if multiple people access your database simultaneously; if that could be a problem, let me know and we can consider a different solution, or use locks/transactions to prevent that situation. -
Af:table Scroll bars not displayed in IE11 for large number of rows
Hi. I'm using JDeveloper 11.1.2.4.0.
The requirements of our application are to display a table with potentially very large numbers of rows (sometimes in excess of 3 million). While the user does not need to scroll through this many rows, the QBE facility allows drill-down into specific information in the rowset. We moved up to JDeveloper 11.1.2.4.0 primarily so IE11 could be used instead of IE8, to overcome input latency in ADF forms.
However, it seems that IE11 does not enable the vertical or horizontal scroll bars for the af:table component when the table contains greater than (approx) 650,000 rows. This is not the case when the Chrome browser is used. Nor was this the case on IE8 previously (using JDev 11.1.2.1.0).
When the table is filtered using the QBE (to a subset < 650,000 rows), the scroll bars are displayed correctly.
In the code the af:table component is surrounded by an af:panelCollection component which is itself surrounded by an af:panelStretchLayout component.
Does anyone have any suggestions as to how this behaviour can be corrected? Is it purely a browser problem, or might there be a programmatic workaround in ADF?
Thanks for your help.
Thanks for your response. That's no longer an option for us though...
Some further investigation into the generated HTML has yielded the following information...
The missing scroll bars appear to be a consequence of the style set for the horizontal and vertical scroll bars (referenced as vscroller and hscroller in the HTML). The scrollbar height appears to be computed by multiplying the estimated number of rows in the table's iterator by 16, giving a scrollbar proportional to the amount of data in the table, although it is not obvious why that should be done for the horizontal scroller. If this number is greater than or equal to 10737424 pixels, the scroll bars do not display in IE11.
It would seem better to cap this height at a sensible number of pixels when the row count is large.
Alternatively, is it possible to find where this calculation is taking place and override its behaviour?
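The ~650,000-row threshold reported above is consistent with the quoted constants: at 16 pixels per row, the scroller height crosses the observed pixel limit at about 671,000 rows. A quick check of the arithmetic (both constants are taken from the post):

```python
# Scroller height = estimated row count * 16 px (per the generated HTML)
LIMIT_PX = 10737424          # height at which IE11 stops drawing the scroll bars
PX_PER_ROW = 16

threshold_rows = LIMIT_PX // PX_PER_ROW
print(threshold_rows)        # 671089, i.e. roughly the ~650,000 rows observed
```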
Thanks. -
Total sum of a column for an af:table
Hi all,
I am using JDeveloper 11g with ADF BC.
I've tried to create a method in the VO to iterate through all the rows and calculate the sum as follows:
long result = 0;
Row curItem = first();
while (curItem != null) {
    Long l = DbsDelegate.attributeAsLong(curItem.getAttribute("field1"));
    if (l != null)
        result += l.longValue();
    curItem = next();
}
I found this is not good practice, since the cursor jumps to the last record afterwards. More importantly, the next() call fires a NavigationEvent to registered RowSetListeners by calling RowSetListener.navigated(), as documented here: http://www.bisnis.com/doc/rt/oracle/jbo/server/ViewObjectImpl.html#next%28%29
Is there any better way to iterate through a VO and calculate some summary data?
Regards,
Samson Fu
Let the DB do the calculation. Implement a method in your VO which returns the 'select sum(col) from table'.
Timo -
Setup for a Discoverer table to show the number of rows and columns of a report
An Oracle Discoverer report can show "Rows 1-25 of" (total rows) and "Columns 1-6 of" (total columns).
This rows-and-columns information is not appearing on our reports.
Kindly let us know the relevant setting/setup.
This is very urgent for us; any help will be highly appreciated.
Thanks, Avaneesh
Hmm, what version of Discoverer are you on? Do I understand you correctly that you are able to run a Discoverer report and see this rows and columns information? What software are you running when you do this - Viewer, Plus, or Desktop? Where is this showing up - the top of the report maybe? Or maybe the bottom of the report? The only thing I can think of to handle this is the Page Setup for a workbook, and looking at the Header and Footer sections of that setup. But I am on Discoverer 10.1.2.2 and I don't see anything I can insert on the header/footer that would show this kind of information. Desktop will let you do Page x of y pages (Plus does not), but that is not what you are seeing. You can maybe look at the page setup and see if there is something there not documented in the Discoverer guides.
John Dickey -
Since downloading the new iTunes, my column for the number of plays no longer works
Hi
Apart from the fact that I don't like the new layout, I also find that the column which counts the number of times a music track is played no longer responds. I tried to re-download my old version of iTunes but received a message saying my music files could not be found, so I had to go back to the new version. Can anyone help?
PS. I'm quite dismayed at reading some of the comments, especially that those who do finally get through on the phone find that the people who are supposed to help have not received adequate training.
Try a force restart by holding the power and home buttons down at the same time. One of two things will probably happen: 1) it will boot into iOS as normal, or 2) you will get an iTunes logo with a USB cable at the bottom. If the second happens, you will need to plug the device into iTunes and do a restore. If neither happens, force restart again. Once the phone powers off, let go of both buttons, then push and hold the home button and plug the phone into iTunes. This will force the phone into recovery mode. Proceed with a restore at that point.
-
Rows to columns for a huge number of records
My database version is 10gR2.
I want to transpose rows to columns. I have seen examples for small numbers of records, but how can it be done when there are more than 1000 records in a table?
Here is the sample data that I would like to change to columns:
SQL> /
NE RAISED CLEARED RTTS_NO RING
10100000-1LU 22-FEB-2011 22:01:04/28-FEB-20 22-FEB-2011 22:12:27/28-FEB-20 SR-10/ ER-16/ CR-25/ CR-29/ CR-26/ RIDM-1/ NER5/ CR-31/ RiC600-1
11 01:25:22/ 11 02:40:06/
10100000-2LU 01-FEB-2011 12:15:58/06-FEB-20 05-FEB-2011 10:05:48/06-FEB-20 RIMESH/ RiC342-1/ 101/10R#10/ RiC558-1/ RiC608-1
11 07:00:53/18-FEB-2011 22:04: 11 10:49:18/18-FEB-2011 22:15:
56/19-FEB-2011 10:36:12/19-FEB 17/19-FEB-2011 10:41:35/19-FEB
-2011 11:03:13/19-FEB-2011 11: -2011 11:08:18/19-FEB-2011 11:
16:14/28-FEB-2011 01:25:22/ 21:35/28-FEB-2011 02:40:13/
10100000-3LU 19-FEB-2011 20:18:31/22-FEB-20 19-FEB-2011 20:19:32/22-FEB-20 INR-1/ ISR-1
11 21:37:32/22-FEB-2011 22:01: 11 21:48:06/22-FEB-2011 22:12:
35/22-FEB-2011 22:20:03/28-FEB 05/22-FEB-2011 22:25:14/28-FEB
-2011 01:25:23/ -2011 02:40:20/
10100000/10MU 06-FEB-2011 07:00:23/19-FEB-20 06-FEB-2011 10:47:13/19-FEB-20 101/IR#10
11 11:01:50/19-FEB-2011 11:17: 11 11:07:33/19-FEB-2011 11:21:
58/28-FEB-2011 02:39:11/01-FEB 30/28-FEB-2011 04:10:56/05-FEB
-2011 12:16:21/18-FEB-2011 22: -2011 10:06:10/18-FEB-2011 22:
03:27/ 13:50/
10100000/11MU 01-FEB-2011 08:48:45/22-FEB-20 02-FEB-2011 13:15:17/22-FEB-20 1456129/ 101IR11 RIMESH
11 21:59:28/22-FEB-2011 22:21: 11 22:08:49/22-FEB-2011 22:24:
52/01-FEB-2011 08:35:46/ 27/01-FEB-2011 08:38:42/
10100000/12MU 22-FEB-2011 21:35:34/22-FEB-20 22-FEB-2011 21:45:00/22-FEB-20 101IR12 KuSMW4-1
11 22:00:04/22-FEB-2011 22:21: 11 22:08:21/22-FEB-2011 22:22:
23/28-FEB-2011 02:39:53/ 26/28-FEB-2011 02:41:07/
10100000/13MU 22-FEB-2011 21:35:54/22-FEB-20 22-FEB-2011 21:42:58/22-FEB-20 LD MESH
11 22:21:55/22-FEB-2011 22:00: 11 22:24:52/22-FEB-2011 22:10:
Could you do something like this?
with t as (select '10100000-1LU' NE, '22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised , '22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared from dual union
select '10100000-2LU', '01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
'05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/' from dual
)
select * from (
select NE, regexp_substr( raised,'[^/]+',1,1) raised, regexp_substr( cleared,'[^/]+',1,1) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,2) , regexp_substr( cleared,'[^/]+',1,2) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,3) , regexp_substr( cleared,'[^/]+',1,3) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,4) , regexp_substr( cleared,'[^/]+',1,4) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,5) , regexp_substr( cleared,'[^/]+',1,5) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,6) , regexp_substr( cleared,'[^/]+',1,6) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,7) , regexp_substr( cleared,'[^/]+',1,7) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,8) , regexp_substr( cleared,'[^/]+',1,8) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,9) , regexp_substr( cleared,'[^/]+',1,9) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,10) , regexp_substr( cleared,'[^/]+',1,10) cleared from t
union
select NE, regexp_substr( raised,'[^/]+',1,11) , regexp_substr( cleared,'[^/]+',1,11) cleared from t
)
where nvl(raised,cleared) is not null
order by ne
NE RAISED CLEARED
10100000-1LU 28-FEB-2011 01:25:22 28-FEB-2011 02:40:06
10100000-1LU 22-FEB-2011 22:01:04 22-FEB-2011 22:12:27
10100000-2LU 28-FEB-2011 01:25:22 28-FEB-2011 02:40:13
10100000-2LU 19-FEB-2011 10:36:12 19-FEB-2011 10:41:35
10100000-2LU 19-FEB-2011 11:03:13 19-FEB-2011 11:08:18
10100000-2LU 19-FEB-2011 11:16:14 19-FEB-2011 11:21:35
10100000-2LU 06-FEB-2011 07:00:53 06-FEB-2011 10:49:18
10100000-2LU 01-FEB-2011 12:15:58 05-FEB-2011 10:05:48
10100000-2LU 18-FEB-2011 22:04:56 18-FEB-2011 22:15:17
You should be able to do it without all those unions using a CONNECT BY, but I can't quite get it to work.
The following doesn't work, but maybe someone can get it working:
select NE, regexp_substr( raised,'[^/]+',1,level) raised, regexp_substr( cleared,'[^/]+',1,level) cleared from t
connect by prior NE = NE and regexp_substr( raised,'[^/]+',1,level) = prior regexp_substr( raised,'[^/]+',1,level + 1)
Edited by: pollywog on Mar 29, 2011 9:38 AM
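The UNION ladder above (and the MODEL clause that follows) are both doing the same thing: splitting the slash-delimited raised/cleared strings into pieces and pairing them up positionally. The logic is easier to see in a few lines of Python (data taken from the first sample row):

```python
raised = "22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/"
cleared = "22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/"

# Split on '/', drop the empty trailing piece, pair i-th raised with i-th cleared
pairs = list(zip([p for p in raised.split("/") if p],
                 [p for p in cleared.split("/") if p]))
for r, c in pairs:
    print(r, "|", c)
# 22-FEB-2011 22:01:04 | 22-FEB-2011 22:12:27
# 28-FEB-2011 01:25:22 | 28-FEB-2011 02:40:06
```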
Here it is with the MODEL clause, which gets rid of all the unions:
WITH t
AS (SELECT '10100000-1LU' NE,
'22-FEB-2011 22:01:04/28-FEB-2011 01:25:22/' raised,
'22-FEB-2011 22:12:27/28-FEB-2011 02:40:06/' cleared
FROM DUAL
UNION
SELECT '10100000-2LU',
'01-FEB-2011 12:15:58/06-FEB-2011 07:00:53/18-FEB-2011 22:04:56/19-FEB-2011 10:36:12/19-FEB-2011 11:03:13/19-FEB-2011 11:16:14/28-FEB-2011 01:25:22/',
'05-FEB-2011 10:05:48/06-FEB-2011 10:49:18/18-FEB-2011 22:15:17/19-FEB-2011 10:41:35/19-FEB-2011 11:08:18/19-FEB-2011 11:21:35/28-FEB-2011 02:40:13/'
FROM DUAL)
SELECT *
FROM (SELECT NE, raised, cleared
FROM t
MODEL RETURN UPDATED ROWS
PARTITION BY (NE)
DIMENSION BY (0 d)
MEASURES (raised, cleared)
RULES
ITERATE (1000) UNTIL raised[ITERATION_NUMBER] IS NULL
(raised [ITERATION_NUMBER + 1] =
REGEXP_SUBSTR (raised[0],
'[^/]+',
1,
ITERATION_NUMBER + 1),
cleared [ITERATION_NUMBER + 1] =
REGEXP_SUBSTR (cleared[0],
'[^/]+',
1,
ITERATION_NUMBER + 1)))
WHERE raised IS NOT NULL
ORDER BY NE
Edited by: pollywog on Mar 29, 2011 10:34 AM -
Logical column for summing the number of occurrences
Hi,
In Discoverer I have a column "Number of Employees" whose expression is DECODE(Polisaid, Polisaid, 1). I tried the same in the OBI repository: CASE WHEN "Number of Employees" IS NOT NULL THEN 1 END. For the query "How many employees are in the HR department?", OBI gave me only the number 1 (I cannot use an aggregate function), while for the same query Discoverer gave the correct answer.
How to solve the problem?
Thanks in advance.
Hi stanisa,
DECODE(Polisaid, Polisaid, 1): DECODE and CASE are equivalent here; it means "when the column equals Polisaid, return 1".
Case when "Number of Employees" is not null then "Number of Employees" else 1 end;
Hope it helps you.
By,
KK -
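The DECODE-to-1 trick is essentially a way of counting rows where a column is non-null; COUNT(column) does the same thing directly, and SUM over the 1s is the aggregated form. A sketch in sqlite3 (the table, column names, and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, polisaid INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("A", 10), ("B", 20), ("C", None), ("D", 30)])

# SUM over a CASE that yields 1 mirrors the Discoverer DECODE(Polisaid, Polisaid, 1) column
by_case = conn.execute(
    "SELECT SUM(CASE WHEN polisaid IS NOT NULL THEN 1 END) FROM employees").fetchone()[0]
# COUNT(column) counts non-null values directly, no computed column needed
by_count = conn.execute("SELECT COUNT(polisaid) FROM employees").fetchone()[0]
print(by_case, by_count)  # 3 3
```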
JTable on JScrollPane gets corrupted for large numbers of rows
Hi, I have a problem with the vertical scroll bars of a JScrollPane.
When I move the scroll bar quickly on a JTable with 2000 rows, the rows get corrupted.
Please let me know how I can fix this problem.
Hi,
I have just recompiled my (previously 1.3.1) application with 1.4.2 and notice the same problem. The problem starts somewhere between 1700 and 2500 rows.
It's not just the scroll bar for me - the display corrupts wherever I click the mouse on the table area.
Did you manage to diagnose it?
Thanks, Dave -
Strange response time for an RFC call viewed in STAD on R/3 4.7
Hello,
On our R/3 4.7 production system, we have a lot of external RFC calls to execute an abap module function. There are 70 000 of these calls per day.
The mean response time for this RFC call is 35 ms.
Sometimes a few of them (maybe 10 to 20 per day) take much longer.
I am currently analysing with STAD one of these long calls, which lasted 10 seconds!
Here is the info from STAD
Response time : 10 683 ms
Total time in workprocess : 10 683 ms
CPU time : 0 ms
RFC+CPIC time : 0 ms
Wait for work process 0 ms
Processing time 10.679 ms
Load time 1 ms
Generating time 0 ms
Roll (in) time 0 ms
Database request time 3 ms
Enqueue time 0 ms
Number Roll ins 0
Roll outs 0
Enqueues 0
Load time Program 1 ms
Screen 0 ms
CUA interf. 0 ms
Roll time Out 0 ms
In 0 ms
Wait 0 ms
Frontend No.roundtrips 0
GUI time 0 ms
Net time 0 ms
There is nearly no abap processing in the function module.
I really don't understand what this 10 679 ms of processing time is, especially with 0 ms CPU time and 0 ms wait time.
A usual fast RFC call gives this data
23 ms response time
16 ms cpu time
14 ms processing time
1 ms load time
8 ms Database request time
Does anybody have an idea of what the system is doing during the 10 seconds of processing time?
Regards,
Olivier
Hi Graham,
Thank you for your input and thoughts.
I will have to investigate on RZ23N and RZ21 because I'm not used to use them.
I'm used to investigate performance problems with ST03 and STAD.
My system is R/3 4.7 WAS 6.20. ABAP and BASIS 43
Kernel 6.40 patch level 109
We know these are old patch levels, but we are not allowed to stop this system for an upgrade "if it's not broken", as it is used 24/7.
I'm nearly sure that the problem is not an RFC issue, because I've found other slow dialog steps for web service calls, and even for a SAPSYS technical dialog step of type <no buffer> (what is this?).
This SAPSYS dialog step has the following data :
User : SAPSYS
Task type : B
Program : <no buffer>
CPU time 0 ms
RFC+CPIC time 0 ms
Total time in workprocs 5.490 ms
Response time 5.490 ms
Wait for work process 0 ms
Processing time 5.489 ms
Load time 0 ms
Generating time 0 ms
Roll (in+wait) time 0 ms
Database request time 1 ms ( 3 Database requests)
Enqueue time 0 ms
All hundreds of other SAPSYS <no buffer> steps have a less than 5 ms response time.
It looks like the system was frozen for 5 seconds...
Here are some extracts from STAD of another case from last saturday.
11:00:03 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 13 13 0 0
11:00:03 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 19 19 0 16
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 77 77 0 16
11:00:04 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 14 14 0 16
11:00:05 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 12 12 0 16
11:00:05 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 53 53 0 0
11:00:06 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 76 76 0 0
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 20 20 0 31
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 12 12 0 0
11:00:06 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 13 13 0 0
11:00:06 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 34 34 0 16
11:00:07 bt1sqkvh_PLG_18 RFC R 0 USER_LECDIS 15 15 0 0
11:00:07 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 13 13 0 16
11:00:07 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 19 19 0 0
11:00:07 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 23 13 10 0
11:00:07 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 38 38 0 0
11:00:08 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 20 20 0 16
11:00:09 bt1sqkvg_PLG_18 RFC R 0 USER_LECDIS 9 495 9 495 0 16
11:00:09 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 9 404 9 404 0 0
11:00:09 bt1sqkvh_PLG_18 RFC R 1 USER_LECKIT 9 181 9 181 0 0
11:00:10 bt1fsaplpr02_PLG RFC R 3 USER_LECDIS 23 23 0 0
11:00:10 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 8 465 8 465 0 16
11:00:18 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 18 18 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 89 89 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 75 75 0 0
11:00:18 bt1sqkvh_PLG_18 RFC R 1 USER_LECDIS 43 43 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 1 USER_LECDIS 32 32 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 1 USER_LECDIS 15 15 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 14 14 0 0
11:00:18 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 69 69 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 49 49 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 5 USER_LECKIT 19 19 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 15 15 0 16
The load at that time was very light with only a few jobs starting :
11:00:08 bt1fsaplpr02_PLG RSCONN01 B 31 USER_BATCH 39
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 31 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG /SDF/RSORAVSH B 33 USER_BATCH 64
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 33 USER_BATCH 43
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 34 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 35 USER_BATCH 37
11:00:09 bt1fsaplpr02_PLG RVV50R10C B 34 USER_BATCH 60
11:00:09 bt1fsaplpr02_PLG ZLM_HDS_IS_PURGE_RESERVATION B 35 USER_BATCH 206
I'm also now wondering about the message server, since there is load balancing for each RFC call.
Regards,
Olivier -
Ranking response times for Fire Department
Hello,
I'm in the middle of PowerPivot Pro's online course, but I'm looking for a little feedback to see if I'm on the right track with a project. I work for a fire department and we collect the response times of our units. As you can guess, we have goals we measure against, and I'm hoping PowerPivot will be a good choice to work with. Simply put, for each incident (fire, medical) I create a response time using a calculated column in the PowerPivot window, subtracting the dispatch time from the arrival time (format HH:MM:SS).
I am populating the workbook via a database connection to our SQL database. I group the incidents by the unique incident number in a pivot table, so it lists every unit that was on the run. My idea is to create a measure listing each unit's response time, find the minimum time, and compare that minimum against our goal. The result returns either true or false.
I'm working on the solution, but I'm just wondering whether my idea sounds plausible using PowerPivot?
I'm not looking for anyone to do my work, just for any suggestions to push me in the right direction.
Thank you for looking, Brent
Hello Paul,
The calendar table was something I just put in because of a lesson I had with Rob's course, so I was in the processing of adding it to the project as a means of reinforcing the lesson. I will take another look at our SQL database, but
in the database, the incident and apparatus tables are primarily linked by the incident key. I will double check to see if the apparatuses in the table have another relationship with a look up table and I can change the data model. Initially, I was looking
for the simplest model to start with and then build from there.
As far as the error, the min function is working correctly to find the minimum time of the incident, but I think my confusion comes with the grouping of the rows. As you can probably tell, there is one incident key for each fire call
and many apparatus are assigned to the call with the incident key as the primary relationship. If I add all the incidents to the pivot table, it correctly identifies each incident as being met or not met. But, if I remove the incident keys from the pivot and
just keep the months, the pivot table correctly filters down to the month and find the minimum of the entire month and I loose the details of each incident's value of met or not met. Thank to Rob, I understand what is happening, but not how to fix it (I'm
finishing up the calendar table now and the X factors are next, which will probably help me with this issue).
The reporting for now is pretty simplistic: being able to identify each incident, whether it met the goal, and then to report how many of the incidents we made and how many we missed. If we had 1,000 calls and we made 500 of them, we would be at 50%. The goal is a simple true or false based on the minimum time of the first arriving unit (I used 200 seconds for this project).
I do have Excel 2013, so I can open the file. I created the original at home with 2010.
I hope that makes sense. Thank you for the help. I have the day off and I'm working through Rob's video course most of the day today. Thanks, Brent -
Slow Response Time Joining Multiple Domain Indexes
Hi All,
I am working with a schema that has several full-text-searchable columns along with several conventional columns. I noticed that when I join on 2 domain indexes, the query response time nearly doubles. However, if I were to combine my 2 CLOB columns into 1 CLOB, the extra cost of finding the intersection of 2 row sets could be saved.
In my query, I am taking 2 sets of random high-probability words (the top 1000 sorted by token_count DESC).
NOTE: All of my query execution times are taken with words not previously used, to avoid caching by the engine.
HERE IS THE SLOW VERSION OF THE QUERY WHICH REFERENCES THE BODY CLOB TWICE:
SELECT count(NSS_ID) FROM jk_test_2 WHERE
CONTAINS (body, '( STANDARDS and HELP ) ' ) > 0
AND
CONTAINS (body, '( WORKING and LIMITED ) ' ) > 0 ;
THE EXPLAIN PLAN shows the intersection being calculated:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 99 | 3836 (0)| 00:00:47 |
| 1 | SORT AGGREGATE | | 1 | 99 | | |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 3 | BITMAP AND | | | | | |
| 4 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 5 | SORT ORDER BY | | | | | |
|* 6 | DOMAIN INDEX | JK_BODY_NDX | | | 1284 (0)| 00:00:16 |
| 7 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 8 | SORT ORDER BY | | | | | |
|* 9 | DOMAIN INDEX | JK_BODY_NDX | | | 2547 (0)| 00:00:31 |
Predicate Information (identified by operation id):
6 - access("CTXSYS"."CONTAINS"("BODY",'( PURCHASE and POSSIBLE)')>0 AND
"CTXSYS"."CONTAINS"("BODY",'(NATIONAL and OFFICIAL)')>0)
9 - access("CTXSYS"."CONTAINS"("BODY",'(NATIONAL and OFFICIAL)')>0)
I RAN 3 QUERIES and got these times:
Elapsed: 00:00:00.25
Elapsed: 00:00:00.21
Elapsed: 00:00:00.27
HERE IS THE QUERY RE-WRITTEN INTO A DIFFERENT FORMAT WHICH COMBINES THE 2 PARTS INTO 1:
SELECT count(NSS_ID) FROM jk_test_2 WHERE
CONTAINS (body, '( ( STANDARDS and HELP ) AND ( WORKING and LIMITED ) ) ' ) > 0;
The Plan is now:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 99 | 3207 (0)| 00:00:39 |
| 1 | SORT AGGREGATE | | 1 | 99 | | |
|* 2 | DOMAIN INDEX | JK_BODY_NDX | 5 | 495 | 3207 (0)| 00:00:39 |
Predicate Information (identified by operation id):
2 - access("CTXSYS"."CONTAINS"("BODY",'( ( FORM and GIVE ) AND (WEB and AGREED) ) ')>0)
I RAN 3 QUERIES using new words and got these times:
Elapsed: 00:00:00.12
Elapsed: 00:00:00.11
Elapsed: 00:00:00.13
Although the cost is only 15% lower, it executes twice as fast. Also, the same improvement is gained if ORs are used instead of ANDs:
With --> CONTAINS (BODY,'( ( ( FORM OR GIVE ) and (WEB OR AGREED) ) ') >0
My 2 timings are .25 and .50, with the OR'ed clause above getting the better response time.
BASED ON THIS, on my project I am tempted to merge the 2 totally separate CLOB columns into 1 to get the better response time. The 2 columns are 1) body text and 2) codes. They LOGICALLY DO NOT BELONG WITH ONE ANOTHER, and merging them would require a lot of fudging of the data in many places throughout the system. I have done this testing taking averages of 500 unique queries, and my indexes are up to date with full statistics computed on my tables and indexes. Response time is HIGHLY CRITICAL for this project.
Does anyone have any advice that would let me get the good response time while avoiding the awkwardness of globbing all the data into one CLOB?
Joek
You might try adding sections and querying using WITHIN.
-
Greetings all, I was hoping that others may have some insight into DB Control and how it reports ASM disk response times.
First my environment:
Oracle RAC 11g R1 Standard Edition, Patchset 12
Two Node Cluster
Windows x64 2003 R2 DataCenter Edition.
I am leveraging DB Control to monitor the ASM instances along with the DB instances. My issue is with how DB Control gathers metrics to report the average response time for the disks that make up a disk group.
I have two issues:
1.) The overall response time DB Control reports for my disk group "DATA" does not agree with the average I calculate from the per-disk numbers DB Control itself reports. E.g., I have ten LUNs in my DATA disk group, and if I calculate the mean average response time from the individual disks as reported by DB Control, I don't get the number DB Control reports for the group. The numbers differ by as much as 20%.
2.) The average response times reported by ASM for the LUNs in the disk group are not the same from disk to disk. E.g., in my current production environment, here are the average response times for each LUN in the group:
8.73, 11.38, 5.22, 4.13, 3.04, 15.84, 12.71, 12.91, 10.51, 9.25.
I would have expected these disks to have the same average response time, because ASM works to give each disk the same number of I/Os.
The disk array has identical disks for all 10.
Further, the average for all disks as reported by DB Control is 7.28.
If I do the math, I get an average of about 9.37; the difference between these two numbers is roughly 29%.
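The mismatch is easy to reproduce from the quoted figures: the ten per-LUN values average to roughly 9.4, about 29% above the 7.28 that DB Control reports for the group.

```python
# Per-LUN average response times as quoted in the post
times = [8.73, 11.38, 5.22, 4.13, 3.04, 15.84, 12.71, 12.91, 10.51, 9.25]

mean = sum(times) / len(times)       # hand-computed group average
reported = 7.28                      # group average reported by DB Control
diff_pct = (mean - reported) / reported * 100
print(round(mean, 2), round(diff_pct, 1))  # 9.37 28.7
```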
I have heard that DB Control does a poor job of reporting ASM disk metrics, but that is just people grumbling. I was hoping that someone out there might have some solid facts regarding DB Control and ASM.
Hey, maybe you'd be better off opening a general discussion thread on Metalink.
*T