Reports for Database metrics: what is possible?
Hello,
we have been using Grid Control for several months. For our databases we collect several metrics and keep them for 365 days. I know how to look up the metrics for historical data (e.g. the first week in January), but for reporting purposes I need something like this:
=> a report which contains the metrics CPU, User I/O and System I/O for the week before the current week (and for the complete month before the current month, and for the year before the current year)
What the report should show is some kind of graphical curve indicating whether the values of the metrics have increased, decreased or stayed the same.
When looking into the report tab of Grid Control I find all kinds of predefined reports - but not exactly what I'm looking for...
Any help will be appreciated
Rgds
JH
Please read the tutorials below for creating custom reports:
http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b40007/views.htm#BACCEIBI
Metalink Notes:
Generic Issues With Custom Reports [ID 758293.1]
Examples: Creating Custom Reports [ID 831243.1]
Grid Control Reports FAQ [ID 460894.1]
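If none of the predefined reports fit, the trend classification the original post asks for (increased, decreased, or stayed the same between two periods) is simple to express once you have the period averages; a minimal Python sketch, assuming the averages come from your own extract of the repository metric data:

```python
def classify_trend(prev_avg, curr_avg, tolerance=0.05):
    """Compare a metric's average over the previous period with the
    current one; relative changes within `tolerance` count as flat."""
    if prev_avg == 0:
        return "increased" if curr_avg > 0 else "stayed the same"
    change = (curr_avg - prev_avg) / prev_avg
    if change > tolerance:
        return "increased"
    if change < -tolerance:
        return "decreased"
    return "stayed the same"

# Hypothetical weekly averages for one target: (last week, this week)
metrics = {"CPU": (42.0, 55.0), "User I/O": (10.0, 9.9), "System I/O": (3.0, 1.5)}
for name, (last_week, this_week) in metrics.items():
    print(name, classify_trend(last_week, this_week))
```

The metric names and numbers are illustrative only; the same comparison could be done in SQL inside a custom report element.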
Regards
Rajesh
Similar Messages
-
HP LoadRunner report for database performance
Hi Team,
The client testing team gave us a database report from LoadRunner and asked whether the database load-testing parameters are correct on the basis of the four parameters below. I have also attached the report here for your reference. Please help me analyze it and let me know the minimum and maximum values for these parameters. Thanks.
bytes received via SQL*Net from client
bytes sent via SQL*Net to client
CPU used by this session
SQL*Net roundtrips to/from client
Scale      Measurement                                                            Graph Minimum   Graph Average   Graph Maximum   Graph Std. Deviation
1.00E-06   bytes received via SQL*Net from client (V$SYSSTAT 1/RM_Q) (absolute)   70175856636     71089487223     71998425926     622767259.2
1.00E-06   bytes sent via SQL*Net to client (V$SYSSTAT 1/RM_Q) (absolute)         70903049507     71974218479     73016616305     721035721.7
1.00E-06   CPU used by this session (V$SYSSTAT 1/RM_Q) (absolute)                 22273715.16     22679322.52     23057004.4      265680.491
1.00E-06   SQL*Net roundtrips to/from client (V$SYSSTAT 1/RM_Q) (absolute)        76493132.84     78422403.32     80169678.32     1244089.29
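Since these V$SYSSTAT measurements are cumulative counters, the activity during the run is the difference between graph minimum and maximum, not the absolute values; a small Python sketch using the figures above (assuming the minimum and maximum were sampled at the start and end of the run):

```python
# Cumulative V$SYSSTAT counters: delta over the run = max - min.
stats = {
    "bytes received via SQL*Net from client": (70175856636, 71998425926),
    "bytes sent via SQL*Net to client": (70903049507, 73016616305),
    "CPU used by this session": (22273715.16, 23057004.4),
    "SQL*Net roundtrips to/from client": (76493132.84, 80169678.32),
}
for name, (graph_min, graph_max) in stats.items():
    print(f"{name}: delta over the run = {graph_max - graph_min:,.0f}")
```

There is no universal "correct" minimum or maximum for these counters; what matters is the delta relative to the workload being simulated.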
Regards,
ankit
You can generate AWR (Automatic Workload Repository) and ADDM (Automatic Database Diagnostic Monitor) reports to see the performance.
They are similar to a Statspack report but with more information. -
EWA Report for DEV/QAS System (if possible)
Hi Experts,
I would like to know if it is possible to generate EWA reports for DEV/QAS systems. I believe I have set up all the necessary RFCs for our SOLMAN to communicate with the DEV/QAS systems.
When I try to create an EWA session in SDCCN of the DEV/QAS system, the only destination I can choose is "NONE", which is local.
Any idea how I can send this to my SOLMAN? My SOLMAN is already configured as the master system of my DEV/QAS system.
Any help would be appreciated.
Thanks.
Hi Prince Jose,
My problem actually is that I want to set up EWA for a DEV system. I have tested all the RFC connections to and from my SOLMAN.
Even the RFC connections in SDCCN -> Settings -> RFC destination are fine; I have 2, one SAP_OSS and my 'back' RFC to my SOLMAN, and everything is working OK.
Would you be able to give me a step-by-step guide on how to set up EWA? From what I know:
1. Setup the system via SMSY (done)
2. Generate the necessary RFCs (done and tested OK)
3. Set SOLMAN as the master system of the satellite systems (done)
4. Create the necessary task for EWA via SDCCN of satellite system
Are the steps correct?
Thanks -
Custom Report For Database Availability
SELECT target_name,
target_type,
SUM (up_time) up_time,
SUM (down_time) down_time,
SUM (blackout_time) blackout_time,
TRUNC( SUM (up_time)
/ (SUM (NVL (up_time, 1)) + SUM (NVL (down_time, 1)))
* 100)
availability_pct
FROM ( SELECT target_name,
target_type,
SUM(TRUNC( (NVL (end_timestamp, SYSDATE) - start_timestamp)
* 24))
total_hours,
CASE availability_status
WHEN 'Target Down'
THEN
0
WHEN 'Target Up'
THEN
0
WHEN 'Blackout'
THEN
SUM(TRUNC( (NVL (end_timestamp, SYSDATE)
- start_timestamp)
* 24))
END
blackout_time,
CASE availability_status
WHEN 'Target Down'
THEN
0
WHEN 'Target Up'
THEN
SUM(TRUNC( (NVL (end_timestamp, SYSDATE)
- start_timestamp)
* 24))
WHEN 'Blackout'
THEN
0
END
up_time,
CASE availability_status
WHEN 'Target Down'
THEN
SUM(TRUNC( (NVL (end_timestamp, SYSDATE)
- start_timestamp)
* 24))
WHEN 'Target Up'
THEN
0
WHEN 'Blackout'
THEN
0
END
down_time,
availability_status
FROM MGMT$AVAILABILITY_HISTORY
WHERE target_type IN ('oracle_database', 'rac_database')
AND availability_status IN
('Target Down', 'Target Up', 'Blackout')
GROUP BY target_name, target_type, availability_status
ORDER BY target_name, availability_status)
GROUP BY target_name, target_type
ORDER BY target_name
Above is the query that I'm using for the element "Table from SQL", but I am getting the error "Error rendering element. Exception: ORA-01476: divisor is equal to zero".
Can anyone please help me fix this report.
Thanks
Edited by: 822424 on Aug 22, 2011 8:27 AM
The error says that somewhere in your statement you are using the value 0 as a divisor.
So A / B where B = 0.
Try to debug your SQL statement to make sure you never use the value 0 as a divisor.
Once you have corrected your statement, your report will work fine.
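In this query the divisor is SUM(up_time) + SUM(down_time), which is zero for targets that only have blackout rows. The same computation with a guard, sketched in Python (the SQL-side equivalent would be wrapping the divisor in something like NULLIF(divisor, 0)):

```python
def availability_pct(up_time, down_time):
    """Availability as a percentage of (up + down) hours; returns None
    when the divisor would be zero (e.g. a target with only blackout
    rows), mirroring what NULLIF(divisor, 0) would do in the SQL."""
    total = up_time + down_time
    if total == 0:
        return None  # avoid the ORA-01476-style division by zero
    return int(up_time / total * 100)  # TRUNC(... * 100)

print(availability_pct(700, 20))  # a mostly-up target
print(availability_pct(0, 0))     # blackout-only target: no percentage
```

With NULLIF the row simply shows a NULL availability instead of raising ORA-01476.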
Rob
http://oemgc.wordpress.com -
Need the report for database fragmentation
HI,
Is there any script to get a database fragmentation report?
Thanks
Hi;
Please also check:
http://www.orafaq.com/node/1936
Script to Report Extents and Contiguous Free Space [ID 162994.1]
Script to Report Tablespace Free and Fragmentation [ID 1019709.6]
Regards
Helios -
Graphical Reports for Database Health Check or Database Statistics
Hello
We are using Oracle 10g on Solaris.
Is there a way I can get a database health check or database statistics report in graphical form, e.g. to represent the growth of the database (over a week or month), log size, etc.?
Thanks
You'll have to develop your own method/application/script as per your requirement. To some extent Enterprise Manager may help you, but I think this is not your requirement.
-
Hi: I'm analyzing this STATSPACK report from a "volume test" on our UAT server, so most input comes in via bind variables. Our shared pool is well utilized. The redo logs are not appropriately configured on this server, as two of the 'Top 5 wait events' are redo-related.
I need to know what other information can be dug out of the 'foreground wait events' and 'background wait events' sections, and what can help us better understand, in combination with the 'Top 5 timed events', how the server/test went. The number of wait events can be overwhelming, so I'd appreciate any diagnostic help or analysis. The database is Oracle 11.2.0.4, upgraded from 11.2.0.3, on IBM AIX Power Systems, 64-bit, level 6.x.
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
700000XXX XXX 1 22-Apr-15 12:12 11.2.0.4.0 NO
Host Name Platform CPUs Cores Sockets Memory (G)
~~~~ ---------------- ---------------------- ----- ----- ------- ------------
dXXXX_XXX AIX-Based Systems (64- 2 1 0 16.0
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 5635 22-Apr-15 13:00:02 114 4.6
End Snap: 5636 22-Apr-15 14:00:01 128 8.8
Elapsed: 59.98 (mins) Av Act Sess: 0.6
DB time: 35.98 (mins) DB CPU: 19.43 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 2,064M Std Block Size: 8K
Shared Pool: 3,072M Log Buffer: 13,632K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 0.6 0.0 0.00 0.00
DB CPU(s): 0.3 0.0 0.00 0.00
Redo size: 458,720.6 8,755.7
Logical reads: 12,874.2 245.7
Block changes: 1,356.4 25.9
Physical reads: 6.6 0.1
Physical writes: 61.8 1.2
User calls: 2,033.7 38.8
Parses: 286.5 5.5
Hard parses: 0.5 0.0
W/A MB processed: 1.7 0.0
Logons: 1.2 0.0
Executes: 801.1 15.3
Rollbacks: 6.1 0.1
Transactions: 52.4
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.98 Optimal W/A Exec %: 100.00
Library Hit %: 99.77 Soft Parse %: 99.82
Execute to Parse %: 64.24 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 53.15 % Non-Parse CPU: 98.03
Shared Pool Statistics Begin End
Memory Usage %: 10.50 12.79
% SQL with executions>1: 69.98 78.37
% Memory for SQL w/exec>1: 70.22 81.96
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
CPU time 847 50.2
enq: TX - row lock contention 4,480 434 97 25.8
log file sync 284,169 185 1 11.0
log file parallel write 299,537 164 1 9.7
log file sequential read 698 16 24 1.0
Host CPU (CPUs: 2 Cores: 1 Sockets: 0)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
1.16 1.84 19.28 14.51 66.21 1.20 82.01
Instance CPU
~~~~~~~~~~~~ % Time (seconds)
Host: Total time (s): 7,193.8
Host: Busy CPU time (s): 2,430.7
% of time Host is Busy: 33.8
Instance: Total CPU time (s): 1,203.1
% of Busy CPU used for Instance: 49.5
Instance: Total Database time (s): 2,426.4
%DB time waiting for CPU (Resource Mgr): 0.0
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 16,384.0 16,384.0
SGA use (MB): 7,136.0 7,136.0
PGA use (MB): 282.5 361.4
% Host Mem used for SGA+PGA: 45.3 45.8
Foreground Wait Events DB/Inst: XXXXXs Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg %Total
%Tim Total Wait wait Waits Call
Event Waits out Time (s) (ms) /txn Time
enq: TX - row lock contentio 4,480 0 434 97 0.0 25.8
log file sync 284,167 0 185 1 1.5 11.0
Disk file operations I/O 8,741 0 4 0 0.0 .2
direct path write 13,247 0 3 0 0.1 .2
db file sequential read 6,058 0 1 0 0.0 .1
buffer busy waits 1,800 0 1 1 0.0 .1
SQL*Net more data to client 29,161 0 1 0 0.2 .1
direct path read 7,696 0 1 0 0.0 .0
db file scattered read 316 0 1 2 0.0 .0
latch: shared pool 144 0 0 2 0.0 .0
CSS initialization 30 0 0 3 0.0 .0
cursor: pin S 10 0 0 9 0.0 .0
row cache lock 41 0 0 2 0.0 .0
latch: row cache objects 19 0 0 3 0.0 .0
log file switch (private str 8 0 0 7 0.0 .0
library cache: mutex X 28 0 0 2 0.0 .0
latch: cache buffers chains 54 0 0 1 0.0 .0
latch free 290 0 0 0 0.0 .0
control file sequential read 1,568 0 0 0 0.0 .0
log file switch (checkpoint 4 0 0 6 0.0 .0
direct path sync 8 0 0 3 0.0 .0
latch: redo allocation 60 0 0 0 0.0 .0
SQL*Net break/reset to clien 34 0 0 1 0.0 .0
latch: enqueue hash chains 45 0 0 0 0.0 .0
latch: cache buffers lru cha 7 0 0 2 0.0 .0
latch: session allocation 5 0 0 1 0.0 .0
latch: object queue header o 6 0 0 1 0.0 .0
ASM file metadata operation 30 0 0 0 0.0 .0
latch: In memory undo latch 15 0 0 0 0.0 .0
latch: undo global data 8 0 0 0 0.0 .0
SQL*Net message from client 6,362,536 0 278,225 44 33.7
jobq slave wait 7,270 100 3,635 500 0.0
SQL*Net more data from clien 7,976 0 15 2 0.0
SQL*Net message to client 6,362,544 0 8 0 33.7
Background Wait Events DB/Inst: XXXXXs Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg %Total
%Tim Total Wait wait Waits Call
Event Waits out Time (s) (ms) /txn Time
log file parallel write 299,537 0 164 1 1.6 9.7
log file sequential read 698 0 16 24 0.0 1.0
db file parallel write 9,556 0 13 1 0.1 .8
os thread startup 146 0 10 70 0.0 .6
control file parallel write 2,037 0 2 1 0.0 .1
Log archive I/O 35 0 1 30 0.0 .1
LGWR wait for redo copy 2,447 0 0 0 0.0 .0
db file async I/O submit 9,556 0 0 0 0.1 .0
db file sequential read 145 0 0 2 0.0 .0
Disk file operations I/O 349 0 0 0 0.0 .0
db file scattered read 30 0 0 4 0.0 .0
control file sequential read 5,837 0 0 0 0.0 .0
ADR block file read 19 0 0 4 0.0 .0
ADR block file write 5 0 0 15 0.0 .0
direct path write 14 0 0 2 0.0 .0
direct path read 3 0 0 7 0.0 .0
latch: shared pool 3 0 0 6 0.0 .0
log file single write 56 0 0 0 0.0 .0
latch: redo allocation 53 0 0 0 0.0 .0
latch: active service list 1 0 0 3 0.0 .0
latch free 11 0 0 0 0.0 .0
rdbms ipc message 314,523 5 57,189 182 1.7
Space Manager: slave idle wa 4,086 88 18,996 4649 0.0
DIAG idle wait 7,185 100 7,186 1000 0.0
Streams AQ: waiting for time 2 50 4,909 ###### 0.0
Streams AQ: qmn slave idle w 129 0 3,612 28002 0.0
Streams AQ: qmn coordinator 258 50 3,612 14001 0.0
smon timer 43 2 3,605 83839 0.0
pmon timer 1,199 99 3,596 2999 0.0
SQL*Net message from client 17,019 0 31 2 0.1
SQL*Net message to client 12,762 0 0 0 0.1
class slave wait 28 0 0 0 0.0
thank you very much!
Hi: I just learned that a large number of concurrent transactions is by design in this volume test - to simulate a large incoming transaction volume - so I guess the wait on enq: TX - row lock contention is expected.
The facts: (1) the redo logs on the UAT server are known to be not well configured; (2) the volume test is 5% slower, even though the amount of data in the test is kept the same by importing production data each time. So why did it slow down 5% this year?
The wait histogram is pasted below; anyone interested in taking a look? Any ideas?
Wait Event Histogram DB/Inst: XXXX/XXXX Snaps: 5635-5636
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 19 26.3 5.3 10.5 57.9
ADR block file write 5 40.0 60.0
ADR file lock 6 100.0
ARCH wait for archivelog l 14 100.0
ASM file metadata operatio 30 100.0
CSS initialization 30 100.0
Disk file operations I/O 9090 97.2 1.4 .6 .4 .2 .1 .1
LGWR wait for redo copy 2447 98.5 .5 .4 .2 .2 .2 .1
Log archive I/O 35 40.0 8.6 25.7 2.9 22.9
SQL*Net break/reset to cli 34 85.3 8.8 5.9
SQL*Net more data to clien 29K 99.9 .0 .0 .0 .0 .0
buffer busy waits 1800 96.8 .7 .7 .6 .3 .4 .5
control file parallel writ 2037 90.7 5.0 2.1 .8 1.0 .3 .1
control file sequential re 7405 100.0 .0
cursor: pin S 10 10.0 90.0
db file async I/O submit 9556 99.9 .0 .0 .0
db file parallel read 1 100.0
db file parallel write 9556 62.0 32.4 1.7 .8 1.5 1.3 .1
db file scattered read 345 72.8 3.8 2.3 11.6 9.0 .6
db file sequential read 6199 97.2 .2 .3 1.6 .7 .0 .0
direct path read 7699 99.1 .4 .2 .1 .1 .0
direct path sync 8 25.0 37.5 12.5 25.0
direct path write 13K 97.8 .9 .5 .4 .3 .1 .0
enq: TX - row lock content 4480 .4 .7 1.3 3.0 6.8 12.3 75.4 .1
latch free 301 98.3 .3 .7 .7
latch: In memory undo latc 15 93.3 6.7
latch: active service list 1 100.0
latch: cache buffers chain 55 94.5 3.6 1.8
latch: cache buffers lru c 9 88.9 11.1
latch: call allocation 6 100.0
latch: checkpoint queue la 3 100.0
latch: enqueue hash chains 45 97.8 2.2
latch: messages 4 100.0
latch: object queue header 7 85.7 14.3
latch: redo allocation 113 97.3 1.8 .9
latch: row cache objects 19 89.5 5.3 5.3
latch: session allocation 5 80.0 20.0
latch: shared pool 147 90.5 1.4 2.7 1.4 .7 1.4 2.0
latch: undo global data 8 100.0
library cache: mutex X 28 89.3 3.6 3.6 3.6
log file parallel write 299K 95.6 2.6 1.0 .4 .3 .2 .0
log file sequential read 698 29.5 .1 4.6 46.8 18.9
log file single write 56 100.0
log file switch (checkpoin 4 25.0 50.0 25.0
log file switch (private s 8 12.5 37.5 50.0
log file sync 284K 93.3 3.7 1.4 .7 .5 .3 .1
os thread startup 146 100.0
row cache lock 41 85.4 9.8 2.4 2.4
DIAG idle wait 7184 100.0
SQL*Net message from clien 6379K 86.6 5.1 2.9 1.3 .7 .3 2.8 .3
SQL*Net message to client 6375K 100.0 .0 .0 .0 .0 .0 .0
SQL*Net more data from cli 7976 99.7 .1 .1 .0 .1
Space Manager: slave idle 4086 .1 .2 .0 .0 .3 3.2 96.1
Streams AQ: qmn coordinato 258 49.2 .8 50.0
Streams AQ: qmn slave idle 129 100.0
Streams AQ: waiting for ti 2 50.0 50.0
class slave wait 28 92.9 3.6 3.6
jobq slave wait 7270 .0 100.0
pmon timer 1199 100.0
rdbms ipc message 314K 10.3 7.3 39.7 15.4 10.6 5.3 8.2 3.3
smon timer 43 100.0 -
How to create a daily report for sales order
hi
how to create a daily report for sales orders. What fields must it consist of? What tables does it need?
Hi
You have to use the sales order tables VBAK, VBAP and VBEP.
So keep a date field on the selection screen
and treat this date as the order creation date (field AUDAT in VBAK).
Based on this, fetch the data from VBAK and VBAP with fields like
VBELN, KUNNR, NETWR, POSNR, MATNR, ARKTX, KWMENG, WAERS etc. and display them in the report.
Regards
Anji -
New report for Cash Position (TC - FF7A)
Hi
I want to draw the cash position (FF7A) report in the following format:
1) Receipt
AR Collection
Other
Total Receipt
2) Disbursements
Payroll
Bonus
3) Accounts Payable
Purchase
Freight
Other Exp
I have done the necessary configuration for Cash Management and am getting results per the standard report. Can anybody let me know how to produce the above report for transaction FF7A? If possible, reply in detail.
Regards
DD
Hi,
Please explore the custom reporting option, because I don't see FF7A/FF7B giving the desired output.
Manish -
Hello,
I currently have a license for Adobe Creative Suite 5.5 Design Standard for Windows. Is it possible to exchange the Windows license for a Mac OS one? You sent me a message with a number, 144561396, but I'm not very comfortable chatting in English, so please, is it possible to reply to me by email? Otherwise, send me an email telling me where I can find this information.
Thank you very much; your help is much appreciated.
Brigitte.
You cannot switch the platform for legacy versions, only current ones.
Order an Adobe product platform swap or language swap
Mylenium -
Dear All,
I have configured action type 08 as Transfer (from one PA to another PA). Now my requirement is to get employees transferred from PA to PA. Is there any standard report available for this?
I have tried report RPLACTJ0 (Employee Action), but it gives only the new PA, PSA, org unit and position (after transfer); my requirement is to get both the OLD PA/PSA as well as the NEW PA/PSA.
Should I do any further configuration for this in report RPLACTJ0?
Are there any reports for this? Please help me.
Thanks in advance.
Regards,
Dinesh.
Hi Sikindar,
I did not try an Ad Hoc Query; actually I'm searching for a standard report for this.
Is it possible to get both PAs in an Ad Hoc Query? If so, can you just give me an idea of how?
thanks
Dinesh. -
How to export dashboard reports to Excel in OBIEE 11.1.1.6.0?
Hi Experts,
How can I export the dashboard reports to Excel in OBIEE 11.1.1.6.0?
For example:
A dashboard contains more than one report; add one export button to download these reports to Excel.
Is it possible to implement this requirement? Thanks.
There is no direct option; you can try the following workaround.
Page Options -> Print -> Printable HTML
Then File -> Save Page As
Choose Save as type -> All Files
File name: Report.xls
When you open the .xls file and get a pop-up about a problem during load, just dismiss it and proceed.
The report will open in Excel.
Or save the page as HTML/MHT and open it with Excel; this also works.
Note: I have tested this in Mozilla
Thanks,
Vino -
I am working on upgrading an application that has been in use for many years. The application is written in VB6, and I have been tasked with upgrading it to Crystal Reports for Visual Studio. I am using Crystal Reports for VS version 13.0.12.1494. The system's database is a Sybase SQL Anywhere 16 database with an ODBC connection using integrated login. Each of the reports has the database connection set up from within the report. There is only one database server, so each of the reports points to the same DB. The database server is currently installed as a "Personal Server" with a limit of 10 connections.
I have implemented the CR viewer as part of a COM-callable wrapper that exposes a COM interface for VB6 to interact with. Inside my viewer component is a WinForm that embeds the Crystal Report viewer. The COM interface basically maps the basic Crystal APIs to methods that VB6 can call (i.e., Load Report, Set Field Text, Update SQL Query, etc.). This architecture is working as designed, and the reports are displaying correctly and responding correctly to changes in queries, etc.
The issue is that after I open 9 reports, the tenth one responds with an error indicating that the database connection limit has been reached. The database connections used by the reports aren't released until after the application is closed. The application is designed for a secure environment that prohibits non-administrative users from accessing the system's desktop, so asking the user to restart the application after 10 reports isn't a viable option.
I have checked and database connection pooling is turned off for the SQL Anywhere 16 driver.
I have been digging on this for a few days and have tried adding code in the FormClosed event to close and dispose of the Report Document as follows:
ReportDocument reportDoc= (ReportDocument) crystalReportViewer1.ReportSource;
reportDoc.Close();
reportDoc.Dispose();
GC.Collect(); // Force garbage collection on disposed items
I have also tried the following (as well as maybe 20 or so other permutations) trying to fix the issue with no success.
ReportDocument reportDoc= (ReportDocument) crystalReportViewer1.ReportSource;
foreach (Table table in reportDoc.Database.Tables)
table.Dispose();
crystalReportViewer1.ReportSource = null;
reportDoc.Database.Dispose();
reportDoc.Close();
reportDoc.Dispose();
reportDoc = (ReportDocument)crystalReportViewer1.ReportSource;
GC.Collect(); // Force garbage collection on disposed items
Any ideas or suggestions would be greatly appreciated. I have been pulling my hair out on this one!
Hi Ludek,
Thanks so much for the quick reply. Unfortunately I did not have time to work on the reporting project Friday afternoon, but did a quick test this morning with some interesting results. I'm hoping if I describe what I'm doing, you can show me the error of my ways. This is really my first major undertaking with Crystal Reports.
If I simply load the report, then close and dispose, I don't hit the limit of 10 files. Note that I do not logon manually in my code as the logon parameters are all defined within the reports themselves. The logon happens when you actually view the report. Loading the report doesn't seem to actually log in to the DB.
What I did was create a very simple form with a single button that creates the WinForm class which contains the Crystal viewer. It then loads the report, sets the ReportSource property on the CrystalReportViewer object contained in the WinForm, and shows the report. The report does show correctly, until the 10-report limit is reached.
The relevant code is shown below. It is more than I wanted to post, but I want to be as complete and unambiguous as possible.
This code displays the same behavior as my earlier post (after 10 reports we are unable to create another connection to the DB).
// Initial Form that simply has a button
public partial class SelectReport : Form
{
    public SelectReport()
    {
        InitializeComponent();
    }
    private void button1_Click(object sender, EventArgs e)
    {
        ReportDocument rd = new ReportDocument();
        ReportForm report = new ReportForm();
        try
        {
            rd.Load(@"Test.rpt");
            report.ReportSource = rd;
            report.Show();
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
}
// The WinForm containing the Crystal Reports Viewer
public partial class ReportForm : Form
{
    public ReportForm()
    {
        InitializeComponent();
    }
    private void Form1_Load(object sender, EventArgs e)
    {
        this.crystalReportViewer1.RefreshReport();
        this.FormClosed += new FormClosedEventHandler(ReportForm_FormClosed);
    }
    void ReportForm_FormClosed(object sender, FormClosedEventArgs e)
    {
        ReportDocument rd;
        rd = (ReportDocument)crystalReportViewer1.ReportSource;
        rd.Close();
        rd.Dispose();
    }
    public object ReportSource
    {
        set { crystalReportViewer1.ReportSource = value; }
    }
}
Again, any guidance would be greatly appreciated. -
Auto-generate mail for database performance reporting.
Hello,
I have many servers to keep an eye on for maintenance, and the number is growing day by day. It's difficult for me to watch each one in detail, so I have an idea to implement.
I want to make a script which will auto-generate a mail from the server and send it to my email ID with all the details of the database - basically performance-related details.
Is it possible to do so? I know how to send a mail with an attachment; the code I use is given below.
Please suggest how I can attach the output of my performance-tuning queries and get it in my mail on a daily basis.
Thanks in advance.
DECLARE
v_From VARCHAR2(80) := '[email protected]';
v_Recipient VARCHAR2(80) := '[email protected]';
v_Subject VARCHAR2(80) := 'test subject';
v_Mail_Host VARCHAR2(30) := 'mail.mycompany.com';
v_Mail_Conn utl_smtp.Connection;
crlf VARCHAR2(2) := chr(13)||chr(10);
BEGIN
v_Mail_Conn := utl_smtp.Open_Connection(v_Mail_Host, 25);
utl_smtp.Helo(v_Mail_Conn, v_Mail_Host);
utl_smtp.Mail(v_Mail_Conn, v_From);
utl_smtp.Rcpt(v_Mail_Conn, v_Recipient);
utl_smtp.Data(v_Mail_Conn,
'Date: ' || to_char(sysdate, 'Dy, DD Mon YYYY hh24:mi:ss') || crlf ||
'From: ' || v_From || crlf ||
'Subject: '|| v_Subject || crlf ||
'To: ' || v_Recipient || crlf ||
'MIME-Version: 1.0'|| crlf || -- Use MIME mail standard
'Content-Type: multipart/mixed;'|| crlf ||
' boundary="-----SECBOUND"'|| crlf ||
crlf ||
'-------SECBOUND'|| crlf ||
'Content-Type: text/plain;'|| crlf ||
'Content-Transfer_Encoding: 7bit'|| crlf ||
crlf ||
'some message text'|| crlf || -- Message body
'more message text'|| crlf ||
crlf ||
'-------SECBOUND'|| crlf ||
'Content-Type: text/plain;'|| crlf ||
' name="excel.csv"'|| crlf ||
'Content-Transfer_Encoding: 8bit'|| crlf ||
'Content-Disposition: attachment;'|| crlf ||
' filename="excel.csv"'|| crlf ||
crlf ||
'CSV,file,attachement'|| crlf || -- Content of attachment
crlf ||
'-------SECBOUND--'); -- End MIME mail
utl_smtp.Quit(v_mail_conn);
EXCEPTION
WHEN utl_smtp.Transient_Error OR utl_smtp.Permanent_Error then
raise_application_error(-20000, 'Unable to send mail: '||sqlerrm);
END;
/
I like your script idea.
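As an aside, the MIME structure the PL/SQL above assembles by hand (multipart boundary, text body, CSV attachment) can be generated by a standard email library instead of string concatenation; a hedged Python sketch, with hypothetical addresses and mail host:

```python
from email.message import EmailMessage
import smtplib

# Build the same shape of message the PL/SQL constructs manually:
# a plain-text body plus a CSV attachment.
msg = EmailMessage()
msg["From"] = "dba@mycompany.com"      # hypothetical sender
msg["To"] = "me@mycompany.com"         # hypothetical recipient
msg["Subject"] = "Daily database report"
msg.set_content("some message text\nmore message text")
msg.add_attachment(b"CSV,file,attachment\n",
                   maintype="text", subtype="csv",
                   filename="excel.csv")

# Uncomment to actually send via the SMTP relay (hypothetical host):
# with smtplib.SMTP("mail.mycompany.com", 25) as conn:
#     conn.send_message(msg)
```

The library takes care of the boundary and Content-Disposition headers that the PL/SQL spells out by hand.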
You can spool output to a file and mail the file to yourself.
What OS are you running?
On linux a simple starter script would look like this:
#!/bin/bash
echo `date`
# Set the Environmental variable for TESTDB instance
. /u01/app/oracle/dba_tool/env/TESTDB.env
$ORACLE_HOME/bin/sqlplus -s system/<PASSWORD> <<EOF
@/u01/app/oracle/dba_tool/TESTDB/quickaudit
EOF
echo `date`
mailx -s "Check database on TESTDB" [email protected] < /tmp/quickaudit.lst
----------------------sample ENV file--------------------------------------
ORACLE_BASE=/u01/app/oracle
ULIMIT=unlimited
ORACLE_SID=TESTDB
export ORACLE_TERM=xterm
ORACLE_HOME=$ORACLE_BASE/product/11.2.0
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
LIBPATH=$LD_LIBRARY_PATH:/usr/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:
export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
export ORACLE_HOME
export LIBPATH LD_LIBRARY_PATH ORA_NLS33
export TNS_ADMIN
export PATH
--------------------Starter reporting script--------------------------------------------------
SET ECHO OFF
SET TERMOUT OFF
REM Revisions:
REM Date ID Version Description
REM -------- -- ------- ----------------------------------------------------|
REM 10/07/05 1.0 Script to check for database issues
SPOOL /tmp/quickaudit.lst
SELECT SYSDATE FROM DUAL;
SHOW USER
SET TERMOUT ON
SET VERIFY OFF
SET FEEDBACK ON
PROMPT
PROMPT Checking database name and archive mode
PROMPT
column NAME format A9
column LOG_MODE format A12
SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking database versions
PROMPT
column BANNER format A64
select * from v$version;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking control file(s)
PROMPT
column STATUS format a7
column NAME format a68
column IS_RECOVERY_DEST_FILE format a3
set linesize 110
SELECT * FROM V$CONTROLFILE;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking redo logs and group(s)
PROMPT
column member format a70
set linesize 110
set pagesize 30
SELECT group#, member FROM v$logfile;
PROMPT
PROMPT -----------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking freespace by tablespace
PROMPT
column dummy noprint
column pct_used format 999.9 heading "%|Used"
column name format a16 heading "Tablespace Name"
column bytes format 9,999,999,999,999 heading "Total Bytes"
column used format 99,999,999,999 heading "Used"
column free format 999,999,999,999 heading "Free"
break on report
compute sum of bytes on report
compute sum of free on report
compute sum of used on report
set termout off
set pagesize 40
select a.tablespace_name name,
b.tablespace_name dummy,
sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) bytes,
sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
sum(a.bytes)/count( distinct b.file_id ) used,
sum(a.bytes)/count( distinct b.file_id ) free,
100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
(sum(a.bytes)/count( distinct b.file_id ) )) /
(sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
from sys.dba_free_space a, sys.dba_data_files b
where a.tablespace_name = b.tablespace_name
group by a.tablespace_name, b.tablespace_name;
PROMPT
PROMPT ------------------------------------------------------------------------|
PROMPT
PROMPT
PROMPT Checking for invalid objects
PROMPT -
What is the best way to create an SSRS 2005 line chart report for a 12-month period?
I'm looking for advice on how to create a SQL Server 2005 query and line chart report for SSRS 2005.
I need to display the peak number of patients assigned to a medical practice each month for a 12-month period, based on the end-user selecting a single month and year.
I've previously created a report that displays all patients assigned to the practice for any single month, but I'm looking for advice on how to produce a resultset that shows the peak number of patients each month for a 12-month period. I thought about creating a query that returns the peak count for each month (based on my previously created report) and then using a UNION statement to join all 12 months, but I'm sure that isn't the most efficient way to do this. The other challenge with this approach (twelve resultsets combined via UNION) is that the end-user needs to be able to select any month and year for the parameter, and the report needs to display the 12-month period based on the month selected (the month selected would be the last month of the 12-month period).
For the report I’ve previously created that displays all patients assigned to the practice for any single month, the WHERE statement filters the
resultset on two fields:
Start Date - The date the patient was assigned to the practice. This field is never null or blank.
End Date - The date the patient left the practice. This field can be null or blank as active patients assigned to the practice do not have an End Date. When the patient
leaves the practice, the date the patient left is populated in this field.
Using these two fields I can return all patients assigned to the practice during Nov 2012 by looking for patients that meet the following criteria:
start date on or before 11/30/2012 (using the last day of the selected month ensures patients added mid-month are included)
AND
end date is null or blank (indicating the patient is active) OR the end date falls between 11/1/2012 and 11/30/2012 (returning patients who left during the selected month)
Regarding the query for the report that displays the peak count for each of the 12 months, I'm looking for advice on how to count a patient in every month the patient is assigned to the practice when the assignment spans several months (which applies to most patients). Examples are:
John Doe has a start date of 6/01/2012 and an End Date of 10/07/2012
Sally Doe has a start date of 8/4/2012 and no End Date (the patient is still active)
Jimmy Doe has a start of 7/3/2012 and an End Date of 9/2/2012
Given these examples, how would I include John Doe in the peak monthly count for June - October, Sally Doe in the peak monthly count for August - December, and Jimmy Doe in the peak monthly count for July - September, if the end-user running the report selected December 2012 as the parameter?
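The core of this is just date-range expansion: clip each patient's assignment to the 12-month window ending with the selected parameter month, then emit one row per month. Here is a minimal Python sketch of that logic (an illustration only; the real report would do this in T-SQL, and the function names and clipping rules are my assumptions):

```python
from datetime import date

def month_range(start, end):
    """Yield (year, month) for each calendar month from start through end."""
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield (y, m)
        m += 1
        if m > 12:
            y, m = y + 1, 1

def months_assigned(start, end, window_end):
    """Months a patient counts toward, clipped to the 12-month window
    that ends with the selected parameter month (window_end).
    end is None for active patients (no End Date)."""
    # Window start = first day of the month 11 months before window_end.
    wy, wm = window_end.year, window_end.month - 11
    if wm <= 0:
        wy, wm = wy - 1, wm + 12
    window_start = date(wy, wm, 1)
    # Clip the assignment to the window; active patients run to window_end.
    lo = max(start, window_start)
    hi = min(end or window_end, window_end)
    return list(month_range(lo, hi))

# John Doe, 6/01/2012 - 10/07/2012, parameter December 2012:
# yields June 2012 through October 2012.
john = months_assigned(date(2012, 6, 1), date(2012, 10, 7), date(2012, 12, 31))
```

A patient whose assignment ends before the window, or starts after it, falls out naturally because the clipped range becomes empty.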
Given the example above and the fact I'm creating a line chart I think the best way to create this report would be a resultset that looks like
this:
Patient Name   Months Assigned
John Doe       June 2012
John Doe       July 2012
John Doe       Aug 2012
John Doe       Sept 2012
John Doe       Oct 2012
Sally Doe      Aug 2012
Sally Doe      Sept 2012
Sally Doe      Oct 2012
Sally Doe      Nov 2012
Sally Doe      Dec 2012
Jimmy Doe      July 2012
Jimmy Doe      Aug 2012
Jimmy Doe      Sept 2012
From the resultset above I could create another resultset that counts/groups on month and year to return the peak count for each month:
June 2012 - 1
July 2012 - 2
Aug 2012 - 3
Sept 2012 - 3
Oct 2012 - 2
Nov 2012 - 1
Dec 2012 - 1
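That grouping step is a straightforward count of distinct patients per month. A Python sketch of it, with the rows hard-coded from the example above (the real report would do this with a GROUP BY in T-SQL):

```python
from collections import Counter

# (patient, (year, month)) rows from the expanded "months assigned" resultset.
rows = (
      [("John Doe",  (2012, m)) for m in range(6, 11)]   # June - Oct
    + [("Sally Doe", (2012, m)) for m in range(8, 13)]   # Aug - Dec
    + [("Jimmy Doe", (2012, m)) for m in range(7, 10)]   # July - Sept
)

# Peak count per month = number of patients with a row in that month.
peak = Counter(month for _patient, month in rows)
# Aug 2012 and Sept 2012 each count 3 patients, matching the list above.
```

Sorting `peak.items()` by (year, month) then gives the X/Y series for the line chart directly.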
The resultset that displays the peak count for each month would be used to create the line chart (month on the X axis and the count on the Y axis).
Does this sound like the best approach?
If so, any advice on how to create the resultset that lists each patient and each month they were assigned to the practice would be greatly appreciated.
I do not have permissions to create SPs or Functions within the database but I can create temp tables.
I know how to create the peak monthly count query (derived from the query that lists each patient and month assigned) as well as the line chart.
Any advice or help is greatly appreciated.

Thanks for the replies. I reviewed them shortly after they were submitted, but I'm also working on other projects at the same time (hence the delayed reply).
Building a calendar table and cross joining it to my original resultset gave me the desired resultset of the months assigned between dates. What I can't figure out now is how to filter out the months I don't want.
Doing a cross join between my original resultset that had two dates:
08/27/2010
10/24/2011
and a calendar table that has 24 rows (each month for a two year period)
my new resultset looks like this (resultset screenshot not included):
I need to filter out the highlighted rows, as the months assigned for stage 3 (which started on 8/27/2010) should stop when the patient was assigned to stage 4 on 10/24/2011.
You'll notice that Jan - Sept 2011 isn't listed for Stage 4 assigned on 10/24/2011 as I included a filter in the WHERE clause that states
the Months Assigned value must be greater than or equal to the date assigned value.
Any advice would be appreciated.
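One way to express the missing filter: each stage's month expansion should end the month before the next stage's assignment date. A Python sketch under that assumption (dates from the example; the report-end date, the handoff rule, and all names are hypothetical, and the real query would be T-SQL):

```python
from datetime import date

# Hypothetical stage assignments (stage, date_assigned), sorted by date.
assignments = [
    (3, date(2010, 8, 27)),
    (4, date(2011, 10, 24)),
]

def months_between(start, end):
    """(year, month) for each month from start's month through end's month."""
    y, m = start.year, start.month
    out = []
    while (y, m) <= (end.year, end.month):
        out.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return out

def stage_months(assignments, through=date(2012, 8, 31)):
    """Expand each stage into months; a stage runs until the month before
    the next assignment starts, and the last stage runs through `through`."""
    rows = []
    for i, (stage, start) in enumerate(assignments):
        if i + 1 < len(assignments):
            nxt = assignments[i + 1][1]
            end_y, end_m = nxt.year, nxt.month - 1
            if end_m == 0:
                end_y, end_m = end_y - 1, 12
            # If both assignments fall in the same month, keep one month.
            if (end_y, end_m) < (start.year, start.month):
                end_y, end_m = start.year, start.month
            end = date(end_y, end_m, 1)
        else:
            end = through
        for ym in months_between(start, end):
            rows.append((stage, ym))
    return rows
```

Here stage 3 runs Aug 2010 - Sept 2011 and stage 4 runs Oct 2011 onward; if the handoff month (Oct 2011) should count toward both stages instead, drop the minus-one-month adjustment.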