Monitoring on table
Hi,
Is there any performance degradation if I turn MONITORING on for a table?
user00726 wrote:
Hi,
Is there any performance degradation if I turn MONITORING on for a table?
It's always a good idea to mention the version of Oracle you are working on. From 10g onwards, the ALTER TABLE ... MONITORING clause is deprecated. Monitoring is now on by default for the entire database's objects because the STATISTICS_LEVEL parameter defaults to TYPICAL. With this setting Oracle tracks every schema's objects to check when they need to be defragmented, have statistics gathered, and so on. So you don't need to enable it explicitly, at least in 10g. In previous versions, too, enabling it would not cause a performance degradation.
HTH
Aman....
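A quick way to see what Aman describes is to check the parameter and the per-table flag directly. A minimal SQL*Plus sketch (the SCOTT schema is an illustrative assumption):

```sql
-- STATISTICS_LEVEL = TYPICAL means DML monitoring is on database-wide in 10g
SHOW PARAMETER statistics_level

-- In 10g the per-table MONITORING flag shows YES for ordinary tables
SELECT table_name, monitoring
  FROM dba_tables
 WHERE owner = 'SCOTT';
```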
Similar Messages
-
In Oracle 10g the STATISTICS_LEVEL parameter needs to be TYPICAL to keep DML monitoring (inserts, updates, deletes) of all tables. I have Oracle 10g and STATISTICS_LEVEL is TYPICAL, but the table modification views are still not updated. What could be the reason for this?
SQL> show parameter statistics_level
NAME TYPE VALUE
statistics_level string TYPICAL
SQL> conn scott/tiger
Connected.
SQL> select *
2 from tab
3 /
TNAME TABTYPE CLUSTERID
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE
SQL> conn sys/sys as sysdba
Connected.
SQL> select * from all_tab_modifications
2 where table_owner='SCOTT'
3 /
no rows selected
SQL> select * from all_tab_modifications
2 where table_owner='SCOTT'
3 /
no rows selected
SQL> select * from dba_tab_modifications
2 where table_owner='SCOTT'
3 /
no rows selected
SQL> select * from user_tab_modifications
2 where table_name='T'
3 /
no rows selected
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Before referring me to the documentation, please read my question carefully and then read the documents again; you will see that what I am asking is not covered there.
SQL> create table t as select * from all_objects
2 /
Table created.
SQL> select * from user_tab_modifications
2 where table_name='T'
3 /
no rows selected
SQL> exec dbms_stats.gather_table_stats('SCOTT','T')
PL/SQL procedure successfully completed.
SQL> select * from user_tab_modifications
2 where table_name='T'
3 /
no rows selected
SQL> /
no rows selected
SQL> conn sys/sys as sysdba
Connected.
SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
SQL> conn scott/tiger
Connected.
SQL> select * from user_tab_modifications
2 where table_name='T'
3 /
no rows selected
EMP and DEPT are seeded tables created with the default database, and as you may also know, when a database is created in Oracle 10g, Oracle creates a job called GATHER_STATS_JOB which gathers stats within its maintenance window. I don't know why you always hijack my query here without giving any proper insight.
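For what it's worth, the *_TAB_MODIFICATIONS views only show DML performed after the last statistics gather (gathering stats resets the counters), and the in-memory counters must be flushed before they appear. A hedged sketch of a sequence that should finally produce a row for SCOTT.T (row counts are illustrative):

```sql
-- 1. Perform some DML after the last DBMS_STATS gather
INSERT INTO t SELECT * FROM t WHERE ROWNUM <= 100;
COMMIT;

-- 2. Flush the in-memory monitoring counters to the dictionary
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

-- 3. Now the modification counters should be visible
SELECT table_name, inserts, updates, deletes
  FROM user_tab_modifications
 WHERE table_name = 'T';
```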
-
Incorrect registration information in Monitoring Registration table
Hi, I've got the following issue: if I reboot my Lync phone (not the PC client), a new entry is not created in the Monitoring Registration table; instead the old entry's DeRegisterTime column is updated and its DeRegisterTypeId is changed to 'ClientCrashed'.
However, if I keep the phone powered down for 10 minutes (the default subscription expiry time, I guess) or try to call the user on that phone, the old entry is marked ClientCrashed, and after I log in again I get a new entry.
Have any of you had this issue? The problem is that you can't really tell whether the client is logged on or not, because in many cases the status is 'client crashed' when in reality the user has just restarted their phone...
cheers,
Łukasz
Hi,
Did the issue happen for only one user or for multiple users? Maybe when you restarted the phone you did not exit the Lync client first, so the Monitoring Registration table shows the client as crashed. You could try exiting the Lync client first, then restarting the phone, and check again.
Best Regards,
Eason Huang
TechNet Community Support -
Hi Java gurus!
I have an application that inserts a number into a table (SQL); a listener picks up this number and queries another DB (Oracle) using it, then updates the SQL DB with the data extracted from the Oracle DB. In my app I only insert the number into SQL and extract data from SQL; I don't deal with the Oracle bit.
What I want to know is: how do I monitor the table to know that the data has been extracted from Oracle and inserted into SQL, so that I can then extract it from SQL?
Please help...
~blu!~
IMHO, there is no clean way to do what you want. You could add a database trigger on the SQL table that fires on UPDATE, but that potentially takes your business logic out of Java and into the DB tier. Or you can poll the database on a fixed interval, but that is expensive and clumsy, and it is nearly impossible to optimize the timing.
The best way I know of to do this is to have the application that does the update notify your application when it has finished. It could send a JMS message to a queue you listen on; this gives the benefit of asynchronous processing, in case the applications work at different speeds. If not JMS, a synchronous SOAPMessage could be sent to an HTTP listener on your application.
I'll disagree that this is "best" if the object that does the update runs in the same JVM as the OP's object that needs to be notified. In that case, I'd suggest using java.util.Observer/Observable. I think it'd be simpler than queues or SOAP messages. -
Eem script monitor routing table - multiple entries into 1 email
I have the following script running to report on routing table changes
event manager applet route-table-monitor
event routing network 0.0.0.0/0 ge 1
action 0.5 set msg "Route changed: Type: $_routing_type, Network: $_routing_network, Mask/Prefix: $_routing_mask, Protocol: $_routing_protocol, GW: $_routing_lastgateway, Intf: $_routing_lastinterface"
action 1.0 syslog msg "$msg"
action 2.0 cli command "enable"
action 4.0 info type routername
action 5.0 mail server "10.*.*.*" to "roger.perkin@****" from "Switch1" subject "Routing Table Change" body "$msg $_cli_result"
It works perfectly however if multiple routes change I get multiple emails.
Last night we had a site go out and I got about 20 separate email for each subnet change.
What I would like to do is get this script to take all routes changed in a 1 minute interval and then output them into an email.
Not quite sure how I would go about that?
Thanks
Roger
Currently studying for my CCIE and just started on EEM; I have not done much scripting before, so this is all good stuff to know.
You can't do this with one policy. However, you could accomplish it with a timer policy that batches up the pending updates. Something like this would work:
event manager applet route-table-monitor
 event routing network 0.0.0.0/0 ge 1
 action 0.5 set msg "Route changed: Type: $_routing_type, Network: $_routing_network, Mask/Prefix: $_routing_mask, Protocol: $_routing_protocol, GW: $_routing_lastgateway, Intf: $_routing_lastinterface"
 action 0.6 syslog msg "$msg"
 action 1.0 handle-error type ignore
 action 2.0 context retrieve key RTRCTXT variable msgs
 action 3.0 if $_error ne FH_EOK
 action 4.0 set msgs "$msg\n"
 action 5.0 else
 action 6.0 append msgs "$msg\n"
 action 7.0 end
 action 8.0 handle-error type exit
 action 9.0 context save key RTRCTXT variable msgs
!
event manager applet route-table-batcher
 event timer watchdog time 60
 action 1.0 handle-error type ignore
 action 2.0 context retrieve key RTRCTXT variable msgs
 action 3.0 if $_error eq FH_EOK
 action 4.0 info type routername
 action 5.0 mail server "10.*.*.*" to "roger.perkin@****" from "$_info_routername" subject "Routing Table Change" body "$msgs"
 action 6.0 end
 action 7.0 handle-error type exit
-
Business Process Monitoring-Value table
Dear friends,
I am working with Sol Man 4.0, SP12.
In the SOLUTION_MANAGER transaction, on the Business Process Maintenance screen, I find the following tabs:
1. Steps of process-Partially display
2. Seasons (Peak Months)
3. Process availability...etc
On that screen I choose the tab "Steps of process-Partially display" and then select the check box under the "Details" column.
At that point I get the following message:
"Value Create transfer posting in column Step Type not listed in value table"
In which value table do I have to include the process step?
Thanks
Senthil
Hi John,
You get an overview of all available key figures in the PPT "Business Process & Interface Monitoring - Part 2" under https://service.sap.com/bpm --> Media Library --> Customer Information. It lists all Monitoring Objects and Key Figures with select-options. There is no document that describes the setup of every key figure, as the setup is very similar for most objects; hence the grey text boxes within the tool.
The NAST collector will always pick up everything older than x days, so you would have to process or delete the old entries. But it is good advice that for this rather technical key figure we should think about a delta mechanism like the one in our IDoc or ABAP dump monitors.
Best Regards
Volker -
How to create a package/sql agent job to monitor a table constantly
I have a situation like this: there is a log table, and once in a while a row gets inserted into it. I'd like to create an SSIS package / SQL Agent job to monitor this log table: as soon as a new row is inserted, something should process it (processing might take as long as a few minutes). Timing is quite important here; I want the new row processed as soon as it is inserted (so I don't like the idea of scheduling the SQL Agent job to run every 5 minutes or so). What is the best way to achieve this? Thanks a lot in advance!
Tim
Hi,
There is a system stored procedure, msdb.dbo.sp_start_job, which accepts parameters like:
@job_name sysname = NULL,
@job_id UNIQUEIDENTIFIER = NULL,
@error_flag INT = 1, -- Set to 0 to suppress the error from sp_sqlagent_notify if SQLServerAgent is not running
@server_name sysname = NULL, -- The specific target server to start the [multi-server] job on
@step_name sysname = NULL, -- The name of the job step to start execution with [for use with a local job only]
@output_flag INT = 1 -- Set to 0 to suppress the success message
First, create a trigger on your table which calls the above stored procedure. The trigger looks like this:
ALTER TRIGGER dbo.MyTrigger
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Start the SQL Agent job that will process the newly inserted row
    EXEC msdb.dbo.sp_start_job @job_name = N'Your Job Name';
END
Thanks
Prasad -
Table Entry Monitoring on Table BKPF
Hello dear SAP Community,
I set up table entry monitoring in BPM to look for parked documents (field BSTAT = 'V') in table BKPF with the corresponding company code, using "Number of Counted Entries".
So far everything works fine, but I do have a question regarding the output.
Is there a possibility to change/adapt the upper/lower range settings (RED if less/equal; YELLOW if less/equal; YELLOW if more than; RED if more than)
to simply YELLOW if more than and RED if more than?
Because in my case, if there are no selected values this causes a RED alert, but it should actually trigger a green message because that is the desired result.
Thank you for hints/solution
BR, Florian
Hi Florian,
Just to clarify: if you do not put any values in "RED, if less/equal; YELLOW, if less/equal", that causes an alert in your Table Entry Counter?
I'm using this type of monitor to check whether there are more than 0 entries of some type; for that I fill only "RED, if more than" and leave all the other fields empty. This configuration works for me. Could you describe in more detail what values you entered in the "Number of Counted Entries" tab?
BR,
Jolanta. -
EXP/IMP does not preserve MONITORING on tables
Consider the following (on 8.1.7):
1. First, create a new user named TEST.
SQL> CONNECT SYSTEM/MANAGER
Connected.
SQL> CREATE USER TEST IDENTIFIED BY TEST;
User created.
SQL> GRANT CONNECT, RESOURCE TO TEST;
Grant succeeded.
2. Connect as that user, create a table named T and enable monitoring on the table.
SQL> CONNECT TEST/TEST
Connected.
SQL> CREATE TABLE T(X INT);
Table created.
SQL> ALTER TABLE T MONITORING;
Table altered.
SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
TABLE_NAME MON
T YES
3. Export the schema using EXP.
SQL> HOST EXP OWNER=TEST FILE=TEST.DMP CONSISTENT=Y COMPRESS=N
4. Drop and recreate the user.
SQL> CONNECT SYSTEM/MANAGER
Connected.
SQL> DROP USER TEST CASCADE;
User dropped.
SQL> CREATE USER TEST IDENTIFIED BY TEST;
User created.
SQL> GRANT CONNECT, RESOURCE TO TEST;
Grant succeeded.
5. Finally, connect as the user, and import the schema.
SQL> CONNECT TEST/TEST
Connected.
SQL> HOST IMP FROMUSER=TEST TOUSER=TEST FILE=TEST.DMP
Now monitoring is no longer enabled:
SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
TABLE_NAME MON
T NO
Is this behaviour documented anywhere?
Are there any IMP/EXP options that will preserve MONITORING?
Apparently it's a non-public bug (#809007) in 8.1.7 which should be fixed in 9i.
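As a workaround until that fix, one could re-enable monitoring on every table after the import. A minimal PL/SQL sketch (looping the whole schema is my assumption, not something from the thread):

```sql
-- Re-enable MONITORING on any table where it is off after the import
BEGIN
  FOR t IN (SELECT table_name FROM user_tables WHERE monitoring = 'NO') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE "' || t.table_name || '" MONITORING';
  END LOOP;
END;
/
```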
-
Monitor all tables of a schema
Hi All,
I would like to monitor all the tables under a schema, hoping to get an alert (by email) if there is a decrease in the rows of any table owned by the schema (deleted rows).
My db version is 11.1.0.6
Please share your suggestions on how it can be done, also if possible with some good examples
Thanks,
Sathish
user13158979 wrote:
Can I monitor all the tables owned by the schema using the above method? FGA requires an object_name to be monitored; in this case, how can we use it?
eg:
DBMS_FGA.ADD_POLICY (object_schema => 'scott', object_name => 'emp', policy_name => 'mypolicy1', audit_condition => 'sal < 100', audit_column => 'comm, credit_card, expirn_date', handler_schema => NULL, handler_module => NULL, enable => TRUE, statement_types => 'INSERT, UPDATE');
You can do:
SQL> conn sys as sysdba
Connected.
SQL> SHOW PARAMETER AUDIT
NAME TYPE VALUE
audit_file_dest string /home/oracle/app/oracle/admin/
ORAWISS/adump
audit_sys_operations boolean FALSE
audit_syslog_level string
audit_trail string DB
SQL> AUDIT ALL BY ORAWISS BY ACCESS;
Audit succeeded.
SQL> AUDIT EXECUTE PROCEDURE BY ORAWISS BY ACCESS;
Audit succeeded.
SQL> AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY ORAWISS BY ACCESS;
Audit succeeded.
SQL> conn orawiss
Enter password:
Connected.
SQL> insert into orawiss.test_audit values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> delete from orawiss.test_audit;
1 row deleted.
SQL> commit;
Commit complete.
SQL> conn sys as sysdba
Connected.
COLUMN username FORMAT A10
COLUMN owner FORMAT A10
COLUMN obj_name FORMAT A10
COLUMN extended_timestamp FORMAT A35
SQL>
SELECT username,
extended_timestamp,
owner,
obj_name,
action_name
FROM dba_audit_trail
WHERE owner = 'ORAWISS'
ORDER BY timestamp;
USERNAME EXTENDED_TIMESTAMP OWNER OBJ_NAME ACTION_NAME
ORAWISS 08-AUG-11 09.59.48.837419 PM +02:00 ORAWISS TEST_AUDIT INSERT
ORAWISS 08-AUG-11 09.59.59.645848 PM +02:00 ORAWISS TEST_AUDIT DELETE
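If one did want the FGA approach from the quoted question instead of standard auditing, a policy has to be added per table, so a loop over the data dictionary is the usual trick. A hedged sketch (the policy-name prefix and the DELETE-only statement_types are illustrative assumptions; very long table names would need a shorter policy name):

```sql
-- Add one FGA policy per table in the schema (run as a privileged user)
BEGIN
  FOR t IN (SELECT table_name FROM dba_tables WHERE owner = 'ORAWISS') LOOP
    DBMS_FGA.ADD_POLICY(
      object_schema   => 'ORAWISS',
      object_name     => t.table_name,
      policy_name     => 'FGA_DEL_' || t.table_name,
      statement_types => 'DELETE',   -- the OP cares about deleted rows
      enable          => TRUE);
  END LOOP;
END;
/
```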
SQL> -
how to monitor the contents of a structure table in debug tool? any other way?
newbie here. <b>thanks</b>.Hi,
Do you mean you want to watch a particular field for particular value?
Then use watch point. in debugging mode, presee create watch point. give the field name or table name, set the releational operator and the value.
The program will stop when there is a match for the given condition
Regards,
Niyaz -
How i can disable MONITORING temp tables
In an Oracle 10g database, stats are gathered using the GATHER_STATS_JOB, and the current setting is STATISTICS_LEVEL=TYPICAL.
I need to disable monitoring for temp tables. I tried this command, but it changes nothing:
ALTER TABLE "RDC"."INV_TEMP" NOMONITORING;
How can I disable this? I read in some articles that this clause is deprecated in Oracle 10g, so how can I disable monitoring?
tks
rda
Justin wrote:
I would suggest that this is probably a bad idea, though. If there are no statistics on an object,
you're basically asking the CBO to work blind. Why not gather statistics after loading a fresh set of data
into the table?
I am running stats once a week, so I cannot gather stats on each fresh set of data.
Nicolas wrote:
Setting the stats to given values and locking them (so they cannot be set to 0 by any gather-stats process) may help as well.
Justin
Nicolas.
I cannot understand this: >>to be not set to 0 by any gather stat process
OK, my process is as follows:
Through SQL*Loader I load data into a temp table, which has an AFTER INSERT trigger. Once the data is inserted into the temp table, the trigger checks some conditions and inserts the rows into a permanent table. After the process finishes, the temp table is truncated. This whole process runs every day.
As I already mentioned, stats are collected only once a week, so there is no point in keeping stats for this temp table; and if I did keep them, they would give a wrong execution plan and the inserts into this temp table would be slow. Am I right about this?
This is the reason I don't want stats at all on the temp table.
Thanks Justin & Nicolas
Best rds -
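Nicolas's suggestion can be sketched as follows: delete the existing stats, optionally set representative values, then lock them so the weekly/nightly job leaves the table alone. The owner and table names come from the thread; the representative row and block counts are illustrative assumptions:

```sql
BEGIN
  -- Remove whatever stats exist on the staging table
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'RDC', tabname => 'INV_TEMP');
  -- Optionally set stats reflecting a typically loaded table (values are guesses)
  DBMS_STATS.SET_TABLE_STATS(ownname => 'RDC', tabname => 'INV_TEMP',
                             numrows => 100000, numblks => 1000);
  -- Lock them so GATHER_STATS_JOB does not overwrite them
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'RDC', tabname => 'INV_TEMP');
END;
/
```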
MIB to monitor sticky table entries
Does anyone know the MIB and OID I can use to monitor the current number of entries in my sticky table
This does not currently exist, but the developers agreed it would be useful, so they will look at the possibility of implementing it.
Thanks,
Gilles. -
BPM: percentage growth in table monitoring
Hello all,
I wanted to know whether it's possible to monitor a table in BPM with percentage growth values, e.g. to get an alert when the table grows by 10% or more per day.
I only found growth monitoring by absolute values, like 100,000 records inserted.
Thanks in advance for your answers.
Best Regards,
Marcel Winder
Hello,
I just realized that an existing Setup Guide for the Table Entry Counter is not uploaded to the SAP Service Marketplace, where it should be under http://service.sap.com/bpm > Media Library > Technical Information.
This document describes how dynamic dates can be realized. I guess the document will be uploaded by tomorrow. In the corresponding section it says:
1.5.3 Dynamic Date Filtering
There are many use cases where the result of a table count is time-dependent. For our example, we might be interested only in the number of short dumps created on the current day. However, inside the BPMon Setup Session there are only static filter values, like a fixed day entry for a date field. To achieve dynamic date filtering, which means calculating relative dates instead of absolute dates, the Table Counter provides a special syntax for defining fixpoints and offsets for relative dates.
Syntax for Relative Dates
Instead of a fixed (absolute) date, you can enter a special keyword for the start date (prefixed by a $ sign) and optionally an additional offset as a difference in days.
Syntax = <StartDate>[<Difference>]
For <StartDate> the following keywords are available:
Keyword for <StartDate> Description Example for 2008/09/24
$TODAY current date 2008/09/24
Hence you could select everything > $TODAY-30; you should then get all entries of the last 30 days.
Best Regards
Volker -
Hi
Can anyone explain the meaning of each column of the TimesTen SYS.MONITOR table?
regards
lewismm
Edited by: lewismm on Sep 25, 2008 9:28 AM
Hi Lewis,
Thanks for the clarification. SYS.MONITOR does not hold replication performance metrics; it is more concerned with 'engine' performance. We have rudimentary statistics for replication, which are available via the ttRepAdmin utility's -showconfig option. The numbers available, for example records per second and latency, are based on samples and only really mean anything if you have a steady flow of replication work. If you see -1 under the headings, we don't have a large enough sample to provide a meaningful metric.
ttRepAdmin -showstatus is another option which will provide you with some information on the amount of data being sent across the network.
You could also look at the distance between the last written LSN (the last log record generated) and the replication hold LSN (the position in the log of the oldest acknowledged batch) via ttBookmark on the master side to determine how far behind replication is. In cases where replication has fallen far behind the workload on the master, you could get into the position where the replication agent is reading log records from disk rather than from the log buffer in memory. In that situation you would begin to see LOG_FS_READS in SYS.MONITOR increase.
Hope this helps
Paul
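To watch for the disk-read condition Paul describes, one could periodically sample the cumulative counter and compare successive readings. A minimal sketch (only the column named in the post is assumed to exist):

```sql
-- Sample the cumulative log-from-disk read counter; a growing delta between
-- successive samples means the replication agent has fallen behind the log buffer.
SELECT log_fs_reads FROM sys.monitor;
```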