Statistics via SAPWL_WORKLOAD_GET_STATISTIC
Hi,
I am currently working on generating statistics about the number of calls to customer developments (reports and function modules).
For this purpose I used the function module SAPWL_WORKLOAD_GET_STATISTIC.
Unfortunately, this module does not provide information about function modules. Can somebody tell me which system parameters play a role in this scenario, and which further settings might have to be made for this task?
Maybe there are also other function modules I could use to obtain these statistics.
The report must run on releases 3.1I and 4.0B, because we want to migrate these systems to ERP without carrying over old, never-used programs.
I appreciate your valuable help in advance.
Best Regards,
Marc
Hi,
Try the function module /SDF/OCS_GET_STAT_INFO.
Looking at its code, it uses the following pattern to verify (via TFDIR) that a function module exists before calling it:
FUNCMODNAME = 'S390_GET_CURRENT_SSID'.
SELECT SINGLE FUNCNAME
  FROM TFDIR
  INTO FUNCTION_CHECK
  WHERE FUNCNAME = FUNCMODNAME.
IF SY-SUBRC = 0.
  CALL FUNCTION FUNCMODNAME
    IMPORTING
      SSID   = DBSID_DB2
      DBHOST = DBHOST_DB2.
ENDIF.
I hope this helps.
Thanks,
Naga
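Once per-program call counts have been extracted (for example from the workload statistics discussed above), flagging customer developments that were never called is a simple aggregation. A minimal sketch in plain Python with hypothetical program names and records (not actual SAPWL_WORKLOAD_GET_STATISTIC output):

```python
from collections import Counter

def find_unused(candidates, workload_records):
    """Return the customer developments that never appear in the workload records.

    candidates       -- names of customer reports/function modules (e.g. Z*/Y*)
    workload_records -- iterable of (program_name, call_count) tuples, as one
                        might extract from workload statistics
    """
    calls = Counter()
    for name, count in workload_records:
        calls[name] += count
    # A Counter returns 0 for names it never saw, so these are the unused ones.
    return sorted(p for p in candidates if calls[p] == 0)

# Hypothetical data: three custom objects, one of which is never called
candidates = ["ZREPORT_A", "ZREPORT_B", "Z_FM_LEGACY"]
records = [("ZREPORT_A", 12), ("ZREPORT_B", 3), ("ZREPORT_A", 5)]
print(find_unused(candidates, records))  # -> ['Z_FM_LEGACY']
```

The hard part remains collecting the records on old releases; the aggregation itself is trivial once the data exists.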
Similar Messages
-
How do I gather statistics via OEM db control?
Hi,
According to the documentation,
I may setup a job to automatically
gather statistics for all the segments in my database
via EM database control.
I can't find any documentation on how to do this.
I did find a link in EM which will allow me
to gather statistics ad hoc but I want it automated.
Thanks,
-moi
There should be a stats gathering program scheduled
by default. Check under Database > Administration >
Scheduler > Jobs in Database Control. The job is called
SYS.GATHER_STATS_JOB. You will need to login as SYS to
see the job. The job runs the
DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure.
You can, of course, schedule any other
DBMS_STATS.GATHER_* procedures if you prefer.
Hope this helps.
Kailash. -
Get Proxy Service statistics via WLST
Hello! When I try to get statistics for a proxy service, I only get a list of proxies. How can I get the actual statistics?
This code:
alsbCore = findService(ALSBConfigurationMBean.NAME, ALSBConfigurationMBean.TYPE)
allRefs = alsbCore.getRefs(Ref.DOMAIN)
for ref in allRefs:
    typeId = ref.getTypeId()
    if typeId == "ProxyService":
        cd('domainRuntime:/DomainServices/ServiceDomain')
        print cmo.getProxyServiceStatistics([ref], ResourceType.SERVICE.value(), '')
returns:
{ProxyService SimpleREST/Products=com.bea.wli.monitoring.ServiceResourceStatistic@81f1b5}
{ProxyService project_02/ru_tii_crm_ws_ps=com.bea.wli.monitoring.ServiceResourceStatistic@82609f}
{ProxyService osb_tii/crm_ws_ps_https_ccert=com.bea.wli.monitoring.ServiceResourceStatistic@82c605}
{ProxyService project_04/ru_tii_crm_ws_ps=com.bea.wli.monitoring.ServiceResourceStatistic@834864}
{ProxyService project_01/ru_tii_lkk_yl_ws_ps=com.bea.wli.monitoring.ServiceResourceStatistic@8868f1}
These look like object identities or hash codes.
What function can take this result as a parameter and return the actual statistics?
# Set some constants
import os
import socket

username = os.getenv('LOGIN')
password = os.getenv('PASSWORD')
localhost = socket.gethostname()
domain = os.getenv('WL_DOMAIN')
domain_dir = os.getenv('WL_DOMAIN_DIR')
mwHome = os.getenv('MW_HOME')
print mwHome
url = 't3s://' + localhost + ':8002'
print url
connect(username, password, url)
from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean
from com.bea.wli.config import Ref
from java.lang import String
from com.bea.wli.sb.util import Refs
from com.bea.wli.sb.management.configuration import CommonServiceConfigurationMBean
from com.bea.wli.sb.management.configuration import SessionManagementMBean
from com.bea.wli.sb.management.configuration import ProxyServiceConfigurationMBean
from com.bea.wli.monitoring import StatisticType
from com.bea.wli.monitoring import ServiceDomainMBean
from com.bea.wli.monitoring import ServiceResourceStatistic
from com.bea.wli.monitoring import StatisticValue
from com.bea.wli.monitoring import ResourceType
domainRuntime()
alsbCore = findService(ALSBConfigurationMBean.NAME, ALSBConfigurationMBean.TYPE)
allRefs = alsbCore.getRefs(Ref.DOMAIN)
for ref in allRefs:
    typeId = ref.getTypeId()
    if typeId == "ProxyService":
        cd('domainRuntime:/DomainServices/ServiceDomain')
        # getProxyServiceStatistics returns a map of ref -> ServiceResourceStatistic
        props = cmo.getProxyServiceStatistics([ref], ResourceType.SERVICE.value(), '')
        print props
        for rs in props[ref].getAllResourceStatistics():
            for e in rs.getStatistics():
                if e.getType() == StatisticType.COUNT:
                    print e.getName() + " (" + str(e.getType()) + "): " + str(e.getCount())
                if e.getType() == StatisticType.INTERVAL:
                    print e.getName() + " (" + str(e.getType()) + "): " + str(e.getMin()) + " " + str(e.getMax()) + " " + str(e.getAverage()) + " " + str(e.getSum())
                if e.getType() == StatisticType.STATUS:
                    print e.getName() + " (" + str(e.getType()) + "): " + str(e.getCurrentStatus()) + " (" + str(e.getInitialStatus()) + ")" -
Retrieve task statistics via soap
Hello,
I would need some task-list-metadata like the total number of tasks,
the number of assigned tasks, the number of completed tasks just
like the Chart-Feature in the BPM Worklist Application.
I need to be able to retrieve that data via soap. I couldn't find any
available webservice, which would be able to accomplish that.
Documentation tells me that there exists a 'Tasks Report Service'
which may be appropriate, but it is supported only via the plain Java API.
So before starting to invent my own solution I would like to know
whether there is an existing web service which supports what I need.
Kind regards,
Martin -
Transaction history via SAPWL_WORKLOAD_GET_STATISTIC
Hi,
I have been trying to get transaction history out of SAP by using the SAPWL_WORKLOAD_GET_STATISTIC API. Is this the correct approach?
I execute SAPWL_WORKLOAD_GET_STATISTIC with the 'd', 'w' or 'm' parameter and then retrieve the following fields from the HitList_DBCAlls structure returned by the call:
a.TCode (Tcode)
b.Account (User Name)
c.Date (Transaction execution date)
d.Endti (Transaction execution time)
e.Mandt (SAP Client)
Will this API return all the transactions executed in the time period, or only the top X worst transactions in terms of performance? -
How to get CGNAT statistics from ASR?
Hi team,
Is it possible to get via SNMP the NAT statistics that are shown by the 'show cgn nat44 natXX stat' command?
We use IOS XR 4.3.1 and run RANCID every 1 and 4 minutes, but sometimes the cgn_ma process stays in the Blocked state.
Regards,
Konstantin
Hi Konstantin,
MIB support is not available for CGN yet.
Replied about cgn_ma issue in the other thread.
regards,
Somnath. -
Web Monitor - Statistics available from
Hi
I'm trying to generate some user statistics via the Web Monitor; however, it is only able to return statistics from 1st January 2014 onwards, although the log files go back to May 2012.
I've set the "Maximum Report Size" to the max allowed limit of 65535 and saved and reactivated the configuration but I'm still having the same issue.
Am I missing a trick somewhere?
Any help appreciated.
The log tells us NOTHING about why the stats aren't working.
In the admin interface we get the following error message when trying to view the stats:
Error
Failure obtaining statistics from Web Cache cache server: bad response code received.
Please check that the cache server is up and running.
BUT the cache IS running, and we can access pages and the logs DO show activity.
Any idea WHY the admin page won't display the statistics?
Rob -
Hi All,
I wanted to know about the statistics checks on Cubes or ODS that should be carried out in the system (BW 3.5).
Which statistics checks should I carry out daily, and which at other frequencies?
Also, I wanted to know about the significance of these statistics checks.
Thanks and Best Regards,
Sharmishtha
Hi,
For an overview of your table statistics, go to the DBSTATT* tables; for instance DBSTATTORA if your DB is Oracle. It will show you how each table was analyzed and when it was last analyzed.
DB20 is the central transaction for checking statistics.
You should refresh your statistics via the performance tab of an InfoProvider and/or via process chains.
Statistics must be updated regularly so that your RDBMS optimizer calculates the right execution plan for a particular query.
The frequency for updating statistics depends on the number of records changed/inserted/deleted compared with the total (well shown in DB20).
hope this helps,
Olivier. -
Warnings Pool or cluster table selected to check/collect statistics
Dear all,
I am getting an error in the DB13 backup.
We are using SAP ECC 5 and
Oracle 9i on Windows 2003.
On the production server, the DB13 UpdateStats job suddenly ends with return code 0001 (success with warnings):
BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
And in db02
Missing in R/3 DDIC 11 index
MARA_MEINS
MARA_ZEINR
MCHA_VFDAT
VBRP_ARKTX
VBRP_CHARG
VBRP_FKIMG
VBRP_KZWI1
VBRP_MATKL
VBRP_MATNR
VBRP_SPART
VBRP_WERKS
Please guide steps how to build index and Pool or cluster table problem.
Thanks,
Kumar
> BR0819I Number of pool and cluster tables found in DDNTT for owner SAPPRD: 169
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXB
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.EPIDXC
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLSP
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.GLTP
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KAPOL
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.KOCLU
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.M_IFLM
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBCLU
> BR0871W Pool or cluster table selected to check/collect statistics: SAPPRD.VBFCL
Up to Oracle 9i, the rule-based optimizer was still used for pool/cluster tables for reasons of plan stability (e.g. always take the index).
To ensure that this is the case, these tables/indexes must not have CBO statistics.
Therefore these tables are usually excluded from CBO statistics collection via a DBSTATC entry. You can modify this setting in transaction DB21.
> And in db02
>
>
Missing in R/3 DDIC 11 index
> MARA_MEINS
> MARA_ZEINR
> MCHA_VFDAT
> VBRP_ARKTX
> VBRP_CHARG
> VBRP_FKIMG
> VBRP_KZWI1
> VBRP_MATKL
> VBRP_MATNR
> VBRP_SPART
> VBRP_WERKS
Well, these indexes have been set up directly in the database and not (as they are supposed to be) via SE11. As the indexes use a naming scheme that is not supported by the ABAP Dictionary, the easiest way to get rid of the warnings is to check which columns each index covers, drop the indexes at the DB level, and recreate them via SE11.
Best regards,
Lars -
Collecting Statistics on iView usage.
We are using WebTrends to track site statistics for all of our web sites, but the one piece of the puzzle we are missing is being able to track iView usage on our portals (EP6 SP2 and EP6 SP9).
WebTrends provides software that tracks statistics via a javascript include file, and we are trying to determine if there is one place we could put this javascript - either include it in another javascript file, or a page, that is used by all the iViews.
Is there such a place (thing, file, etc...)? Or is there some other solution that SAP provides that would help us track iView usage? Any help with this would be greatly appreciated.
Thanks,
Patrick
Hi Patrick,
I've used WebTrends some years ago, but solely the log file analysis.
For a certain iView, you can certainly include some JS; it's mainly a question of whether you have the source code under your control, or whether it's a standard portal iView delivered by SAP (then you would have to decompile / add / compile / redeploy, but you would lose everything when an update gets deployed).
There is also, here in the download section, the "Portal Activity Reporting" (Downloads -- Developer or Administrator Tool -- EP); some people regularly report problems with it, but maybe you could give it a try...
Hope it helps
Detlev
PS: Please consider rewarding points for helpful answers. Thanks in advance! -
Unable to capture traffic with Ethanalyzer on N5K-5548
Version - 5.0(2)N2(1)
My understanding is that we need
1) Access-List defined, with statistics configured to get matched traffic onto control plane
2) Access-List applied to an interface, via command "ip port access-group mycap in"
3) the ethanalyzer command, e.g. ethanalyzer local interface mgmt capture-filter "net 1.1.1.0/24" (also tried interfaces inbound-hi & inbound-low)
I see matches on the access-list, but not seeing anything captured.
What am I missing?
ip access-list mycap
statistics per-entry
10 permit ip any 1.1.1.0/24
20 permit ip 1.1.1.0/24 any
30 permit ip any any
Just FYI, on a similar side note: we are going to enhance the capability of the capture filter to collect the necessary statistics via the following enhancement:
CSCsz99277 - ethanalyzer capture filter broken
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCsz99277 -
Dear all,
What are the advantages if we do
ANALYZE TABLE <table name> COMPUTE STATISTICS
Whenever I run this, it causes huge CPU utilization and takes a long time.
As far as I know, if we do this:
1) it will analyze the table and count the number of rows in the table properly.
Is there any advantage for indexes if we do this operation?
Regards,
Vamsi
Hey,
Actually this command is old (9i and below, I think); it is still there for backwards compatibility.
Since you are computing the statistics, Oracle goes over all the records to get the stats. You can estimate statistics instead.
Or, even better, you can use the newer DBMS_STATS package:
EXEC DBMS_STATS.GATHER_TABLE_STATS (ownname=>'SCHEMA_NAME', tabname=>'TABLE_NAME', estimate_percent=>dbms_stats.auto_sample_size, degree=>2);
This uses the DBMS_STATS package to estimate the statistics via a sample size set automatically by Oracle.
You can also use a similar command to estimate the stats of the whole schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS (ownname=> 'SCHEMA_NAME', estimate_percent=>dbms_stats.auto_sample_size, degree=>2);
From previous experience, creating an index is not enough, you have to gather the stats on the related table, so that the execution plan gets optimized.
This command gathers table stats, histograms, uniqueness, etc. -
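The COMPUTE versus ESTIMATE distinction discussed above is essentially exact aggregation versus sampling: scanning a small random sample and scaling up gives an approximate count at a fraction of the cost. A language-agnostic sketch of the idea in plain Python (hypothetical data; this illustrates the principle, not Oracle's internals):

```python
import random

def compute_row_count(table):
    """COMPUTE-style: visit every record (accurate but expensive)."""
    return len(table)

def estimate_row_count(table, sample_pct=10.0, seed=42):
    """ESTIMATE-style: visit only a random sample and scale up (cheap, approximate)."""
    random.seed(seed)
    sample = [row for row in table if random.random() < sample_pct / 100.0]
    # Scale the sample count back up to the full-table estimate
    return int(round(len(sample) * 100.0 / sample_pct))

table = [{"id": i} for i in range(100000)]
exact = compute_row_count(table)    # reads all 100,000 rows
approx = estimate_row_count(table)  # reads roughly 10,000 rows
print(exact, approx)
```

With dbms_stats.auto_sample_size, Oracle picks the sample fraction itself, which is why it is usually preferred over a hand-tuned ESTIMATE percentage.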
Why is LOWER function producing a cartesian merge join, when UPPER doesn't?
Hi there,
I have an odd scenario that I would like to understand correctly...
We have a query that is taking a long time to run on one of our databases. Further investigation of the explain plan showed that the query was in fact producing a Cartesian merge join even though join criteria are clearly specified. I know that the optimiser can and will do this if it is a more efficient way of producing the results; however, in this scenario it is producing the Cartesian merge on two unrelated tables and seemingly ignoring the join condition...
*** ORIGINAL QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Ignored Join Condition
AND LOWER(mua.mua_extu) = LOWER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** CARTESIAN EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 83
20 NESTED LOOPS Cost: 83 Bytes: 176 Cardinality: 1
18 NESTED LOOPS Cost: 82 Bytes: 148 Cardinality: 1
15 NESTED LOOPS Cost: 80 Bytes: 134 Cardinality: 1
13 NESTED LOOPS Cost: 79 Bytes: 123 Cardinality: 1
10 NESTED LOOPS Cost: 78 Bytes: 98 Cardinality: 1
7 NESTED LOOPS Cost: 77 Bytes: 74 Cardinality: 1
NOTE: The Cartesian product is performed on the men_mre & temp_webct_users tables, not the men_mua & temp_webct_users tables specified in the join condition.
4 MERGE JOIN CARTESIAN Cost: 74 Bytes: 32 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 BUFFER SORT Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
2 TABLE ACCESS FULL SIPR.MEN_MRE Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
6 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 42 Cardinality: 1
5 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
9 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
8 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID SIPR.INS_SPR Cost: 1 Bytes: 25 Cardinality: 1
11 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Cardinality: 1
14 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
17 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 14 Cardinality: 1
16 INDEX RANGE SCAN SIPR.MEN_MUAI3 Cost: 2 Cardinality: 1
19 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 28 Cardinality: 1
After speaking with data experts I realised that one of the fields being LOWERed for the join condition almost always holds uppercase values, so I tried modifying the query to use the UPPER function rather than the LOWER one originally used. In this scenario the query executed in seconds and the Cartesian merge was eradicated, which by all accounts is a good result.
*** WORKING QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Working Join Condition
AND UPPER(mua.mua_extu) = UPPER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** WORKING EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 13
20 SORT AGGREGATE Bytes: 146 Cardinality: 1
19 NESTED LOOPS Cost: 13 Bytes: 146 Cardinality: 1
17 NESTED LOOPS Cost: 12 Bytes: 134 Cardinality: 1
15 NESTED LOOPS Cost: 11 Bytes: 115 Cardinality: 1
12 NESTED LOOPS Cost: 10 Bytes: 91 Cardinality: 1
9 NESTED LOOPS Cost: 7 Bytes: 57 Cardinality: 1
6 NESTED LOOPS Cost: 6 Bytes: 31 Cardinality: 1
4 NESTED LOOPS Cost: 5 Bytes: 20 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 42 Cardinality: 3
2 INDEX RANGE SCAN EXETER.TEST Cost: 1 Cardinality: 1
5 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MRE Cost: 2 Bytes: 26 Cardinality: 1
7 INDEX RANGE SCAN SIPR.MEN_MREI2 Cost: 2 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 34 Cardinality: 1
10 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
14 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
13 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
16 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 19 Cardinality: 1
18 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Bytes: 12 Cardinality: 1
*** RESULT ***
COUNT(*)
83299
I am still struggling to understand why this worked, as to my knowledge the LOWER and UPPER functions are similar enough in behaviour, and regardless of that, why would one version cause the optimiser to effectively ignore a join condition?
If anyone can shed any light on this for me it would be very much appreciated.
Regards,
Kieron
Edited by: Kieron_Bird on Nov 19, 2008 6:09 AM
Edited by: Kieron_Bird on Nov 19, 2008 6:41 AM
My mistake on the predicate information; I was in a rush to run off to a meeting when I posted the entry...
*** UPPER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 86,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 241 | 35186 | 736 | 86,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 86,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 86,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 86,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 86,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 86,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 86,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 86,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 86,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 86,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 86,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 86,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 86,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 86,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 86,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(UPPER("MUA"."MUA_EXTU")=UPPER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
*** LOWER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 88,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 257K| 35M| 736 | 88,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 88,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 88,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 88,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 88,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 88,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 88,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 88,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 88,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 88,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 88,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 88,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 88,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 88,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 88,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(LOWER("MUA"."MUA_EXTU")=LOWER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
As you state, something has obviously changed, but nothing obvious has been changed.
We gather statistics via...
exec dbms_stats.gather_schema_stats(ownname => 'USERNAME', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => 4, granularity => 'ALL', cascade => TRUE);
We run a script nightly which works out which indexes require a rebuild and rebuilds only those; it doesn't just rebuild all indexes.
It would be nice to be able to use the 10g statistics history, but on this instance we aren't yet at that version, hopefully we will be there soon though.
Hope this helps,
Kieron -
Partition last update date?
Hi,
Is there ,somewhere, where I can find out when a partition for a table was last updated?
I have searched several places but have not come up with an answer.
I am using Oracle version 10.2.0.3.
Hi Robert,
I have created a partitioned table and I am trying to find the best way to update the statistics via dbms_stats.gather_table_stats. I only want to update the stats on partitions that have recently been touched by the most recent data load.
Or does Oracle handle this behind the scenes? -
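The approach asked about above (refreshing stats only on partitions touched by the latest load) boils down to comparing each partition's last data change with its last analyze time. A generic sketch in plain Python with hypothetical partition names and timestamps (Oracle can also track staleness itself via table monitoring, but the logic is the same):

```python
from datetime import datetime

def partitions_needing_stats(last_modified, last_analyzed):
    """Return partitions whose data changed after their stats were last gathered.

    last_modified -- dict partition_name -> datetime of last data load
    last_analyzed -- dict partition_name -> datetime of last stats gathering
                     (a missing entry means the partition was never analyzed)
    """
    stale = []
    for part, modified in last_modified.items():
        analyzed = last_analyzed.get(part)
        if analyzed is None or modified > analyzed:
            stale.append(part)
    return sorted(stale)

# Hypothetical example: P2024Q2 was loaded but never analyzed
mods = {"P2024Q1": datetime(2024, 4, 1), "P2024Q2": datetime(2024, 7, 1)}
stats = {"P2024Q1": datetime(2024, 5, 1)}
print(partitions_needing_stats(mods, stats))  # -> ['P2024Q2']
```

Each partition returned would then get its own gather_table_stats call restricted to that partition, rather than re-analyzing the whole table.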
Hello Everyone,
I have a quick question about stackwise plus technology. I would like to confirm that there is no redundancy at the ethernet switch port in terms of a physical problem. The reason I ask is that we are deploying stacked switches shortly to ensure high availability but curious to know whether anyone has seen issues with physical ports failing and not the chassis/PSUs. I understand that Etherchannels can be used but we have security cameras that cannot suffer any outages and would be connected to a single port on the stack. I'm guessing that my option is to monitor the Switch Ports statistics via SNMP and move the camera to another port if this ever happens.
Thanks in advance.
Cheers.
Evan
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
Sure, ports can fail without the whole switch failing, but edge redundancy, such as using dual links (often configured as a bundled channel), between the host and two stack members addresses both switch port failure and stack member failure.
Unless your security cameras support dual links, you're going to have a single point of failure at the edge port.
The next best option, as you've already noted, would be to have online, "warm" spare ports that you can quickly repatch into. Ideally you have enough spare ports to allow repatching in case a whole switch member fails.
(BTW, when you have spare ports, you don't have to set aside a whole stack member sitting empty. For example, instead of having a dual stack with only one switch member populated and the other not at all, split the populated ports across both stack members. That way, if a stack member fails, you don't lose all your hosts, only half. [With cameras, depending on their views, you might be able to overlap their coverage across multiple stack members.])