BW statistics for ODS?
Hi,
Is there a statistics report that gives a list of queries on ODS objects and their performance?
I found such reports for InfoCubes.
I need to look at queries on ODS objects which run slowly.
Thanks,
Sai.
Hi,
Look at this
Have you marked these ODS objects in RSA1 -> Tools -> BW Statistics for InfoProvider, with the checkboxes for OLAP and WHM ticked?
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b
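Once the OLAP and WHM flags are set, query runtimes for the ODS should be recorded in table RSDDSTAT, keyed by InfoProvider. As a rough sketch of how to check (the column names QTIMEOLAP and QTIMEDB are assumptions from BW 3.x; verify the actual field names in SE11 first):

```sql
-- Assumed RSDDSTAT columns (verify in SE11): INFOCUBE (InfoProvider), QTIMEOLAP, QTIMEDB
SELECT infocube,
       COUNT(*)       AS executions,
       SUM(qtimeolap) AS total_olap_time,
       SUM(qtimedb)   AS total_db_time
FROM   rsddstat
WHERE  infocube = 'YOUR_ODS_NAME'   -- placeholder ODS technical name
GROUP  BY infocube;
```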
Similar Messages
-
Hi Everyone,
In the search criteria I found documents only about BW statistics for InfoCubes, but not for ODS objects.
After executing queries on the ODS objects in BEx, no entries are made in table RSDDSTAT, so where else are they stored?
Can anyone please let me know how I can find the query runtime of ODS objects?
Thanks,
Prashant.
Hi,
Yes, I marked the ODS objects for both OLAP and WHM and executed the query in the BEx Analyzer, but I still do not get any entries for these objects in table RSDDSTAT.
What could be the problem? Is there a different procedure to find the query runtime of ODS objects? Please let me know.
Thanks,
Prashant. -
Can anyone tell me why my ODS is not in the list for activation of statistics?
I have created a couple of ODS' and transported. I'd like to use stats to analyse the data loads but only some of the ODS' are in the list to activate (Tools->BW Statistics for InfoProviders).
I also can't see any difference between the ODS' that would cause some to be listed & others not - I must have missed something, right?
BW 3.5 p17
Thanks,
Patrick
Just a thought...
are the settings the same for all ODS...are they all "bex reporting".
do you have data in all the ods?
did you create queries for all ods?
did you refresh the AWB before getting the list? -
1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
2. Do row source statistics give some kind of understanding of extended stats?
You can get row source statistics only *after* the SQL has been executed. An explain plan by itself cannot give you row source statistics.
To get row source statistics either set STATISTICS_LEVEL='ALL' in the session that executes theSQL OR use the Hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor
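As a sketch (the table and predicate are placeholders), the usual pattern is to run the statement with the hint and then pull the plan with actual row counts:

```sql
-- The hint makes Oracle collect row source statistics for this execution.
SELECT /*+ gather_plan_statistics */ *
FROM   employees                 -- placeholder table
WHERE  department_id = 10;       -- placeholder predicate

-- 'ALLSTATS LAST' shows estimated (E-Rows) next to actual (A-Rows)
-- for the last execution of the previous statement in this session.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing E-Rows against A-Rows line by line shows where the optimizer's estimates diverge from reality.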
Hemant K Chitale -
How can i see visitor statistics for web page hosted on osx lion server
Hello
how can i see visitor statistics for web page hosted on osx lion server
Thanks
Adrian
Just click inside the URL address bar; the full URL address will appear highlighted.
Best. -
ABAP Routine for Deleting and creating index for ODS in Process chains
Any pointers for the ABAP Routine code for deleting and creating index for ODS in Process chains.
Hi Sachin,
find the following ABAP code to delete the ODS index.
DATA: v_ods TYPE rsdodsobject.
MOVE 'ODSname' TO v_ods.
CALL FUNCTION 'RSSM_PROCESS_ODS_DROP_INDEXES'
  EXPORTING
    i_ods = v_ods.
To create the index:
DATA: v_ods TYPE rsdodsobject.
MOVE 'ODSname' TO v_ods.
CALL FUNCTION 'RSSM_PROCESS_ODS_CREA_INDEXES'
  EXPORTING
    i_ods = v_ods.
hope it helps....
regards,
Raju -
Disable Statistics for specific Tables
Is it possible to disable statistics gathering for specific tables?
If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
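A minimal sketch of that approach (STATS_EXCLUDE and its TABLE_NAME column are made-up names for the exclusion table):

```sql
-- Gather statistics for every table in the current schema except
-- those listed in the assumed helper table STATS_EXCLUDE.
BEGIN
  FOR t IN (SELECT table_name
              FROM user_tables
             WHERE table_name NOT IN (SELECT table_name FROM stats_exclude))
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                  tabname => t.table_name,
                                  cascade => TRUE);  -- include index statistics
  END LOOP;
END;
/
```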
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
To get Run time Statistics for a Data target
Hello All,
I need to collect one month of data (i.e. the start time and end time of loads to the cube) for documentation work. Could someone help me find the easiest way to get the above-mentioned data in the BW production system?
Please guide me to the query name for getting the runtime statistics for the cube.
Thanks in advance,
Anjali
It will fetch the data if the BI statistics are turned on for that cube.
please verify these links
http://help.sap.com/saphelp_nw04s/helpdata/en/8c/131e3b9f10b904e10000000a114084/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm -
Help,why brconnect do not collect statistics for mseg table?
I found that the statistics for table MSEG are too old,
so I checked the logs in DB13, and the scheduled job does not collect statistics for MSEG.
Then I executed manually: brconnect -c -u system/system -f stats -t mseg -p 4
This command still does not collect statistics for MSEG.
KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
BR0801I BRCONNECT 7.00 (46)
BR0154E Unexpected option value 'u2013f' found at position 8
BR0154E Unexpected option value 'collect' found at position 9
BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
BR0804I BRCONNECT terminated with errors
KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
BR0801I BRCONNECT 7.00 (46)
BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
BR0807I Name of database instance: PRD
BR0808I BRCONNECT action ID: ceenwjse
BR0809I BRCONNECT function ID: sta
BR0810I BRCONNECT function: stats
BR0812I Database objects for processing: MSEG
BR0851I Number of tables with missing statistics: 0
BR0852I Number of tables to delete statistics: 0
BR0854I Number of tables to collect statistics without checking: 0
BR0855I Number of indexes with missing statistics: 0
BR0856I Number of indexes to delete statistics: 0
BR0857I Number of indexes to collect statistics: 0
BR0853I Number of tables to check (and collect if needed) statistics: 1
Owner SAPPRD: 1
MSEG
BR0846I Number of threads that will be started in parallel to the main thread: 4
BR0126I Unattended mode active - no operator confirmation required
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0877I Checking and collecting table and index statistics...
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0879I Statistics checked for 1 table
BR0878I Number of tables selected to collect statistics after check: 0
BR0880I Statistics collected for 0/0 tables/indexes
BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
BR0802I BRCONNECT completed successfully
the log says:
Number of tables selected to collect statistics after check: 0
Could you give some advice? Thanks a lot.
Hello,
If you would like to force the creation of the stats for table MSEG, you need to use the second -f (force) switch.
If you leave out that -f switch, the stats_change_threshold parameter is applied, as you said correctly:
[http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm]
[http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm]
You have tried to do this in your second example :
==> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
Therefore you received:
BR0154E Unexpected option value 'u2013f' found at position 8
BR0154E Unexpected option value 'collect' found at position 9
The statement itself is correct; however, the character in front of the second 'f' switch is not a plain hyphen: 'u2013' is the escaped form of an en dash, which brconnect does not accept as an option prefix.
Try again with the following statement (a plain -f instead of u2013f) and you will see that it will work:
==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
I hope this can help you.
Regards.
Wim -
Which event classes should I use for finding good indexes and statistics for queries in an SP?
Dear all,
I am trying to use Profiler to create a trace, so that it can be used as the workload in the
"Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me about the event classes which I should use in the trace.
The stored proc contains three insert queries which insert data into a table variable.
Finally, a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
There are three cases where I am using the above structure of the SP, so there are three SPs; I will choose one based on their performance.
1) There is only one table with three inserts which go into a table variable, with a final sequence-creation block.
2) There are 15 tables with 45 inserts which go into a table variable, with a final sequence-creation block.
3) There are 3 tables with 9 inserts which go into a table variable, with a final sequence-creation block.
In all the above cases the number of records will be around 500,000 (5 lakh).
The purpose is optimization of the queries in the SP,
i.e. which event classes I should use for finding good indexes and statistics for queries in the SP.
Yours sincerely
"Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me about the event classes which I should use in the trace.
You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA. See
http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
If you are capturing the workload of a production server, I suggest you not do that directly from Profiler as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace
definition (File-->Export-->Script Trace Definition). You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload
is captured with sp_trace_setstatus:
DECLARE @TraceID int = <trace id returned by the trace create script>
EXEC sp_trace_setstatus @TraceID, 0; --stop trace
EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Managing statistics for object collections used as table types in SQL
Hi All,
Is there a way to manage statistics for collections used as table types in SQL.
Below is my test case
Oracle Version :
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
Original Query:
SELECT
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(SELECT *
   FROM TABLE(
          SELECT CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                      OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
            FROM dual
        )
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:02.90
Execution Plan
Plan hash value: 3970072279
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 1 | HASH JOIN | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 2 | HASH JOIN | | 8168 | 287K| 695 (3)| 00:00:09 |
| 3 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | TG_FILE | 565K| 12M| 659 (2)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TG_FILE_DATA | 852K| 128M| 3863 (1)| 00:00:47 |
Predicate Information (identified by operation id):
1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
Statistics
7 recursive calls
0 db block gets
16783 consistent gets
16779 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
select
index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
from
all_indexes
where table_name in ('TG_FILE','TG_FILE_DATA');
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
TG_FILE_PK 2 2160 552842 21401 552842 285428
TG_FILE_DATA_PK 2 3544 852297 61437 852297 852297
Ideally the view should have used a NESTED LOOP join, to use the indexes, since the number of rows coming from the object collection is only 2.
But the optimizer assumes the default of 8168 rows, leading to a HASH join between the tables and full table access.
So my question is: is there any way by which I can change the statistics while using collections in SQL?
I can use hints to force the indexes, but I am planning to avoid that as of now. Currently the time shown in the explain plan is not accurate.
Modified query with hints :
SELECT
/*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
(SELECT *
   FROM TABLE(
          SELECT CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                      OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
            FROM dual
        )
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1670128954
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 29978 (1)| 00:06:00 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 194 | 29978 (1)| 00:06:00 |
| 3 | NESTED LOOPS | | 8168 | 1363K| 16379 (1)| 00:03:17 |
| 4 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | TG_FILE_DATA | 1 | 158 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TG_FILE_DATA_PK | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | TG_FILE_PK | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | TG_FILE | 1 | 23 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
filter("TF"."FILE_ID"="TFD"."FILE_ID")
Statistics
0 recursive calls
0 db block gets
16 consistent gets
8 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Thanks,
B
Thanks Tubby,
While searching I found that we can use the CARDINALITY hint to set statistics for a TABLE function.
But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting for the first time.
http://www.oracle-developer.net/display.php?id=427
If we go through the document, it mentions the following ways to set statistics:
1) CARDINALITY (undocumented)
2) OPT_ESTIMATE (undocumented)
3) DYNAMIC_SAMPLING (documented)
4) Extensible Optimiser
I tried it out with the different hints and they work as expected,
i.e. CARDINALITY and OPT_ESTIMATE take the value set in the hint,
but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
With CARDINALITY hint
SELECT /*+ cardinality(e, 5) */ *
FROM
TABLE(
SELECT CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 10 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With OPT_ESTIMATE hint
SELECT /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
FROM
TABLE
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Execution Plan
Plan hash value: 4043204977
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 485 | 29 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 485 | 29 (0)| 00:00:01 |
| 2 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With DYNAMIC_SAMPLING hint
SELECT /*+ dynamic_sampling(e, 5) */ *
FROM
TABLE
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 | 11 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 4 | 11 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement (level=2)
I will be testing the last option, "Extensible Optimizer", and will put my findings here.
I hope Oracle, in future releases, improves statistics gathering for collections used in DML, rather than just using a default derived from the block size.
By the way, do you know why it uses the default block size? Is it because it is the smallest granular unit which Oracle provides?
Regards,
B -
Oracle 11g upgrade: How to update stale statistics for sys and sysman?
Hi,
I am in the process of testing Oracle 11g upgrade from Oracle 10.2.0.3. I have run utlu111i.sql on the 10g database.
The utility utlu111i.sql reports about the stale statistics for SYS and SYSMAN components.
I executed dbms_stats.gather_dictionary_stats; dbms_stats.gather_schema_stats('SYS'); and dbms_stats.gather_schema_stats('SYSMAN');
After that, utlu111i.sql still reports stale statistics for SYS and SYSMAN. Does anyone know how to get rid of this warning successfully?
Thanks,
Sreekanth
Does anyone know how to get rid of this warning successfully?
Just ignore the warnings. Check "The Utlu111i.Sql Pre-Upgrade Script Reports Stale Sys Statistics" - Note 803774.1 from Metalink.
-
Create new CBO statistics for the tables
Dear All,
I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take
a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
Regards,
Kumar
Hi,
I am facing a problem when saving/activating any program, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
Now, as you suggested, in transaction DB20:
Table D010LINC
the error comes: Table D010LINC does not exist in the ABAP Dictionary
Table D010TAB
Statistics are current (|Changes| < 50 %)
New Method C
New Sample Size
Old Method C Date 10.03.2010
Old Sample Size Time 07:39:37
Old Number 51,104,357 Deviation Old -> New 0 %
New Number 51,168,679 Deviation New -> Old 0 %
Inserted Rows 160,770 Percentage Too Old 0 %
Changed Rows 0 Percentage Too Old 0 %
Deleted Rows 96,448 Percentage Too New 0 %
Use O
Active Flag P
Analysis Method C
Sample Size
Please suggest
Regards,
Kumar -
Cisco LMS Prime: Device Center does not show Port statistics for Routers
Hello,
I am wondering why the port statistics for routers are not showing in the Device Center Port Status section. Is this normal behaviour?
thanks
Alex
Hi Afroj,
Data Collection as well as Usertracking ran successfully.
regards
Alex -
SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB
We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan- 100% sample size) for this VLDB once a week on the weekend, which
is currently taking up to 30 hours to complete.
Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
Bill Thacker
I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with many more statistics objects associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them; also produce an actual script to delete the useless ones identified) and what
the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that executes the appropriate update statistics commands for each table based on cardinality).
The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
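As a hedged sketch of that idea (dbo.OrderDetail is a placeholder table name, and the 10 percent sample is illustrative; the per-table percentages would come from your own cardinality analysis):

```sql
-- Update one table's statistics with a reduced sample instead of FULLSCAN:
UPDATE STATISTICS dbo.OrderDetail WITH SAMPLE 10 PERCENT;

-- List statistics objects and when they were last updated (SQL 2008 R2):
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM   sys.stats AS s
WHERE  OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER  BY last_updated;
```

The metadata query is only a starting point for generating the per-table update scripts described above.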
Come on SQL Server Community. Show me some love :)
Bill Thacker