DB6 and Statistics

Does anyone know if the "Check Statistics" button does anything when using a DB6 database?  It always leaves the traffic light as yellow.

Hi,
Yellow status is OK; it just means some database statistics may be missing. If you want it to turn green, try running DBSTATS on the backend database and then update the statistics in the cube.
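Out of interest, on DB6 (DB2 for LUW) the statistics refresh mentioned above can also be triggered directly with RUNSTATS; a minimal sketch, assuming a hypothetical fact-table name:

```sql
-- Refresh table and index statistics for an InfoCube fact table
-- (schema and table name are placeholders for your system; run from a DB2 CLP session):
RUNSTATS ON TABLE SAPR3."/BIC/FMYCUBE"
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```

Afterwards, re-run "Check Statistics" on the cube to see whether the light turns green.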
Regards
CSM Reddy

Similar Messages

  • Performance Problems - Index and Statistics

    Dear Gurus,
I am having problems losing indexes and statistics on cubes. The indexes appear to be too old, although they are in fact not: they were created just a month ago. We check indexes daily, and the Manage tab returns RED.
    please help

    Dear Mr Syed ,
The solution steps I mentioned in my previous reply already describe the so-called re-org of tables; however, to clarify the issue further:
Occasionally, the ORACLE Cost-Based Optimizer may calculate the estimated cost of a Full Table Scan as lower than that of an Index Scan, although the actual runtime of an access via the index would be considerably lower than the runtime of the Full Table Scan. Some important points should be considered in order to improve performance in problem areas such as long running times for change runs and aggregate activation and fill-ups.
Performance problems based on a wrong optimizer decision indicate that something serious is missing at the database level, and we need to re-org the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost identical indexes.
For re-organizing degenerated indexes, three options are available:
1) DROP INDEX ..., then CREATE INDEX ...
2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
Each option has its pros and cons; option 2 seems to have the most advantages.
Advantages of option 2:
1) Fast storage in a different tablespace is possible
2) Creates a new index tree
3) Gives the option to change storage parameters without deleting the index
4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
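The rebuild variant (option 2) could be sketched as follows; the index name here is hypothetical, not taken from any system:

```sql
-- Rebuild a degenerated cube index online, in parallel, without redo logging:
ALTER INDEX "SAPR3"."/BIC/FMYCUBE~0" REBUILD ONLINE PARALLEL 4 NOLOGGING;
-- Reset the parallel degree afterwards so normal queries are not affected:
ALTER INDEX "SAPR3"."/BIC/FMYCUBE~0" NOPARALLEL;
```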
I would still let the database tech team be the judge and take the final call on these.
This procedure could be institutionalized for all affected cubes and their indexes as well.
However, I leave the thoughts with you.
    Hope it Helps
    Chetan
    @CP..

  • Which event classes should I use for finding good indexes and statistics for queries in an SP

    Dear all,
    I am trying to use Profiler to create a trace so that it can be used as a workload in the
    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tell me which event classes I should use in the trace.
    The stored proc contains three insert queries which insert data into a table variable.
    Finally, a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
    There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
    1) There is only one table, with three inserts into a table variable and a final sequence-creation block.
    2) There are 15 tables, with 45 inserts into a table variable and a final sequence-creation block.
    3) There are 3 tables, with 9 inserts into a table variable and a final sequence-creation block.
    In all the above cases the number of records will be around 5 lakhs (500,000).
    The purpose is optimization of the queries in the SP, i.e. which event classes I should use for finding good indexes and statistics for the queries in the SP.
    Yours sincerely

    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tell me which event classes I should use in the trace.
    You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA.  See
    http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
    If you are capturing the workload of a production server, I suggest you not do that directly from Profiler as that can impact server performance.  Instead, start/stop the Profiler Tuning template against a test server and then script the trace
    definition (File-->Export-->Script Trace Definition).  You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file.  Stop and remove the trace after the workload
    is captured with sp_trace_setstatus:
    DECLARE @TraceID int = <trace id returned by the trace create script>
    EXEC sp_trace_setstatus @TraceID, 0; --stop trace
    EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for non-clustered indexes as well), the time spent finding a value will be greater, because the query has to search the fragmented index to look for the data, and the additional space increases search time.
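    To make this concrete, on SQL Server 2005 and later the fragmentation check and the fix can be sketched like this (the table and index names are hypothetical, not from this thread):

    ```sql
    -- Inspect average fragmentation of all indexes on a table:
    SELECT i.name, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                        NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    -- Common rule of thumb: REORGANIZE for ~5-30% fragmentation, REBUILD above ~30%:
    ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;

    -- Refresh the statistics the optimizer relies on:
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    ```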

  • CRM workload and statistics tables (ST03N)

    Hi All,
          can anyone please give the name of the table used for storing the CRM workload and statistics data (Tx: ST03N).
    Thanks in advance

    here you go...
    SAPBWPRFLS     BW Workload: Relation of Workload Profile and Class Names
    SAPWLDMTHD     SAP Workload: Dynamically-Called Method for Events
    SAPWLROLED     SAP Workload: Storage of User-Specific ST03N Role
    SAPWLROLES     SAP Workload: Table for User-Specific Parameters
    SAPWLTREEG     SAP Workload: Nodes for ST03G Tree Controls
    SAPWLTREET     SAP Workload: Text Field for ST03G Trees
    SGLWLCUST1     User Parameters for Workload Transactions
    SGLWLPRFLS     SAP Workload: Profile Codes and Class Names
    SGLWLSLIST     SAP Workload: User-Specific System List for Global Analysis
    Julius

  • Logs and Statistics Page in Airport

    What does "rotated CCMP group key" mean on AirPort routers? What do the other entries on the Logs and Statistics page mean? Thanks, helps a lot.

    Your signal and noise readings are excellent; these are numbers on a decibel (logarithmic) scale, with 0 (zero) being the maximum; the signal levels reported indicate how far below the theoretical maximum signal strength you are receiving. As a point of comparison, as I type this right now I'm getting a signal of -45 dB and a noise of -9 dB. Rate, I believe, is in megabits per second (I'm showing 117 at the moment, though it often goes up to 130 or so).

  • How to implement result match percentage and statistics on number of visits of docs

    Can someone please suggest how to implement a result match percentage and statistics on the number of visits of docs?
    Can this be achieved without using Ultra Search?
    For example, by entering 'merge', I would like to see a result such as
    "Party Merge High Level Design", 100%, # of hits: 125.


  • Siebel and parameters and statistics

    Hi,
    I work on SIEBEL 8 with a 10gR2 database, and I want to know the SIEBEL recommendations about specific parameters and statistics.
    Thanks for your help

    Read Technical Note 582 for general information.
    Additionally, this technical note also summarizes/compares 9i and 10g server initialization parameters. Be aware that some of the settings for 9i and 10g are completely different.
    Install/Upgrade/Manage
    http://download.oracle.com/docs/cd/B40099_02/books/UPG/UPG_Prep_Oracle3.html
    HTH
    -Anantha

  • LIFO Valuation for two financial books Tax and Statistics

    Dear Sir,
    Please let me know if my following requirement is possible in SAP or not.
    We would like to generate LIFO valuation for two financial books Tax and Statistics
    Tax book begins from April to March and
    Stat book begins from August to July.
    We are able to execute LIFO valuation only for the Statistics book, as our financial books run from Aug to July.
    Also, in the LIFO configuration there is no setting for two fiscal year periods.
    Please let me know whether our requirement is feasible or not.
    Regards,
    Sandeep

    Guys,
    Can anyone throw some light on my query?
    Regards,
    Sandeep Parab

  • Gnome-Power-Manager doesn't save history and statistics

    I noticed that g-p-m doesn't save history and statistics across sessions:
    when I reboot the PC I lose all the information and the graphs start from 0.
    I checked every setting in gconf-editor, but nothing helped.
    There is the section /apps/gnome-power-manager/info/, but there isn't a schema for any configuration, so I don't know what test I could make.
    And I can't find the folder ~/.gnome2/gnome-power-manager on my Arch installations.
    I found this bug with Ubuntu:
    https://bugs.launchpad.net/ubuntu/+sour … bug/302570
    So I run lshal | grep battery to have information about my asus eeepc battery.
    This is the result:
    [root@e3pc luca]# lshal | grep battery
    udi = '/org/freedesktop/Hal/devices/computer_power_supply_battery_BAT0'
      battery.charge_level.current = 20871  (0x5187)  (int)
      battery.charge_level.design = 51034  (0xc75a)  (int)
      battery.charge_level.last_full = 50638  (0xc5ce)  (int)
      battery.charge_level.percentage = 41  (0x29)  (int)
      battery.charge_level.rate = 16993  (0x4261)  (int)
      battery.is_rechargeable = true  (bool)
      battery.model = '901'  (string)
      battery.present = true  (bool)
      battery.rechargeable.is_charging = true  (bool)
      battery.rechargeable.is_discharging = false  (bool)
      battery.remaining_time = 6306  (0x18a2)  (int)
      battery.reporting.current = 2691  (0xa83)  (int)
      battery.reporting.design = 6580  (0x19b4)  (int)
      battery.reporting.last_full = 6529  (0x1981)  (int)
      battery.reporting.rate = 2191  (0x88f)  (int)
      battery.reporting.technology = 'Li-ion'  (string)
      battery.reporting.unit = 'mAh'  (string)
      battery.serial = ''  (string)
      battery.technology = 'lithium-ion'  (string)
      battery.type = 'primary'  (string)
      battery.vendor = 'ASUS'  (string)
      battery.voltage.current = 7756  (0x1e4c)  (int)
      battery.voltage.design = 8400  (0x20d0)  (int)
      battery.voltage.unit = 'mV'  (string)
      info.capabilities = {'battery'} (string list)
      info.category = 'battery'  (string)
      info.udi = '/org/freedesktop/Hal/devices/computer_power_supply_battery_BAT0'  (string)
    The Eee PC battery has no serial number. OK...
    But on my HP 6735s the battery serial is there when I run "lshal | grep battery",
    and it has the same problem.
    The version of g-p-m is 2.26.2-1 on both of them.
    I had a lot of difficulty finding documentation about this GNOME daemon!
    Has anybody found a solution?
    Thank you all!


  • SAP XI BPM Performance and statistics

    Hello all,
    I am currently working on an integration using the BPM process within XI.  During our initial testing, we noticed that, when it comes to the BPM process, XI takes too long to process a message.  The actual size of the payload is really small and there is not much to the BPM process.
    For example, I create one BPM process that takes a message and splits the message into separate documents for each target application and pass it on to another BPM process.  The second process takes the message and changes its format to what is expected by the receiving application.  It calls the target system and waits for a response.
    Sender   Receiver   Begin     End       Duration (s)
    SAPPRG   Split      11:29:46  11:31:01  74.810089
    Split    Route      11:35:11  11:35:17   6.312764
    Split    Route      11:35:28  11:35:40  12.021294
    Route    DW         11:35:49  11:35:59  10.715620
    Route    MPR        11:35:50  11:36:00   9.579952
    DW       Route      11:35:59  11:35:59   0.503430
    MPR      Route      11:35:59  11:36:00   0.485403
    Total               0:06:14             1.907142533
    We also noticed some gaps between a message being sent and the receive steps within XI. I would like to know if there are any statistics on XI performance when it comes to processing messages within the BPM process, and/or some documentation on how to improve performance when using BPM.
    Please, advise
    Thanks..
    -OV-

    Hi,
    Checklist to use BPM-
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3bf550d4-0201-0010-b2ae-8569d193124e
    Also refer this Performance tuning guide-
    https://websmp106.sap-ag.de/~sapidb/011000358700000592892005E.PDF
    If you are using correlations, try to use a local correlation so that you can avoid extending the scope of the correlation to the entire process (which is the default). You can also use a Block step to group related steps and limit the scope of those steps to the block.
    Hope this helps,
    Regards,
    Moorthy

  • Dyanamic sampling and statistics collection

    Hi all,
    I'm a newbie and I have a question; please correct me if I'm wrong.
    The documentation I studied says that dynamic sampling collects statistics on objects; I'm not sure whether this happens during query execution or in an automatic job.
    I also read that starting from 10g, Oracle collects stats for objects in a job (using the DBMS_STATS package); the job is scheduled for new or stale objects, running between 10 pm and 6 am.
    If Oracle already runs a job for collecting stats, what is dynamic sampling good for?
    Please fill me in. Can somebody also explain the query optimizer components?
    Thanks in advance.
    Dev

    Assume stats are collected every day at 02:00. Beginning at 08:00 users start making changes. By 11:45 the stats may not bear a close relationship to what was collected the previous morning.
    I thought the explanation in the docs was clear. What doc are you reading (please post the link).
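    The distinction can be illustrated roughly like this (schema, table and sampling level below are example values only, not from the thread):

    ```sql
    -- What the 10g automatic job does, in essence: gather and store stats.
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');

    -- Dynamic sampling instead happens at parse time, per query, and helps
    -- when stored stats are missing or stale (level 4 is just an example, valid range 0-10):
    SELECT /*+ DYNAMIC_SAMPLING(e 4) */ *
    FROM emp e
    WHERE hiredate > SYSDATE - 1;
    ```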

  • FTP and statistics

    I asked this question before but didn't get a clear answer. I just want to ask about this sentence, which I remember reading in an article whose source I have really forgotten:
    "FTP upload puts the entire site into a separate folder, and this may affect statistics"
    Does it mean that RANKING is the issue? How is that, then?
    Thanks.

    When you use iWeb's FTP, only the changed pages are uploaded.
    Search engines are capable of seeing the changes.
    I make a lot of changes to my site. My current ranking is somewhere between 3,500,000 and 4,000,000.
    I hardly notice a change.

  • Hints and statistics

    Is it feasible for a shop that is heavily dependent on hints to also gather periodic statistics on tables and indexes? The CBO is heavily dependent on stats; doesn't it make sense that they should be periodically refreshed, even in a shop that cooks its plans with hints?
    Any helpful input will be appreciated

    > Without any refreshed stats, the optimizer would be just as effective as the hints.
    I've read that sentence several times and I still don't understand it.
    If a table/column/partition etc is missing stats then from 10g onwards the optimizer will use dynamic sampling. This can be very effective although it probably suits batch systems better than OLTP since a decent amount of dynamic sampling takes a little time. Since it is not usually as thorough as a full analyze job, you can also miss skewed data values and thus be unlucky with the plan.
    If the stats are present but inaccurate then you could again be unlucky with the execution plan if they are significantly out on something that affects the optimizer's choice of index, join order etc.
    If you are relying entirely on hints, then they had better be complete enough so that they do not leave the optimizer any room to choose a different plan that honours the hints but is not quite what you meant. You may think you are getting plan stability, but plans can still change when (if there are no stats) dynamic sampling discovers changed data volumes, selectivity etc, or (if you have old stats you don't update) time moves on and your queries use dates that are further and further away from the values described in the old stats (ask me how I know). I'm sure there are other ways to hit this type of issue. Another is to upgrade the database and find that your hints relied on odd bits of optimizer behaviour that have now changed.
    A general problem with hints IMHO is that the optimizer has more possible approaches to any complex query than you or I can probably think of, and often the plan it comes up with is one you didn't expect but that actually works well. To produce that same plan explicitly and reliably using hints would require the same knowledge of optimizer rewrites, transformations, complex view merging, predicate pushing, hash join ordering, parallel execution and so on as the optimizer itself. It is therefore better to use hints to correct cases where the optimizer gets things wrong, and better still to find the root cause within the stats.

  • Performance and Statistics

    Hi,
    I have a question regarding database statistics and running SQL queries. The situation is the following: our developers have very small development databases, and poor SQL gets promoted to production, where the problems become obvious. Giving access to the production database is out of the question, so is it possible to copy the statistics from our production database and insert them into our development environments to simulate real-world run times?
    I have spoken to a couple of DBAs and everyone has a different opinion on the matter. Some say it can't be done, but one mentioned that it could be done, though it might not work right. He mentioned histograms, and said that the statistics alone could not give us what we are looking for. Opinions?
    This is part of my co-op term project: catching poor SQL in development before it is built into the production code. I will appreciate any help given, and if more info is required I will do my best to provide the most accurate information.
    Thanks
    Sebastian

    In my previous project we used to replicate the production data to non-prod environments such as test and dev.
    First we replicated to Cert (onsite testing) and NPI'd the sensitive information.
    NPI means either encrypting or scrambling data that must not be exposed, such as SSNs, borrower information, etc.
    From Cert we replicated the data to all the other non-prod environments. We did this every quarter or 4 months.
    Apart from replicating the production data, nothing else really works for performance testing.
    Even this will not work for some scenarios; some things will be caught only in production.
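    For what it's worth, Oracle's DBMS_STATS package does support transporting statistics between databases, which is one way to attempt what the original post asks about (the schema and staging-table names below are placeholders):

    ```sql
    -- On production: stage the schema statistics into an ordinary table,
    -- then move that table to dev (export/import, data pump, DB link, ...):
    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APPUSER', stattab => 'PROD_STATS');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');

    -- On development: load the staged statistics into the data dictionary:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');
    ```

    This copies optimizer statistics (including histograms), not data, so plans should resemble production's; actual run times on a small dev box will of course still differ.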
