Histograms and statistics

Hi
How can I find the list of tables/columns for which histogram statistics exist?
Regards
Den

Hi,
you can use this SQL to see the histogram buckets for a given column:
SELECT endpoint_number, endpoint_value, endpoint_actual_value
  FROM dba_histograms
 WHERE table_name = '?????' AND column_name = '????'
 ORDER BY endpoint_number;
To find out which tables and columns actually have histograms, check the HISTOGRAM column in DBA_TAB_COL_STATISTICS (it is 'NONE' when no histogram exists):
SELECT owner, table_name, column_name, histogram
  FROM dba_tab_col_statistics
 WHERE histogram <> 'NONE';
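For local experimentation, SQLite offers a loose analogue of these dictionary views: after ANALYZE, the sqlite_stat1 table records which tables and indexes have statistics. A minimal sketch (the table and index names here are hypothetical, and SQLite's statistics are far simpler than Oracle's histograms):

```python
# Sketch: after ANALYZE, SQLite's sqlite_stat1 table lists which
# tables/indexes have gathered statistics, loosely analogous to
# querying Oracle's dictionary views. Table/index names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE INDEX orders_status_idx ON orders (status)")
conn.executemany(
    "INSERT INTO orders (status) VALUES (?)",
    [("CLOSED",)] * 95 + [("CANCELLED",)] * 5,
)
conn.execute("ANALYZE")  # gathers statistics into sqlite_stat1

# Which tables/indexes have statistics, and what do they say?
for tbl, idx, stat in conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1"):
    print(tbl, idx, stat)
```

The `stat` column holds the row count followed by the average number of rows per distinct index key, which is the raw material a planner uses for selectivity estimates.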

Similar Messages

  • Performance and Statistics

    Hi,
I have a question regarding database statistics and running SQL queries. The situation is the following: our developers have very small development databases, and poor SQL gets bumped up to production, where the problems become obvious. Giving access to the production database is out of the question, so is it possible to copy the statistics from our production database and insert them into our development environments to simulate real-world run times?
I have spoken to a couple of DBAs and everyone has a different opinion on the matter. Some say that it can't be done, but one mentioned that it could be done but might not work right. He mentioned histograms, and said that the statistics alone could not give what we are looking for. Opinions?
This is part of my co-op term project to catch poor SQL in development prior to having it built into the production code. I will appreciate any help given, and if more info is required then I will do my best to provide the most accurate information.
    Thanks
    Sebastian

In my previous project we used to replicate the production data to non-prod environments, such as the test and dev environments.
First we would replicate to Cert (onsite testing) and NPI the sensitive information.
NPI means either encrypting or scrambling the data so it has no meaning, e.g. SSNs, borrower information, etc.
From Cert we would replicate the data to all the other non-prod environments. We did this every quarter or every 4 months.
In our experience, nothing short of replicating the production data works for performance testing.
Even this will not work for some scenarios; some things will be caught in production only.
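To see concretely why base statistics alone (row count and distinct-value count) can mislead without histograms, here is a minimal sketch with hypothetical numbers; it illustrates the uniform-distribution assumption, not any database's actual costing code:

```python
# A minimal sketch (hypothetical data) of the uniform-distribution
# assumption: with only num_rows and num_distinct, an optimizer must
# estimate num_rows / num_distinct matching rows for any value, which
# is badly wrong for a skewed column. A histogram captures the skew.
from collections import Counter

# Hypothetical production column values: heavily skewed order statuses.
values = (["CLOSED"] * 9500 + ["SHIPPING"] * 300
          + ["IN_PROCESS"] * 190 + ["CANCELLED"] * 10)

num_rows = len(values)            # 10000
num_distinct = len(set(values))   # 4

# Uniform estimate for an equality predicate on any single status:
uniform_estimate = num_rows / num_distinct   # 2500.0 for every status

# What a frequency histogram would record instead:
actual = Counter(values)

print(f"uniform estimate per status: {uniform_estimate:.0f}")
print(f"actual CANCELLED rows: {actual['CANCELLED']}")
print(f"actual CLOSED rows: {actual['CLOSED']}")
```

With the uniform estimate, a filter on CANCELLED looks 250 times less selective than it really is, which is exactly the kind of plan difference that copied base statistics alone cannot reproduce.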

  • Histogram and saturation in Photoshop

1. In the Photoshop Histogram panel, does the mean value represent the average color intensity of the selected areas and layers for the chosen color channel?
2. In the Histogram panel, does the Luminosity channel's mean represent the average brightness of the image?
Does that mean value represent the average brightness on a scale from 0 to 255?
3. Is there a method, menu, or panel I should use to obtain the current saturation value of an image?
I would appreciate a response.

    Not sure I follow all the details you mention.
    Hope this helps:
    Photoshop Help | Viewing histograms and pixel values

  • Performance Problems - Index and Statistics

    Dear Gurus,
I am having problems losing indexes and statistics on cubes. It seems my indexes are flagged as too old, which in fact they are not: they were created just a month back, and we check the indexes daily, yet the Manage tab returns RED.
    please help

    Dear Mr Syed ,
The solution steps I mentioned in my previous reply already describe the so-called RE-ORG of tables; however, to clarify more on that issue:
Occasionally, the Oracle <b>Cost-Based Optimizer</b> may calculate lower estimated costs for a Full Table Scan than for an Index Scan, although the actual runtime of an access via the index would be considerably lower than the runtime of the Full Table Scan. Some points are imperative to consider in order to improve performance in problem areas such as long running times for change runs and aggregate activation and fill-ups.
Performance problems based on a wrong optimizer decision would show that something serious is missing at the database level, and we need to RE-ORG the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost identical indexes.
    For <b>Re-organizing</b> degenerated indexes 3 options are available-
    <b>1) DROP INDEX ..., and CREATE INDEX …</b>
    <b>2)ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)</b>
    <b>3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]</b>
Each option has its pros & cons; option <b>2</b> seems to have the most advantages.
<b>Advantages of option 2</b>
1) Fast; storage in a different tablespace is possible
    2)Creates a new index tree
    3)Gives the option to change storage parameters without deleting the index
    4)As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
I would still leave it to the database tech team, who are the best to judge and take a call on these.
This modus operandi could be institutionalized for all affected cubes & their indexes as well.
    However,I leave the thoughts with you.
    Hope it Helps
    Chetan
    @CP..

Which Event Classes should I use for finding good indexes and statistics for queries in an SP?

    Dear all,
I am trying to use Profiler to create a trace, so that it can be used as a workload in
the "Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me about the event classes which I should use in the trace.
The stored procedure contains three insert queries which insert data into a table variable.
Finally, a select query is used on the same table variable, with one UNION of the same table variable, to generate a sequence for the records based on certain conditions on a few columns.
There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
1) There is only one table, with three inserts into a table variable and a final sequence-creation block.
2) There are 15 tables with 45 inserts into a table variable and a final sequence-creation block.
3) There are 3 tables with 9 inserts into a table variable and a final sequence-creation block.
In all the above cases the number of records will be around 5 lakh (500,000).
The purpose is optimization of the queries in the SP,
i.e. which event classes I should use for finding good indexes and statistics for the queries in the SP.
yours sincerely

    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tel me about the Event classes which i  should use in trace.
    You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA.  See
    http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
If you are capturing the workload of a production server, I suggest you not do that directly from Profiler, as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace definition (File-->Export-->Script Trace Definition). You can then customize the script (e.g. the file name) and run the script against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload is captured with sp_trace_setstatus:
DECLARE @TraceID int = <trace id returned by the trace create script>
EXEC sp_trace_setstatus @TraceID, 0; --stop trace
EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
Very simply: outdated statistics lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, though it holds true for nonclustered as well), the time spent finding a value will be greater, as the query has to search the fragmented index to look for the data; the additional space increases search time.
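As a rough illustration of the fragmentation point, here is a toy page-count model (the rows-per-page and fill figures are hypothetical, not SQL Server's actual storage math):

```python
# A minimal sketch (hypothetical numbers) of why index fragmentation
# increases I/O: with internal fragmentation, leaf pages are only
# partially full, so the same rows occupy more pages and a range scan
# has to read more of them.
import math

rows_to_scan = 100_000
rows_per_full_page = 200        # hypothetical rows per leaf page

def pages_read(avg_page_fullness: float) -> int:
    """Leaf pages a range scan must read at a given average fullness."""
    rows_per_page = rows_per_full_page * avg_page_fullness
    return math.ceil(rows_to_scan / rows_per_page)

healthy = pages_read(0.95)      # well-maintained index
fragmented = pages_read(0.50)   # heavily fragmented index

print(f"pages read, healthy index: {healthy}")
print(f"pages read, fragmented index: {fragmented}")
```

The same 100,000 rows cost nearly twice the logical reads once the pages are half empty, which is the extra search time the answer above describes.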

  • CRM workload and statistics tables (ST03N)

    Hi All,
          can anyone please give the name of the table used for storing the CRM workload and statistics data (Tx: ST03N).
    Thanks in advance

    here you go...
SAPBWPRFLS                     BW Workload: Relation of Workload Profile and Class Names
SAPWLDMTHD                     SAP Workload: Dynamically-Called Method for Events
SAPWLROLED                     SAP Workload: Storage of User-Specific ST03N Role
SAPWLROLES                     SAP Workload: Table for User-Specific Parameters
SAPWLTREEG                     SAP Workload: Nodes for ST03G Tree Controls
SAPWLTREET                     SAP Workload: Text Field for ST03G Trees
SGLWLCUST1                     User Parameters for Workload Transactions
SGLWLPRFLS                     SAP Workload: Profile Codes and Class Names
SGLWLSLIST                     SAP Workload: User-Specific System List for Global Analysis
    Julius

  • Logs and Statistics Page in Airport

    What does "rotated ccmp group key" mean on Airport routers? What do other saying mean on the logs and statistics page? Thanks helps alot

    Your signal and noise readings are excellent; these are numbers in decibel (logarithmic) scale, with 0 (zero) being the maximum; the signal levels reported indicate how much below the theoretical maximum signal strength you are receiving. As a point of comparison, as I type this right now I'm getting a signal of -45 db and a noise of -9 db. Rate, I believe, is in megabits per second (I'm showing 117 at the moment, though it often goes up to 130 or so).

How to implement result match percentage and statistics on # of visits of docs

Can someone please suggest how to implement a result match percentage and statistics on the number of visits per document?
Can this be achieved without using Ultra Search?
For example, by entering 'merge', I would like to see a
result such as
"Party Merge High Level Design", 100%, # of hits: 125.


  • Siebel and parameters and statistics

    Hi,
I work with SIEBEL 8 on a 10gR2 database, and I want to know the SIEBEL recommendations about specific parameters and statistics.
    Thanks for your help

    Read Technical Note 582 for general information.
    Additionally, this technical note also summarizes/compares 9i and 10g server initialization parameters. Be aware that some of the settings for 9i and 10g are completely different.
    Install/Upgrade/Manage
    http://download.oracle.com/docs/cd/B40099_02/books/UPG/UPG_Prep_Oracle3.html
    HTH
    -Anantha

  • LIFO Valuation for two financial books Tax and Statistics

    Dear Sir,
Please let me know if my following requirement is possible in SAP or not.
We would like to generate LIFO valuation for two financial books, Tax and Stat:
the Tax book runs from April to March, and
the Stat book runs from August to July.
We are able to execute LIFO valuation only for the Stat book, as our financial books run from August to July.
Also, in the LIFO configuration there is no setting for two fiscal year periods.
Please let me know whether our requirement is feasible or not.
    Regards,
    Sandeep

    Guys,
Can anyone throw some light on my query?
    Regards,
    Sandeep parab

Gnome-Power-Manager doesn't save history and statistics

I notice that g-p-m doesn't save history and statistics across sessions:
when I reboot the PC I lose all the information and the graphs start from 0.
I checked every configuration key in gconf-editor, but nothing helped.
There is the section /apps/gnome-power-manager/info/, but there isn't a schema for any configuration, so I don't know what test I could run.
And I can't find the folder ~/.gnome2/gnome-power-manager on my Arch installations.
    I found this bug with Ubuntu:
    https://bugs.launchpad.net/ubuntu/+sour … bug/302570
    So I run lshal | grep battery to have information about my asus eeepc battery.
    This is the result:
    [root@e3pc luca]# lshal | grep battery
    udi = '/org/freedesktop/Hal/devices/computer_power_supply_battery_BAT0'
      battery.charge_level.current = 20871  (0x5187)  (int)
      battery.charge_level.design = 51034  (0xc75a)  (int)
      battery.charge_level.last_full = 50638  (0xc5ce)  (int)
      battery.charge_level.percentage = 41  (0x29)  (int)
      battery.charge_level.rate = 16993  (0x4261)  (int)
      battery.is_rechargeable = true  (bool)
      battery.model = '901'  (string)
      battery.present = true  (bool)
      battery.rechargeable.is_charging = true  (bool)
      battery.rechargeable.is_discharging = false  (bool)
      battery.remaining_time = 6306  (0x18a2)  (int)
      battery.reporting.current = 2691  (0xa83)  (int)
      battery.reporting.design = 6580  (0x19b4)  (int)
      battery.reporting.last_full = 6529  (0x1981)  (int)
      battery.reporting.rate = 2191  (0x88f)  (int)
      battery.reporting.technology = 'Li-ion'  (string)
      battery.reporting.unit = 'mAh'  (string)
      battery.serial = ''  (string)
      battery.technology = 'lithium-ion'  (string)
      battery.type = 'primary'  (string)
      battery.vendor = 'ASUS'  (string)
      battery.voltage.current = 7756  (0x1e4c)  (int)
      battery.voltage.design = 8400  (0x20d0)  (int)
      battery.voltage.unit = 'mV'  (string)
      info.capabilities = {'battery'} (string list)
      info.category = 'battery'  (string)
      info.udi = '/org/freedesktop/Hal/devices/computer_power_supply_battery_BAT0'  (string)
The Eee PC battery has no serial. OK...
But on my HP 6735s the battery serial is there when I run "lshal | grep battery",
and it has the same problem.
The version of g-p-m is 2.26.2-1 on both of them.
I had a lot of difficulty finding documentation about this GNOME daemon!
Has anybody found a solution?
    Thank you all!

    up

  • Question about histograms and indexes

    I read that if a histogram is generated for a column and that column has an index then if the where clause contains a value that has a high cardinality the CBO will skip using the index. The article was with reference to the benefits of histograms.
    My question is: Why would the CBO skip using the index? Why not use it anyways? Is it because there is a cost associated with loading and using the index itself?
    Would appreciate some clarification on this, thanks!!!

First, the article in question doesn't say to create histograms on columns with high cardinality values. A primary key column is going to be the ultimate in high-cardinality columns (each value is unique, after all) but it's rarely appropriate to create a histogram on that column. It does say that histograms are generally useful when the data in a particular column is highly skewed; that is, different values occur at wildly different rates.
    If you have a table of orders with an ORDER_STATUS column, for example, 95% of your orders may be CLOSED, 3% may be SHIPPING, and 1.9% may be IN PROCESS and 0.1% may be CANCELLED. Without a histogram, Oracle would take a look at that column and see that there were 4 distinct values, so it would assume an equal distribution across the statuses. Which would cause it to favor a full scan on the table even if you were looking just for the CANCELLED orders. With a histogram, the optimizer would favor an index on ORDER_STATUS for the cancelled query while still favoring a table scan if you're looking for closed orders.
    Gathering unnecessary histograms will make statistics collection take longer, which can cause issues with SLAs. It can also force you to gather statistics more frequently/ cause statistics to get out of date more quickly if you have monotonically increasing values in a column. If you have a CREATE_DATE column, for example, and gather a histogram, values that are greater than the max value at the time the histogram was gathered might be incorrectly estimated to be too infrequent, which can cause problems. If Oracle thinks that 1/6th of the rows are from Jan, 1/6 from Feb, etc. through June and you start looking for values from July because you haven't gathered statistics in a month, the CBO's estimates are going to be off. Unnecessary histograms also cause Oracle to spend more time parsing queries, potentially with no better results. And it can make troubleshooting a bit more difficult because, depending on the version and various optimizer settings, there may be multiple query plans for the same statement or different query plans depending on the particular bind variable that is first passed in.
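The monotonically increasing CREATE_DATE problem can be sketched with a toy estimator; the numbers and the zero-outside-the-range simplification are hypothetical (real optimizers decay the estimate rather than dropping straight to zero), but the direction of the error is the same:

```python
# A minimal sketch (hypothetical numbers) of the stale-statistics
# problem described above: statistics were gathered when CREATE_DATE
# ran from January through June, so a query for July falls outside the
# known value range and gets a far-too-low row estimate.
from datetime import date

num_rows = 600_000
stats_low = date(2009, 1, 1).toordinal()    # low value captured in stats
stats_high = date(2009, 6, 30).toordinal()  # high value captured in stats

def estimated_rows(query_day: date) -> float:
    """Uniform-range estimate for an equality predicate on one day.

    Inside the known [low, high] range, assume rows are spread evenly
    across the days; outside it, this toy model estimates zero rows.
    """
    d = query_day.toordinal()
    days_in_range = stats_high - stats_low + 1
    if stats_low <= d <= stats_high:
        return num_rows / days_in_range
    return 0.0

in_range = estimated_rows(date(2009, 3, 15))      # reasonable estimate
out_of_range = estimated_rows(date(2009, 7, 10))  # estimated as empty

print(f"estimate for March 15: {in_range:.0f}")
print(f"estimate for July 10: {out_of_range:.0f}")
```

A predicate on July rows is estimated as returning almost nothing even though it may match thousands of freshly inserted rows, which is why such columns need frequent regathering once a histogram exists.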
    Justin

  • 10.2.0.4 CBO behavior without histograms and binds/literals

    Hello,
I have a question about the CBO and the collected statistic values LOW_VALUE and HIGH_VALUE. I have seen the following on an Oracle 10.2.0.4 database.
The CBO decides on a different execution plan if we use bind variables (without bind peeking) instead of literals; no histograms exist on the table columns.
Unfortunately I didn't export the statistics to reproduce this behaviour on my test database, but it was "something" like this.
    Environment:
    - Oracle 10g 10.2.0.4
    - Bind peeking disabled (_optim_peek_user_binds=FALSE)
    - No histograms
    - No partitioned table/indexes
The table (TAB) has 2 indexes on it:
- One index (INDEX A1) includes the date (which was a NUMBER column); the values in this column range from 0 (LOW_VALUE) up to 99991231000000 (HIGH_VALUE).
- One index (INDEX A2) includes the article number, which is very selective (distinct keys nearly the same as num rows).
Now the query looks something like this:
SELECT * FROM TAB WHERE DATE BETWEEN :DATE1 AND :DATE2 AND ARTICLENR = :ARTNR;
The CBO calculated that the best execution plan would be an index range scan on both indexes with a btree-to-bitmap conversion: compare the row-ids returned by both indexes and then access table TAB with the result.
What the CBO didn't know (because of the disabled bind peeking) was that the user had entered DATE1 = 0 and DATE2 = 99991231000000, so the index access on index A1 doesn't make any sense.
Next I executed the query with literals just for the DATE, so the query looks something like this:
SELECT * FROM TAB WHERE DATE BETWEEN 0 AND 99991231000000 AND ARTICLENR = :ARTNR;
This time the CBO did the right thing: it just accessed index A2, which was very selective, and then accessed table TAB by ROWID.
The query was much faster (by a factor of 4 to 5) and the user was happy.
As I already mentioned, there were no histograms, so I was very amazed that the execution plan changed because of using literals.
Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
Until now I thought that these values would only be used in case of histograms.
    Thanks and Regards

oraS wrote:
As I already mentioned, there were no histograms, so I was very amazed that the execution plan changed because of using literals.
Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
Until now I thought that these values would only be used in case of histograms.
I don't have any references in front of me to confirm, but my estimation is that LOW_VALUE and HIGH_VALUE are used whenever there is a range-based predicate, be it BETWEEN or any one of the <, >, <=, >= operators. Generally speaking, the selectivity formula is the range defined in the query divided by the HIGH_VALUE/LOW_VALUE range. There are some specific variations of this due to inclusion of the boundaries (<= vs <) and NULL values. This makes sense to use when the literal values are known or the binds are being peeked at.
However, when bind peeking is disabled, Oracle has no way to use the general formula above for an estimate of the rows, so it most likely uses the 5% rule. Since your query has a BETWEEN clause, the estimated selectivity becomes 5% * 5%, which equals 0.0025. This estimated cardinality could be what made the CBO decide to use the index path versus ignoring it completely.
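The arithmetic of that 5% default is easy to check; the table size here is a hypothetical figure for illustration:

```python
# Checking the default-selectivity arithmetic described above.
# num_rows is a hypothetical table size for illustration.
num_rows = 1_000_000

# With bind peeking disabled, each bounded range predicate gets a
# default 5% selectivity guess; a BETWEEN counts as two predicates.
default_selectivity = 0.05
between_selectivity = default_selectivity * default_selectivity  # 5% * 5%

estimated_cardinality = num_rows * between_selectivity

print(f"BETWEEN selectivity: {between_selectivity:.4f}")
print(f"estimated rows: {estimated_cardinality:.0f}")
```

An estimate of only a quarter of a percent of the table is small enough to make an index range scan look attractive, even on a column like the DATE above where the actual predicate matches everything.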
    If you can post some sample data to reproduce this test case we can confirm.
    Just a follow-up question. Why is a date being stored as a number?
    HTH!

  • I've lost the histogram - and other adjustment woes

I filed a rather long description of this with Aperture Feedback, but as I suspect that is a "write-only" channel, I thought I'd ask here (and I did review a bunch of posts, with no luck).
    I have a brand spanking new MacPro quad, with 2 NVIDIA GeForce 7300 GT video cards (I mention this only because of Aperture's use of GPUs).
    In summary, I imported a project that works fine on my MacBook Pro, but on my MacPro the histogram in the adjustments HUD is gone. When I try to make an adjustment, a HUGE black triangle appears where the preview image was, the histogram magically appears, I make the adjustment (the slider is not smooth), a full second passes and the preview is updated with the adjustment -- and, of course, the histogram disappears. When the system is feeling particularly nasty, the preview comes up completely black after the adjustment (although the thumbnail appears to be updated properly).
    Suffice it to say I've tried everything but yank out the RAM and the GPUs... I've seen this on some images in other projects. I created a new Aperture library and loaded some images (from different cameras) -- all exhibit this behavior. I've stopped and restarted Aperture. I've shutdown and restarted the MacPro. I've reverted my profiles for the displays to the Apple defaults. I've started considering moving to Lightroom...
    Aperture is such a wonderful program when it works... I've had a bunch of minor issues with the program over the past year I've been using it - but this is insane.
    If anyone can provide some clues as to how to fix this I'd appreciate it.
    Thanks,
    - dave
    Note: A small update to my post. I thought the histogram had disappeared, but upon closer inspection it is there -- but it reads as if it were a completely black image (e.g., all of the values are in the first column). Another clue...
    Message was edited by: davebets

