Gathering statistics without histograms

DB version: 10gR2
Metalink's workaround for an ORA-600 bug is: gather stats without histograms.
In 10gR2, how can I gather stats for a table without histograms being created? I know this can be done using the METHOD_OPT input parameter of the dbms_stats.gather_table_stats procedure, but I don't know how.

Like this (SIZE 1 means a single bucket per column, i.e. no histogram):
exec dbms_stats.gather_table_stats(user,tabname=>'MYTABLE',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,cascade=>TRUE);
(I've added estimate_percent and cascade as well)
Hemant K Chitale
http://hemantoracledba.blogspot.com
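
A quick way to verify the result (a minimal sketch; MYTABLE as in the example above): with SIZE 1 each column gets a single bucket, so the data dictionary should report no histograms.

SELECT column_name, histogram
FROM   user_tab_columns
WHERE  table_name = 'MYTABLE';
-- Expect HISTOGRAM = 'NONE' for every column after the gather.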

Similar Messages

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
    "Is this the right approach to generate object stats for a schema and keep it up to date?"
    The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned analyze table and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
    I spoke with Andrew Holsworth of Oracle Corp's SQL Tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, only re-analyzing when there is a change that would make a difference in execution plans (not the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
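    For the weekly schema job, a minimal sketch (the schema name APPS_OWNER is hypothetical). Note that cascade => TRUE answers the index question: it gathers index stats along with the table stats.

    exec dbms_stats.gather_schema_stats(ownname => 'APPS_OWNER', estimate_percent => dbms_stats.auto_sample_size, cascade => TRUE, options => 'GATHER STALE');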
    As to system stats, oh yes!
    By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
    No Workload (NW) stats:
    CPUSPEEDNW - CPU speed
    IOSEEKTIM - The I/O seek time in milliseconds
    IOTFRSPEED - I/O transfer speed, in bytes per millisecond
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
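    Gathering them is a single call; a minimal sketch of both modes (the workload mode needs a representative period between START and STOP):

    exec dbms_stats.gather_system_stats('NOWORKLOAD');
    -- or workload statistics:
    exec dbms_stats.gather_system_stats('START');
    -- ...let a representative workload run...
    exec dbms_stats.gather_system_stats('STOP');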
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Gathering Statistics in 10.7 NCA with 8i Database

    In preparation for the 11i upgrade we are first migrating our database to 8i
    (strongly recommended Category 1 pre-upgrade step 5).
    I have contacted Oracle Support on whether it is recommended to gather
    statistics in a 10.7 NCA environment (optimizer_mode remains RULE). Oracle
    Support says NOT to gather statistics in a 10.7 environment.
    This is contradictory information to several documents I have found.
    Furthermore, Oracle has provided ADXANLYZ.sql and FND_STATS for the 10.7
    environment.
    The following sources recommend gathering statistics in a 10.7 environment:
    1) 10.7 Installation Manual (A47542-1) page A-16
    2) 11i Upgrade Manual (A69411-01) page 2-5
    3) Metalink note 1065584.6
    Can somebody please clarify? Your feedback is much appreciated.
    Thank you,
    Rich

    Originally posted by Rich Cisneros:
    "We will be running 10.7 NCA in a server partitioned mode for 6-8 months using 7.3.4.4 with 8.1.6.2. Should I gather statistics as part of the 8i database upgrade (still 10.7) or part of the 11i Application upgrade?"
    Rich,
    Gather Statistics is only relevant to databases running with optimiser mode = COST or CHOOSE. Apps 10.7 runs with optimiser mode = RULE so you don't need to gather statistics until you start your actual upgrade to 11i, which will run with CBO.
    Hope this makes it clear.
    Steve

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all,
    My DB is 11.1
    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME     NUM_ROWS   BLOCKS  SAMPLE_SIZE  LAST_ANALYZED  PARTITIONED  COMPRESSION
    O_FCT_BP1     112123170   843140     11212317  8/30/2011 3:5  NO           DISABLED
    LEON_123456   112096060   521984     11209606  8/30/2011 4:2  NO           ENABLED
    O_FCT         115170000   486556       115170  8/29/2011 6:3  YES
    SQL> SELECT COUNT(*) FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT';

      COUNT(*)
           112

    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent =>10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /

    It takes 2 mins each for the first two tables to gather the statistics, but more than 10 mins for the partitioned table.
    The time of collecting statistics accounts for a large part of total batch time.
    And most jobs in the batch are full loads, in which case all partitions and subpartitions are affected and we can't gather just specified partitions.
    Does anyone have some experiences on this subject? Thank you very much.
    Best regards,
    Leon

    Hi Leon
    Why don't you gather stats at the partition level? If your partitions' data is not going to change after a day (date range partitioning, for example), you can simply gather at the partition level (see the sketch below) with
    GRANULARITY=>'PARTITION' for partition level and
    GRANULARITY=>'SUBPARTITION' for subpartition level
    You are gathering global stats every time which you may not require.
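    For example, a minimal sketch of a partition-level gather (the partition name P_20110830 is hypothetical):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    partname         => 'P_20110830',
                                    granularity      => 'PARTITION',
                                    estimate_percent => 10,
                                    degree           => 4,
                                    cascade          => false);
    END;
    /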

  • Implications of not gathering statistics

    Hi,
    I have a scenario that is provoking vigorous debate in my workplace and may do so here.
    We have a table with about 70 million rows in it. It grows at the rate of about 3-4 million rows a month, the rows being loaded daily rather than in one lot at the end of the month. Gathering statistics takes about 6 hours and takes a significant part of our 9 hour window for completion of various batch jobs, including loading the new rows.
    The new rows do have quite different values in certain columns (some indexed) to the existing data. However, as these rows are processed over the course of a week they will come to look just like the existing rows.
    The action we're considering is to stop gathering statistics after every large data load and instead do this every few months, on the basis that the new data should not significantly skew the balance of the existing data. However, this has divided opinions.
    The database is running on Oracle 10g R2.
    Any thoughts?
    Russell.

    Oracle's default collection may or may not be the best for you given the size.
    For a table this large you should have the partitioning option. If you do, then you should write your own stats collection to look only at active partitions and, if possible, set archival data partitions' tablespaces to READ ONLY (see the sketch below).
    Then with respect to collection ... do some research on what estimate percentage gives you stats good enough to support a good plan. In 10gR2 this may be a very high number. With 11g and above the default collection seems sufficiently improved you can trust it.
    SB advises "let Oracle be Oracle" and I agree. Right up until doing that hurts performance and interferes with the primary purpose of the database: Supporting your organization and its customers.
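    A rough sketch of that approach (BIG_FACT, P_CURRENT and TS_2010 are all hypothetical names):

    -- gather stats only on the active partition
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'BIG_FACT', partname => 'P_CURRENT', granularity => 'PARTITION');
    -- freeze historical data so it no longer needs attention
    ALTER TABLESPACE ts_2010 READ ONLY;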

  • How important is gathering statistics on SYS objects?

    Hi,
    How important is gathering statistics on data dictionary tables and other X$ tables in the SYS schema? Is it bad to keep these statistics? Recently our Sr. DBA deleted all the SYS schema stats, saying that they would adversely affect DB performance. Is that true?
    Regards
    Satish

    Hi Satish,
    *10g:*
    A new DBA task in Oracle Database 10g is to generate statistics on data dictionary objects which are contained in the SYS schema. The stored procedures dbms_stats.gather_database_stats and dbms_stats.gather_schema_stats can be used to gather the SYS schema stats. Here is an example of using dbms_stats.gather_schema_stats to gather data dictionary statistics:
    EXEC dbms_stats.gather_schema_stats('SYS');
    *9i*
    While it is supported in 9.2 to gather statistics on the data dictionary and fixed views, doing so isn't the norm.
    There is a bug, fixed only in 10gR2 (not expected to be back-ported to 9.2), that caused this error. The fix is: don't generate statistics against SYS, especially not the fixed tables.
    For this query, let's see if we can get a better plan by removing statistics or by getting better statistics, or if we need to do something else to tune it. Take the SYS statistics as before, but with gather_fixed => false.
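    That call would look something like this (a sketch, assuming the gather_fixed parameter of gather_schema_stats, which defaults to FALSE):

    EXEC dbms_stats.gather_schema_stats('SYS', gather_fixed => FALSE);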
    I would like for you to test first by deleting the statistics on these two X$ tables and see how the query runs (elapsed time, plan).
    EXEC DBMS_STATS.DELETE_TABLE_STATS('SYS','X$KQLFXPL');
    EXEC DBMS_STATS.DELETE_TABLE_STATS('SYS','X$KGLOB');
    Then you can take statistics on them using gather_table_stats and check again (elapsed time, plan).
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS','X$KQLFXPL');
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS','X$KGLOB');
    The issue with this is that the contents of these fixed views, particularly x$kqlfxpl, can change dramatically. Gathering fixed object statistics may help now and cause problems later as the contents change.
    Warning, this is a bit dangerous due to latch contention, see the following note. I've supported a couple of very busy systems that were completely halted for a time due to latch contention on x$kglob due to monitoring queries (particularly on v$open_cursors).
    Note.4339128.8 Ext/Pub Bug 4339128 - Heavy latch contention from queries against library cache views.
    Hope this answers your question . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Gathering statistics on interMedia indexes and tables

    Has anyone found any differences (like which one is better or worse) between using the ANALYZE sql command, dbms_utility package, or dbms_stats package to compute or estimate statistics for interMedia text indexes and tables for 8.1.6? I've read the documentation on the subject, but it is still unclear as to which method should be used. The interMedia text docs say the ANALYZE command should be used, and the dbms_stats docs say that dbms_stats should be used.
    Any help or past experience would be greatly appreciated.
    Thanks,
    jj

    According to the Support Document "Using statistics with Oracle Text" (Doc ID 139979.1), no:
    Q. Should we gather statistics on the underlying DR$/DR# tables? If yes/no, why?
    A. The recommendation is NO. All internal recursive queries have hints to fix the plans that are deemed most optimal. We have seen in the past that statistics on the underlying DR$ tables may cause query plan changes leading to serious query performance problems.
    Q. Should we gather statistics on Text domain indexes ( in our example above, BOOKS_INDEX)? Does it have any effect?
    A: As documented in the reference manual, gathering statistics on Text domain index will help CBO to estimate selectivity and costs for processing a CONTAINS() predicate. If the Text index does not have statistics collected, default selectivity and cost will be used.
    So 'No' on the DR$ tables and indexes, 'yes' on the user table being indexed.
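    In other words, a minimal sketch using the BOOKS/BOOKS_INDEX names from the quoted document (the DR$BOOKS_INDEX$I name follows the usual DR$<index>$I convention and is assumed here):

    exec dbms_stats.gather_table_stats(user, 'BOOKS', cascade => FALSE);  -- yes: the indexed user table
    exec dbms_stats.gather_index_stats(user, 'BOOKS_INDEX');              -- yes: the Text domain index
    exec dbms_stats.delete_table_stats(user, 'DR$BOOKS_INDEX$I');         -- no: remove any stats on the internal table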

  • Relation between computing and gathering statistics

    Hi gurus,
    What is the relation between computing and gathering statistics for a database object? Are they mutually dependent, or does one have nothing to do with the other? How do they affect the performance of a database?
    Please don't redirect; just the bottom line is expected (be specific).
    Thanks in advance
    anirban

    "Computing" is the term used for collecting 100% statistics with the ANALYZE command.
    dbms_stats ("gather stats") is a newer package provided by Oracle, and it is recommended to use it instead of ANALYZE. You can also take 100% stats, like COMPUTE with the ANALYZE command, by passing estimate_percent => NULL to the gather stats package:
    exec dbms_stats.gather_table_stats('SCHEMA','TABLE',cascade=>true,estimate_percent=>NULL);
    Jaffar

  • Does concurrent Gather Schema Statistics generate histograms?

    Hello:
    Does anybody know if concurrent Gather Schema Statistics generates histograms?
    Thank you.
    Alex.

    Alex,
    When Gather Schema Statistics is executed, it reads FND_HISTOGRAM_COLS and builds the histograms.
    Performance Tuning the Apps Database Layer
    http://blogs.oracle.com/stevenChan/2007/05/performance_tuning_the_apps_da.html
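    You can check which columns it will build histograms for with a query like this (a sketch; the HSIZE column name is an assumption):

    SELECT table_name, column_name, hsize
    FROM   applsys.fnd_histogram_cols
    ORDER  BY table_name, column_name;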
    Regards,
    Hussein

  • Statistics gathering in 10g - Histograms

    I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    The following is a post from the Oracle forums.
    Statistics
    In the above post Mr Lewis mentions about adding
    method_opt => 'for all columns size 1' to the DBMS job
    And in the same forum post Mr Richard Foote has mentioned that
    "Not only does it change from 'FOR ALL COLUMNS SIZE 1' (no histograms) to 'FOR ALL COLUMNS SIZE AUTO' (histograms for those tables that Oracle deems necessary based on data distribution and whether sql statements reference the columns), but it also generates a job by default to collect these statistics for you.
    It all sounds like the ideal scenario, just let Oracle worry about it for you, except for the slight disadvantage that Oracle is not particularly "good" at determining which columns really need histograms and will likely generate many many many histograms unnecessarily while managing to still miss out on generating histograms on some of those columns that do need them."
    http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/
    Our environment: Windows 2003 Server, Oracle 10.2.0.3 (64-bit).
    We use the following script for our analyze job.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname     => 'username',
                                    tabname     => 'TABLE_NAME',
                                    method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
                                    granularity => 'ALL',
                                    cascade     => TRUE,
                                    degree      => DBMS_STATS.DEFAULT_DEGREE);
    END;
    /
    This analyze job runs a long time (8 hrs) and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts has kind of made me confused and I want to be sure we are using the correct options.
    I would appreciate any suggestions, insight or further readings regarding the same.
    Appreciate your time
    Thanks
    Niki

    Niki wrote:
    I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    This analyze job runs a long time (8 hrs) and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts has kind of made me confused and I want to be sure we are using the correct options.

    As the author of one of the posts cited, let me make some comments. First, I would always recommend starting with the defaults. All too often people "tune" their dbms_stats call only to make it run slower and gather less accurate stats than if they did absolutely nothing and let the default autostats job gather stats in the maintenance window.
    With your dbms_stats command I would comment that granularity => 'ALL' is rarely needed and certainly adds to the stats collection times. Also, if the data has not changed enough, why recollect stats? This is the advantage of using options => 'GATHER STALE'.
    You haven't mentioned what kind of application your database is used for: OLTP or data warehouse. If it is OLTP and the application uses bind values, then I would recommend disabling or manually collecting histograms (bind peeking and histograms should not be used together in 10g) using size 1 or size repeat. Histograms can be very useful in a DW where skew may be present.
    The one non-default option I find myself using is degree => dbms_stats.auto_degree. This allows dbms_stats to choose a DOP for the gather based on the size of the object. This works well if you don't want to specify a fixed degree or you would like dbms_stats to use a different DOP than the one the table is decorated with.
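    Putting those two suggestions together, a minimal sketch (the schema name MYAPP is hypothetical):

    exec dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE REPEAT');
    exec dbms_stats.gather_schema_stats(ownname => 'MYAPP', options => 'GATHER STALE', degree => dbms_stats.auto_degree);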
    Hope this helps.
    Regards,
    Greg Rahn
    http://structureddata.org

  • 10.2.0.4 CBO behavior without histograms and binds/literals

    Hello,
    I have a question about the CBO and the collected statistic values LOW_VALUE and HIGH_VALUE. I have seen the following on an Oracle 10.2.0.4 database.
    The CBO decides on a different execution plan depending on whether we use bind variables (without bind peeking) or literals - no histograms exist on the table columns.
    Unfortunately I didn't export the statistics to reproduce this behaviour on my test database, but it was "something" like this.
    Environment:
    - Oracle 10g 10.2.0.4
    - Bind peeking disabled (_optim_peek_user_binds=FALSE)
    - No histograms
    - No partitioned table/indexes
    The table (TAB) has 2 indexes on it:
    - One index (INDEX A1) includes the date (which was a NUMBER column); the values in this column spread from 0 (LOW_VALUE) up to 99991231000000 (HIGH_VALUE).
    - One index (INDEX A2) includes the article number, which was very selective (distinct keys nearly the same as num rows).
    Now the query looks something like this:
    SELECT * FROM TAB WHERE DATE BETWEEN :DATE1 AND :DATE2 AND ARTICLENR = :ARTNR;

    And the CBO calculated that the best execution plan would be an index range scan on both indexes with a btree-to-bitmap conversion: compare the returned rowids of both indexes and then access the table TAB with the result.
    What the CBO didn't know (because of the disabled bind peeking) was that the user had entered DATE1 (=0) and DATE2 (=99991231000000), so the index access on index A1 doesn't make any sense.
    Now I executed the query with literals just for the DATE, so the query looks something like this:

    SELECT * FROM TAB WHERE DATE BETWEEN 0 AND 99991231000000 AND ARTICLENR = :ARTNR;

    And then the CBO did the right thing: it just accessed index A2, which was very selective, and then accessed the table TAB by ROWID.
    The query was much faster (by a factor of 4 to 5) and the user was happy.
    As I already mentioned, there were no histograms, so I was very amazed that the execution plan changed because of using literals.
    Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
    Until now i thought that these values will only be used in case of histograms.
    Thanks and Regards

    oraS wrote:
    As I already mentioned, there were no histograms, so I was very amazed that the execution plan changed because of using literals.
    Does anybody know in which cases the CBO includes the values in LOW_VALUE and HIGH_VALUE in its execution plan calculation?
    Until now I thought that these values would only be used in the case of histograms.

    I don't have any references in front of me to confirm, but my estimation is that LOW_VALUE and HIGH_VALUE are used whenever there is a range-based predicate, be it BETWEEN or any one of the <, >, <=, >= operators. Generally speaking, the selectivity formula is the range defined in the query over the HIGH_VALUE/LOW_VALUE range. There are some specific variations of this due to including the boundaries (<= vs <) and NULL values. This makes sense to use when the literal values are known or the binds are being peeked at.
    However, when bind peeking is disabled Oracle has no way to use the general formula above for an estimation of the rows, so it most likely uses the 5% rule. Since your query has a BETWEEN clause, the estimated selectivity becomes 5% * 5%, which equals 0.0025. This estimated cardinality could be what made the CBO decide to use the index path versus ignoring it completely.
    If you can post some sample data to reproduce this test case we can confirm.
    Just a follow-up question. Why is a date being stored as a number?
    HTH!
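    A skeleton test case along the requested lines (all names and data are invented; the date column is called DT here because DATE is a reserved word):

    CREATE TABLE tab AS
      SELECT ROWNUM AS articlenr,
             MOD(ROWNUM, 2) * 99991231000000 AS dt
      FROM   dual CONNECT BY LEVEL <= 100000;
    CREATE INDEX a1 ON tab(dt);
    CREATE INDEX a2 ON tab(articlenr);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'TAB', method_opt => 'FOR ALL COLUMNS SIZE 1', cascade => TRUE);
    -- compare the plans for binds vs. literals on the range predicate:
    EXPLAIN PLAN FOR SELECT * FROM tab WHERE dt BETWEEN 0 AND 99991231000000 AND articlenr = :artnr;
    SELECT * FROM TABLE(dbms_xplan.display);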

  • Gathering statistics manually in 10g !!

    Hi, all.
    The database is 2 node RAC 10.2.0.2.0.
    By default, the 'GATHER_STATS_JOB' job is enabled and it takes statistics of
    "ALL objects in the database" at 22:00 on a daily basis
    (objects that have changed more than 10% of their rows).
    I found that the default job was causing library cache lock wait events by
    invalidating objects too frequently in the shared pool
    (especially in a RAC environment).
    Therefore, I am considering taking statistics only for application schema
    on a weekly basis by using "DBMS_STATS.GATHER_SCHEMA_STATS" procedure.
    "EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('NMSUSER',DBMS_STATS.AUTO_SAMPLE_SIZE);"
    takes statistics of all objects of NMSUSER schema.
    I would like to take statistics only for objects that have more than "10% changes"
    of rows.
    What is the option for that?
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref8115
    Thanks and Regards.
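
    (For the "10% changes" question: the documented route is the OPTIONS parameter. A minimal sketch, assuming table monitoring is enabled, which it is by default in 10g with STATISTICS_LEVEL = TYPICAL:)

    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'NMSUSER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, options => 'GATHER STALE');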

    Be very careful taking statistics with default setting of method_opt => 'FOR ALL COLUMNS SIZE AUTO' in 10g.
    Check the number of histograms you have (dba_tab_columns and dba_tab_histograms).
    The results may not be what you expect ...
    And they may be playing a part in your latch issues.
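    For example (NMSUSER taken from the post above):

    SELECT table_name, column_name, histogram
    FROM   dba_tab_columns
    WHERE  owner = 'NMSUSER'
    AND    histogram <> 'NONE';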
    Cheers
    Richard Foote

  • Gathering statistics in 8.1.7.4

    Hi
    In 10g there is a default job to collect statistics.
    Can somebody refer me to code that can do the same on 8.1.7.4?
    Also, is the "bytes" column in dba_tables updated automatically when rows are inserted and deleted,
    or do we need dbms_stats.gather_table_stats(), same as in 10g?
    Thanks

    Thanks very much.
    It looks like "bytes" in dba_tables DOES need statistics to be gathered,
    but "bytes" in dba_segments does NOT need statistics to be gathered.
    I tested it, so I think I can rely on that.
    Also:
    also
    SQL> sho parameter opt

    NAME                               TYPE        VALUE
    ---------------------------------- ----------- --------
    object_cache_optimal_size          integer     102400
    optimizer_features_enable          string      8.1.7
    optimizer_index_caching            integer     0
    optimizer_index_cost_adj           integer     100
    optimizer_max_permutations         integer     80000
    optimizer_mode                     string      CHOOSE
    optimizer_percent_parallel         integer     0
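
    For the original scheduling question, a minimal sketch using DBMS_JOB, which exists in 8.1.7 (the schema name SCOTT and the 22:00 window are assumptions):

    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      DBMS_JOB.SUBMIT(job       => l_job,
                      what      => 'DBMS_STATS.GATHER_SCHEMA_STATS(''SCOTT'');',
                      next_date => TRUNC(SYSDATE) + 1 + 22/24,
                      interval  => 'TRUNC(SYSDATE) + 1 + 22/24');  -- daily at 22:00
      COMMIT;
    END;
    /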

  • Automatic Gathering Statistics Script

    Hi,
    We need to schedule some kind of job in our database in order to monitor the database and regularly gather statistics on database objects (at the moment most tables have never been analyzed). We have read the Oracle documentation on the DBMS_STATS package, but we are a bit confused by the enormous number of possibilities (gather_schema_stats, gather_table_stats, ...).
    Could someone provide some help on the basic steps of a simple script that could be scheduled daily in order to maintain fresh table statistics in the database?
    Thanks

    "Mmm, these docs refer to the 10g version. We have 9.2.0.7 Standard Edition."
    You neglected to mention that in your original post, so one had to assume that you were on a 'supported release' of Oracle - which you aren't:
    http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats2.htm#1012974
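
    On 9.2 a simple daily script might look like this (a sketch; MYAPP is a hypothetical schema, and 'GATHER STALE' requires table monitoring, which is not on by default in 9.2):

    EXEC DBMS_STATS.ALTER_SCHEMA_TAB_MONITORING('MYAPP', TRUE);  -- one-off setup
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MYAPP', options => 'GATHER STALE', method_opt => 'FOR ALL COLUMNS SIZE 1', cascade => TRUE);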

  • Affect of Gathering Statistics on Performance........

    Hello All,
    When do schema/table statistics need to be gathered, and why?
    During peak business time or in non-business hours?

    With SQL Plan Management, I found that it can really screw up your database performance if you have queries which don't make use of the bind variables.
    I found this out when I enabled it for our Production Exadata machine: most of the queries are well-coded, boxed queries which make use of bind variables perfectly.
    However, there was one application (a .NET one) which simply refused to use them. Every time it executed (and it executed a LOT), it would use dynamic statements with literal values. This caused the Plan Management to BALLOON from something reasonable to 600 billion rows. At the start, no-one noticed and people liked the plan stability, but then the whole database started to crawl because every statement referenced the HUGE table.
    Needless to say, the coders of the application wouldn't budge from their insistence on literal values because THEIR code was dynamically generated from a 'core' which had no understanding of them. I was very disappointed when I had to turn it off as it's a great feature. Bad coding beats great features anytime, though.
    As for gathering stats, I've found that the automated job which runs in the maintenance window does a fairly good job. We haven't changed the estimate_percent defaults, and sometimes we have to compute statistics on a couple of awkward tables (i.e. estimate_percent at 100) manually. But it's pretty solid.
    If you wanted to know what tables have 'stale' stats, you can query dba_tab_statistics and look for the stale_stats column.
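    For example:

    SELECT owner, table_name, stale_stats, last_analyzed
    FROM   dba_tab_statistics
    WHERE  stale_stats = 'YES';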
    Mark
