Autogrowth, shrink and database performance

I have already asked some questions about autogrowth and shrink, but I still have a few doubts.
1) I have a database which grows by about 300 MB every month. Is it best to set its autogrowth to 300 MB? It is currently set to 10 MB.
2) I have many small databases which hold only about 50 MB of data each, but their physical size is about 900 MB (so 850 MB is free space). I have not shrunk these databases, because if I shrink them, autogrowth will occur again when data is added. But data is rarely added to these databases; I don't think they will grow by more than 200 MB over the next year. So should I shrink them? If I keep them as they are, will that cause any performance problem? Or will having that much free space (more than 90%) in a database cause any problem?
I think most people believe that a large physical size causes performance problems. Is that true or not?
3) The SQL Server 2008 R2 Express database size limit is 10 GB. Is that the physical size of the mdf and ldf files together? I need to know this to decide whether I should shrink the database at some point.

1) If this is a database where you expect performance, the current autogrowth setting has the following disadvantages:
- It will affect performance. The database files grow too often, which is unnecessary work for SQL Server during normal operations. Growing database files takes time, especially if you have not turned on Instant File Initialization, which is highly recommended by the way. I would prefer to grow databases during off-peak hours, roughly once a year.
- The database file(s) get fragmented. If SQL Server keeps allocating small 10 MB chunks, the file will eventually consist of hundreds of fragments instead of just a few. Fragmentation reduces IO performance.
I would set the database file sizes to what you expect them to grow to within a year, plus 10%, and set autogrowth to perhaps 300 MB as a safety net that kicks in if your calculations were off. Next year, manually grow the database again according to your new one-year prediction. That way you will hopefully get less fragmentation and very few operational disturbances from autogrowth during peak hours.
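A minimal sketch of that approach, assuming a database named MyDb (all logical file names and sizes below are placeholders; check your own with the first query):

    -- Check current logical file names and sizes (size is in 8 KB pages):
    SELECT name, size/128 AS size_mb, growth FROM MyDb.sys.database_files;
    -- Pre-grow the data file to the one-year prediction plus ~10%,
    -- and set a 300 MB increment as the safety net:
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, SIZE = 4000MB, FILEGROWTH = 300MB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 300MB);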
2) Keeping the database as is will not cause anything other than unnecessary disk space consumption; a lot of free space does not affect performance negatively. SQL Server does not do more IO just because you have large file(s). Having too little free space in a database, on the other hand, can cause internal fragmentation and will affect performance negatively. However, you should not fill the spindles to more than 80% (some say 85%), since your storage will be noticeably slower.
You can shrink the database, but that usually causes performance problems because the shrink operation creates massive internal fragmentation in the database. You can fix that by rebuilding all indexes and tables afterwards, starting with the clustered indexes. Just don't shrink the database so much that it immediately has to grow again to accommodate the index rebuilds.
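A hedged sketch of that order of operations (logical file name, target size and table name are placeholders; pick a target that leaves room for the rebuilds):

    USE MyDb;
    -- Shrink the data file to a target size in MB:
    DBCC SHRINKFILE (MyDb_data, 500);
    -- Then rebuild indexes to repair the fragmentation the shrink caused,
    -- clustered indexes first:
    ALTER INDEX ALL ON dbo.MyTable REBUILD;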
Hope this helps!
Peter

Similar Messages

  • Application and Database Performance Issue?

    Hi
    I am designing tables. Can anyone suggest which is best for database and application performance:
    1) One table with more columns, so the developer can work with a single query
    2) Dividing the table into two parts, so the developer works with two queries
    3) Using table partitioning
    I would also like to know the maximum number of records that can be stored in a table in 11g and 10g.
    regards

    user9098698 wrote:
    > I am designing tables. Can anyone suggest which is best for database and application performance:
    > 1) One table with more columns, so the developer can work with a single query
    > 2) Dividing the table into two parts, so the developer works with two queries
    This decision should come from normalizing your data to 3NF. Only after that is done should you consider de-normalizing it for performance, and then only after careful testing and consideration of other options.
    > 3) Using table partitioning
    ????
    > I would also like to know the maximum number of records that can be stored in a table in 11g and 10g.
    > regards
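    As a minimal illustration of the normalization advice above (all table and column names are hypothetical), a wide table mixing order and customer attributes splits into two 3NF tables:

        -- Wide table: orders(order_id, order_date, cust_name, cust_city, ...)
        -- In 3NF the repeated customer attributes move to their own table:
        CREATE TABLE customers (
            cust_id   NUMBER PRIMARY KEY,
            cust_name VARCHAR2(100),
            cust_city VARCHAR2(100)
        );
        CREATE TABLE orders (
            order_id   NUMBER PRIMARY KEY,
            order_date DATE,
            cust_id    NUMBER REFERENCES customers(cust_id)
        );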

  • HR infotype log in PCL4 and overall performance

    Hi there,
    There has been a few threads about PCL4 performance with regards to reading, but I have a slightly different question:
    We are working on an export program for HR master data and are considering using logging in PCL4 to be able to export only changed fields in the infotypes. To achieve this we need to add quite a lot of extra fields and infotypes to the configuration in the IMG.
    Does anyone have any experience about how additional fields and infotypes affect runtime and database performance of the system? How optimized is the system with regards to writing to this cluster?
    It will obviously cause more data to be logged, and the database will grow slightly faster, but does it decrease responsiveness of PA30/40 for the end users? Is it possible to archive old data from this cluster? I'm guessing that it won't be a big problem, but any feedback is greatly appreciated.
    Best regards,
    Lars G. Gudbrandsen

    Hi Lars,
    Probably you would get a better response in the HCM section as opposed to ABAP.
    Maybe you can use change pointers and BAdIs instead to achieve what you want, but I am not 100% sure of the requirement.
    Additional fields and infotypes don't impact the system negatively, in my opinion. It wouldn't affect PA30 unless the specific infotype is selected, and provided it has been created correctly in PM01 it should be fine, also depending on how many fields you are talking about, of course. PA40 would only be impacted for those transactions in which the infotype is included.
    As for archiving, I am not sure, but once again I think the HCM forum is your best bet.

  • Performance with MySQL and Database connectivity toolbox

    Hi!
    I'm having quite some problems with the performance of MySQL and Database connectivity toolbox. However, I'm very happy with the ease of using database connectivity toolbox. The background is:
    I have 61 variables (ints and floats) which I would like to save in the MySQL database. This is no problem; however, the loop time increases from 8 ms to 50 ms when using the database. I have concluded that it has to do with the DB Tools Insert Data.vi and I think that I have some kind of performance issue with this VI. The CPU never reaches more than 15% of its maximum. I use a default setup and connect through ODBC.
    My questions are:
    1. I would like to save 61 variables every 8-10 ms; is this impossible with this solution?
    2. Is there any way of increasing the performance of the DB Tools Insert Data.vi or use any other VI?
    3. Is there any way of adjusting the MySQL setup to achieve better performance?
    Thank you very much for your time.
    Regards,
    Mattias

    First of all, thank you very much for your time. All of you have been really good support to me.
    >> Is your database on a different computer?  Does your loop execute 61 times? 
    Database is on the same computer as the MySQL server.
    The loop saves 61 values at once to the database, in one SQL-statement.
    I have now added the front panel and block diagram for my test VI. I have implemented the queue system and separate loops for producer and consumer. However, since the queue is building up faster than the consumer loop consumes values, the queue grows quite fast and the disk starts working.
    The test database table that I add data to is created by a simple:
    create table test(aa int, bb char(15));
    ...I'm sure that this can be improved in some way.
    I always open and close the connection to the database "outside the loop". However, it still takes some 40-50 ms to save the data to the database table - so, unfortunately, no progress so far. I currently just want to save the data.
    Any more advise will be gratefully accepted.
    Regards,
    Mattias
    Message Edited by mattias@hv on 10-23-2007 07:50 AM
    Attachments:
    front panel 2.JPG 101 KB
    block diagram.JPG 135 KB
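    Two database-side ideas worth trying, as a hedged sketch (neither comes from the thread; the second applies only to InnoDB tables and trades up to about a second of durability for speed):

        -- Batch many rows into one INSERT so a single ODBC round trip
        -- carries many values:
        INSERT INTO test (aa, bb) VALUES (1, 'a'), (2, 'b'), (3, 'c');
        -- InnoDB only: flush the log once per second instead of per commit:
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;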

  • New HP EVA6000 SAN and now bad database performance problems

    Hello,
    we changed our SAN hardware to an HP EVA6000 and moved all data there.
    The storage system is intended for all servers (file/print, Exchange, Oracle databases and MSSQL databases).
    Following HP's best practice papers, we created one big disk group (FATA hard disks) and created virtual disks for our servers.
    After doing this, database performance was terribly bad!
    Random IO in particular is absolutely dreadful.
    As a first countermeasure we created a second disk group on faster hard disks and moved the database contents there. We analyzed the IO and moved several database files to different virtual disks.
    Performance is better now, but still not on par with the 4-year-old SAN system!
    Of course we questioned HP and even had them do a performance analysis, but so far we have no solution... The performance analysis report will be available on Thursday.
    Has anybody had the same experience, or how did you configure the database and EVA SAN to achieve acceptable performance?

    Hi,
    I'm not an Oracle person, but do work with EVA SAN's.
    Your 48-disk FATA disk group is capable of 4800 I/O operations per second [48 x 100], but the disks are only rated for a 30% duty cycle and spin at 7200 rpm.
    The I/O rating of the 16-drive FC disk group depends on the spindle speed: 10K rpm disks are rated at 130 I/O per sec [2080 for the group], and 15K rpm disks at 170 I/O per sec [2720 for the group]. Both are rated for a 100% duty cycle.
    I seem to recall having read somewhere that Oracle prefers to have its logs on separate storage from its data.
    If your shelves had the spare disk slots I would put in 72 GB 15K rpm disks up to the capacity required {+ overhead } + head room for reasonably predictable growth over the anticipated life of the equipment.
    Here is a link to the HP Best Practice guide for EVA 4/6/8000's
    http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf
    I hope this helps you understand the storage that you are working with a bit better. The old saying of "more heads make for better performance" is still true, but budget can have some effect on performance. ;-)
    Jim

  • How to improve database and application performance

    Hi,
    Can anybody please help me out with how we can improve database and application performance?
    Regards,
    Bhatia

    bhatia wrote:
    > Hi
    > Can anybody please help me out with how we can improve database and application performance?
    > Regards,
    > Bhatia
    There is no simple answer. There is no DATABASE_FAST=TRUE initialization parameter. There are myriad reasons why an application performs poorly. It could be that the application (code and data relationships) is poorly designed. It could be that individual SQL statements are poorly written. It could be that you don't have enough CPU/memory/disk bandwidth/network bandwidth.
    You need to determine the root cause of the poor performance and address it. If your application is poorly designed, you can tune the database until the cows come home and it won't make any difference. If you are trying to run 100k updates per second against a database hosted on hardware that only meets the minimal requirements to install Oracle ... well, hopefully you get the picture.
    First, go to tahiti.oracle.com. Drill down to your selected Oracle product and version. There you will find the complete doc library. Find the Performance Tuning Guide.
    Second, go to amazon.com and browse titles by Tom Kyte and Cary Millsap. I particularly recommend "Effective Oracle by Design" and "Optimizing Oracle Performance", though I see a lot of new titles that look promising (I think I'll be doing some buying!).

  • Direct IO,Asynchronous IO and relationship with database performance

    What is the concept of direct IO and asynchronous IO from the DBA perspective? How do they relate to database performance?
    Any simple explanation will be highly appreciated. Thanks in advance.

    918868 wrote:
    > What is the concept of direct IO and asynchronous IO from the DBA perspective? How do they relate to database performance?
    > Any simple explanation will be highly appreciated. Thanks in advance.
    Yet another interview question from you?

  • ERROR: Exception occured while encrypting the configuration and database

    I'm facing the below issue/error during OIM 11g R2 configuration (fresh install). Resolutions from other blogs for the same error (DOMAIN_HOME misconfigured) aren't helping in my case.
    Thanks for your help
    updateMLSLocale:ORACLE_HOME :/fmw/Oracle_IDM1
    updateMLSLocale:LOCALE_PROPERTIES_FILE :/fmw/Oracle_IDM1/inventory/Scripts/ext/jlib/oim/OIMLocales.properties
    java.lang.Exception: Exception occured while encrypting the configuration and database
      at oracle.as.install.oim.config.util.EncryptConfigurationAndDB.encryptConfigurationAndDatbase(EncryptConfigurationAndDB.java:239)
      at oracle.as.install.oim.config.OIMConfigManager.encryptDB(OIMConfigManager.java:1035)
      at oracle.as.install.oim.config.OIMConfigManager.configureOIM(OIMConfigManager.java:891)
      at oracle.as.install.oim.config.OIMConfigManager.doExecute(OIMConfigManager.java:583)
      at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:371)
      at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
      at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
      at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
      at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:64)
      at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:160)
      at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
      at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
      at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.Exception: Exception occured while encrypting the database
      at oracle.as.install.oim.config.util.EncryptDataBase.encryptDBContent(EncryptDataBase.java:159)
      at oracle.as.install.oim.config.util.EncryptConfigurationAndDB.encryptConfigurationAndDatbase(EncryptConfigurationAndDB.java:230)
      ... 12 more
    Caused by: java.lang.Exception: Exception occured in updateMLSLocale method while updating Locale to OIM DB
      at oracle.as.install.oim.config.util.EncryptDataBase.updateMLSLocale(EncryptDataBase.java:318)
      at oracle.as.install.oim.config.util.EncryptDataBase.encryptDBContent(EncryptDataBase.java:125)
      ... 13 more
    Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OIM.UK_MLS_LOCALE_MLS_LOCALE_CODE) violated
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
      at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
      at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
      at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
      at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
      at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)
      at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
      at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)
      at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3904)
      at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1512)
      at oracle.as.install.oim.config.util.EncryptDataBase.updateMLSLocale(EncryptDataBase.java:310)
      ... 14 more

    Hi
    I faced this issue before; reinstalling is the option you have. Verify the version of RCU before you start creating the schemas. Set all pre-DB settings (hostname and IP address), and if the DB and OIM are on different machines, check pinging from both sides.
    Please drop all old schemas and create a new prefix for the fresh installation; don't reuse the old schemas.
    Let me know.
    Thanks,
    Ari
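    The ORA-00001 at the bottom of the stack suggests leftover rows from an earlier configuration attempt. As a purely hypothetical diagnostic (the table name is only inferred from the constraint name and should be verified first), you could check for pre-existing locale rows before dropping the schemas:

        -- Find the table behind the violated constraint:
        SELECT table_name FROM dba_constraints
         WHERE constraint_name = 'UK_MLS_LOCALE_MLS_LOCALE_CODE';
        -- Then inspect rows left behind by the previous run, e.g.:
        SELECT * FROM dev_oim.mls_locale;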

  • Shrinking a database after a cut-down

    Hi all,
    I have the scenario below; any hints or tips are most welcome...
    - we are using oracle 9.2.0.1
    - get a copy of a production database (80 GB) and load it into another DB
    - run a cut-down script that culls about 70% of the data, for use as test databases
    What steps do we need to take to shrink this database now? I recall an export and an import (with some option), then an export and then another re-import.
    We need to shrink the tables that have had data removed as well as shrink the tablespaces (data files?). The resulting database will have a very small growth factor.
    Is there a relatively easy way to achieve this?
    Thanks for any advice.

    When data is removed randomly from the database there is going to be fragmentation. The best way to deal with this is:
    1) Create one temporary tablespace, e.g. TEMPDATA
    2) For each tablespace:
    - move all objects to the TEMPDATA tablespace
    - calculate the total size of the objects in TEMPDATA
    - drop and recreate the tablespace with the required size
    - move the objects back from TEMPDATA
    3) Repeat step 2 for all tablespaces
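    A minimal sketch of step 2 for a single table (all names are placeholders; note that moving a table marks its indexes UNUSABLE, so they must be rebuilt):

        -- Move the segment to the holding tablespace:
        ALTER TABLE app.big_table MOVE TABLESPACE tempdata;
        -- ... drop and recreate the original tablespace at the reduced size ...
        -- Move the segment back and rebuild its indexes:
        ALTER TABLE app.big_table MOVE TABLESPACE users;
        ALTER INDEX app.big_table_pk REBUILD TABLESPACE users;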

  • Database Performance: Large execution time.

    Hi,
    I have a TPC-H database of size 1 GB. I am running a nested query with multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to execute (it was also fired for the point which can be considered the center of the selectivity range).
    Following is the query:
    select
         supp_nation,
         cust_nation,
         l_year,
         sum(volume)
     from
          (
               select
                   n1.n_name as supp_nation,
                   n2.n_name as cust_nation,
                   YEAR (l_shipdate) as l_year,
                   l_extendedprice * (1 - l_discount) as volume
              from
                   supplier,
                   lineitem,
                   orders,
                   customer,
                   nation n1,
                   nation n2
              where
                   s_suppkey = l_suppkey
                   and o_orderkey = l_orderkey
                   and c_custkey = o_custkey
                   and s_nationkey = n1.n_nationkey
                   and c_nationkey = n2.n_nationkey
                   and (
                        (n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
                         or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
                    )
                    and l_shipdate between '1995-01-01' and '1996-12-31'
                   and o_totalprice <= 246835
                   and c_acctbal <= -422.16
          ) as shipping
    group by
         supp_nation,
         cust_nation,
         l_year
    order by
         supp_nation,
         cust_nation,
         l_year
    Moreover, it has been observed that such types of queries (nested queries, subqueries, aggregation) take a very long time to execute compared to other databases. The above query took only 18 seconds to execute on an ORACLE server.
    The machine configuration and the database configuration are as follows:
    Machine:
    64-bit Windows Vista operating System.
    RAM: 8GB.
    CPU: 3.0 GHZ
    Database:
    Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
    Log Area: Volume: 1, Size: 1GB
    Data and Log are on same disk.
    Caches:
    I/O Buffer Cache: 1 GB
    Data Cache: 1 GB
    Catalog Cache: 30 MB
    Parameters:
    CacheMemorySize - 131072
    ReadAheadLobThreshold- 3000
    Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able to get better performance.
    How can I improve the performance? Is there any other parameter that remains to be set?

    > I have a TPC-H database of size 1 GB. I am running a nested query with multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to execute (it was also fired for the point which can be considered the center of the selectivity range).
    > Moreover, it has been observed that such types of queries (nested queries, subqueries, aggregation) take a very long time to execute compared to other databases. The above query took only 18 seconds to execute on an ORACLE server.
    Such general statements are usually total crap.
    MaxDB runs at many SAP customers and internally at SAP in many installations - even for BI systems.
    We don't know your Oracle server, we don't know the execution plans - so there's nothing to tell why it may be the case here.
    > Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
    It's a rule of thumb - having just one volume is a rather bad idea since you don't get parallel I/O with that.
    > Log Area: Volume: 1, Size: 1GB
    > Data and Log are on same disk.
    Although this is irrelevant for the query performance it's nonsense in productive environments and a performance killer as well.
    > I/O Buffer Cache: 1 GB
    > Data Cache: 1 GB
    Why don't you allow more cache?
    > Catalog Cache: 30 MB
    What for? Do you understand the catalog cache in MaxDB?
    It's a per-session setting...
    > Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able to get better performance.
    Can you be more specific here?
    What MaxDB version are you using? What parameter settings do you use?
    > How can I improve the performance? Is there any other parameter that remains to be set?
    How about showing us the execution plan for the statement and the index structure?
    How should we know what MaxDB does here that takes so much time?
    Did you have the DBanalyzer running while the query ran?
    TPC-H is a benchmark for ad-hoc, decision making support: did you enable any of the BI feature pack features of MaxDB? What about prefetching? What about table clustering, column compression, star join optimization ...?
    All in all - you left us here with "MaxDB is slower than Oracle" and nothing to work on.
    That's not useful in any way.
    Want some answers - provide some information!
    regards,
    Lars
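    For what it's worth, the execution plan Lars asks for can be produced in MaxDB by prefixing the statement with EXPLAIN (a sketch; run it against the same schema, e.g. in SQL Studio):

        -- Shows MaxDB's chosen access strategy (table order, index usage):
        EXPLAIN
        select supp_nation, cust_nation, l_year, sum(volume)
        from ...   -- rest of the query exactly as in the original post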

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, then rebuild the standby?

    I have a primary database into which I need to import a large amount of data and database objects. 1) Do I shut down the standby? 2) Turn off archive log mode? 3) Perform the import? 4) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you take an incremental (from SCN) backup from the Primary and restore it on the Standby.  That way, if, for example
    a. Only two out of 12 tablespaces are affected by the import, the incremental backup would effectively be only the blocks changed in those two tablespaces (and some other changes in system and undo) {provided that there are no other changes in the other ten tablespaces}
    b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is small
    Hemant K Chitale
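    A hedged sketch of that roll-forward using RMAN's incremental-from-SCN backup (the SCN and paths are placeholders; check the exact procedure in the documentation for your version):

        # On the primary: back up all blocks changed since the standby's SCN.
        RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
                FORMAT '/tmp/standby_roll_%U';
        # On the standby: catalog the pieces and apply them without redo.
        RMAN> CATALOG START WITH '/tmp/standby_roll_';
        RMAN> RECOVER DATABASE NOREDO;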

  • Regarding Database Performance

    Hi All,
    I have installed 10gR2 on RHEL4 (4 GB RAM, disk space is enough). One application (Oracle UCM) runs on it; it contains Apache and the content server. After 2-3 weeks, developers were saying the URL was taking a long time to open, so I gathered database statistics (and after that, daily stats gathering using the scheduler). After that, it was working fine. A week later they had the problem again. They are doing a lot of DML on the DB. I checked at the OS level using the top command, but the oracle user (the entire application is installed as oracle) is not consuming that much memory. pga_aggregate_target is set to about 500M, the SGA (sga_max_size 950M) is auto-tuned, the DB is about 8 GB in size, and workarea_size_policy is auto.
    Please suggest any solutions for improving database performance.
    Thanks,
    Manikandan.

    > daily gathering db stats using scheduler
    Done by default on V10+.
    > Please suggest any solutions for improving database performance.
    Ready, Fire, Aim!
    Is any OS resource the bottleneck; CPU, RAM, IO, network?
    During slow period what is reported by AWR?
    Please read these:
    When your query takes too long
    When your query takes too long ...
    How to Post a SQL statement tuning request
    HOW TO: Post a SQL statement tuning request - template posting
    Edited by: sb92075 on Jul 27, 2010 10:01 AM
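    For the AWR question above, the standard report script can be run from SQL*Plus (a sketch; AWR requires the Diagnostics Pack license):

        -- Prompts for the snapshot interval covering the slow period:
        SQL> @?/rdbms/admin/awrrpt.sql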

  • Coherence and database backend updates

    Hi
    I am new to Coherence. I like the features of Coherence: replicated cache, cache-through, etc.
    My question is: if I am using Coherence with cache-through and partitioned caching, and I have a back-end update on the data through an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there an event-driven mechanism to invalidate the cache so it reloads the data, or is this not a good practice in this scenario?
    Rgds
    Anil

    Hi Anil,
    it really depends on what you need to achieve.
    There is a very good wiki which describes most of the things you can do with Coherence at the url: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
    However, since you have your existing database model which you want to retain because you want the data still reside in the database, depending on the consistency requirements you might not be totally free in representing data in Coherence.
    The best feature of Coherence to significantly reduce the load on the database is the write-behind cache.
    Write-behind functionality allows you to coalesce multiple updates to the same DB row into a single update, as data is written out only after a certain amount of time, thereby combining the changes from multiple updates into a single one.
    It also allows ripe updates to multiple cached entries for which the primary copies reside in the same cache node to be written out in the same database operation (preferably in batch mode).
    Due to these behaviors write-behind has a profound effect on write-heavy applications.
    However, that mode of operation requires that any logic that needs to query the data-set consistently, and all operations changing the data-set, go through the cache, because the database is not guaranteed to be consistent. Therefore it might not be good for you.
    Another approach is that if you want to do your DB changes directly in the DB, you can simply cache data in whatever structures that suit your access patterns in a read-through cache, and if there are any changes to the database you invalidate entries which are stale.
    The cache structures can be whatever which you choose appropriate to your logic, you can cache single entries, you can cache entire top-down object hierarchies, you can cache query results keyed by the query parameters.
    The point is that you are free to choose the most appropriate structure of what to cache as opposed to the caching features of other frameworks which choose the caching structures to be aligned to their classes and not your needs.
    Just keep in mind that without doing serious locking (which adversely affects both read and write performance), between reading any two or more entries from the cache a change might have occurred to one or more of those entries. This means that when using multiple entries from the cache, there might not be any transaction-set in the database which contains all entries in the state which you were getting them.
    So if you need any such guarantees, then the data you need such guarantees on must reside in a single cache entry and that cache entry must have been retrieved from the database with a transaction which provides those guarantees at all (if you read data from the database with READ_COMMITTED isolation and with multiple queries, then you don't get that consistency even from the database, as some of the entries read by the previous operations in the transaction might have been overwritten when another transaction committed before subsequent read operations in your transaction).
    There can be other approaches as well.
    It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
    Best regards,
    Robert

  • Database performance degradation issue

    Hi,
    We are having a database performance problem.
    Oracle database 8.1.7.0
    When we use the statement:
    SQL> select name,value from v$sysstat where name ='redo buffer allocation retries';
    NAME VALUE
    redo buffer allocation retries 2540
    The redo buffer allocation retries value shown above is too big; it should not be.
    Currently we are having log_buffer = 65536 bytes (64 kb)
    Is it necessary to increase the size of log_buffer? Will increasing the size of log_buffer improve database performance to some extent?
    Also, regarding database buffer cache,
    SQL> SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME IN ('db block gets', 'consistent gets', 'physical reads');
    NAME VALUE
    db block gets 4365099
    consistent gets 1309280457
    physical reads 103708616
    From the above values, the buffer cache hit ratio is 1 - 103708616 / (4365099 + 1309280457) = 0.921052817.
    So, is it necessary to increase the size of the database buffer cache?
    With Regards

    Log_buffer 64k is likely too small. The default is 512k per CPU.
    Increasing log buffer will decrease the number of redo allocation retries.
    You need to set it to 512K or 1M.
    Buffer Cache Hit Ratio is a Meaningless Indicator of the Performance of the System, as Connor McDonald has demonstrated on http://www.oracledba.co.uk
    You'd better strive to reduce I/O.
    Also you will notice you need very big amounts of memory to get very little improvement.
    Personally I would probably do something if the BCHR was below 80 percent, but I know of situations where the problem is in the application and no value of db_block_buffers will be big enough.
    Hth
    Sybrand Bakker
    Senior Oracle DBA
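    A minimal sketch of the change (on 8.1.7, log_buffer is a static parameter, so edit init.ora and restart the instance; 1 MB is the upper value suggested above, not a universal setting):

        # init.ora: raise the redo log buffer from 64 KB to 1 MB
        log_buffer = 1048576

    After the restart, the retry rate can be rechecked; as a common rule of thumb, retries below roughly 1% of redo entries are considered acceptable:

        select name, value from v$sysstat
         where name in ('redo buffer allocation retries', 'redo entries');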

  • Database Performance Problem

    Hi,
    I am running Oracle 10g on Windows and I have
    SGA - 289406976
    Fixed Size- 1248576
    Variable Size - 96469696
    Database Buffer - 184549376
    Redo Buffer - 7139328
    I am enclosing the init.ora file for better understanding:
    # Cache and I/O
    db_block_size=8192
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=orcl
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\product\10.2.0/admin/orcl/bdump
    core_dump_dest=D:\oracle\product\10.2.0/admin/orcl/cdump
    user_dump_dest=D:\oracle\product\10.2.0/admin/orcl/udump
    # File Configuration
    control_files=("D:\oracle\product\10.2.0\oradata\orcl\control01.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control02.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control03.ctl")
    db_recovery_file_dest=D:\oracle\product\10.2.0/flash_recovery_area
    db_recovery_file_dest_size=2147483648
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.2.0.1.0
    # Processes and Sessions
    processes=150
    # SGA Memory
    sga_target=287309824
    # Security and Auditing
    audit_file_dest=D:\oracle\product\10.2.0/admin/orcl/adump
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=orclXDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=95420416
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    and the total physical memory - 1037864
    Available - 206124
    Kindly explain why the database is running slowly. Please tell me which parameters I should change in init.ora so that database performance improves.

    Is only Oracle running slow?
    Are some queries running slow?
    I think that you might not be able to increase performance
    by changing only Oracle parameters.
    What kind of programs and services are running on your Windows?
    Are they disturbing Oracle while it runs?
    Please check them first.
    Oops, I'm not a native speaker, so I may have made mistakes in my wording.
    Sorry.
    Message was edited by:
            ushitaki
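    Rather than guessing at init.ora parameters, a first diagnostic step could be to see where the instance actually waits (a sketch using a standard 10g dictionary view; nothing here is specific to this system):

        -- Top non-idle wait events since instance startup:
        select event, total_waits, time_waited
          from v$system_event
         where wait_class <> 'Idle'
         order by time_waited desc;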
