Reducing db_cache_size

Hi,
I am trying to reduce db_cache_size to zero (Oracle 10g on SuSE 9). To that end, I set SGA_TARGET to 0. After issuing alter system set db_cache_size=0; I got the error message
ORA-00383: DEFAULT cache for blocksize 8192 cannot be reduced to zero
The manual says to take all tablespaces that use this block size offline and then retry the operation. My problem is that SYSTEM, TEMP, and UNDOTBS1 cannot be taken offline (at least not with the Enterprise Manager GUI).
Can anybody help me set db_cache_size to 0? Thanks in advance!

Hi,

That is a very strange way to tune a database. Will only one user be connecting to your DB?
Do you know what the cache size is for? I'm not sure what value would suit you: 4 GB seems enormous, and 16 MB seems too small.
Try leaving this parameter unset, and Oracle will choose the default value for you; that is often a good idea (at least 4 MB * number of CPUs, rounded up to granule boundaries).
Read the following documentation to help you choose a sensible value: Configuring and Using the Buffer Cache.
But I think you first need to read the definition of the buffer cache: Database Buffer Cache.

Nicolas.
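
As a first check before any resize, here is a minimal sketch (assuming 10g, where the V$SGAINFO view exists) to see the granule size and the current size of the buffer cache that the ORA-00383 message refers to:

SQL> SELECT name, bytes
     FROM   v$sgainfo
     WHERE  name IN ('Granule Size', 'Buffer Cache Size');

Note that because SYSTEM, TEMP, and the undo tablespace can never go offline, the DEFAULT cache for the database block size can never actually reach zero.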

Similar Messages

  • Reduce buffer busy waits

    Can you please suggest how to reduce the number of buffer busy waits for the query below?
    In the two traces below, buffer busy waits were waited on 11091 and 13160 times respectively.
    INSERT INTO RPM_CLEARANCE (CLEARANCE_ID, CLEARANCE_DISPLAY_ID, STATE,
    REASON_CODE, CLEARANCE_RESET_ID, RESET_IND, ITEM, ZONE_ID, LOCATION,
    ZONE_NODE_TYPE, EFFECTIVE_DATE, OUT_OF_STOCK_DATE, RESET_DATE, CHANGE_TYPE,
    CHANGE_AMOUNT, CHANGE_PERCENT, CHANGE_CURRENCY, VENDOR_FUNDED_IND,
    CREATE_DATE, CREATE_ID, APPROVAL_DATE, APPROVAL_ID, TSL_EVENT_REF,
    TSL_MARKDOWN_REF, TSL_EVENT_PHASE, TSL_COVER_GROUP, TSL_END_DATE,
    TSL_EVENT_POS_IND, TSL_EVENT_SEL_IND, TSL_HOPOS_TEMPLATE_ID )
    VALUES
    (:B23 , 'reset:'||:B22 , :B21 , :B20 , NULL, '1', :B19 , :B18 , :B17 , :B16 ,
    :B15 , :B15 - 1, NULL, :B14 , :B13 , :B12 , :B11 , '0', :B10 , :B9 , :B10 ,
    :B9 , :B8 , :B7 , :B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
    call       count      cpu  elapsed  disk   query   current     rows
    Parse       1849     0.06     0.05     0       0         0        0
    Execute  1895091  2539.06  2606.23    32  119694  31084693  1895091
    Fetch          0     0.00     0.00     0       0         0        0
    total    1896940  2539.12  2606.28    32  119694  31084693  1895091
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 202 (recursive depth: 1)
    Elapsed times include waiting on following events:
      Event waited on                            Times Waited   Max. Wait  Total Waited
      ----------------------------------------   ------------  ----------  ------------
      db file sequential read                              32        0.01          0.22
      buffer busy waits                                 11091        0.02          0.29
      latch: cache buffers chains                          30        0.00          0.00
      enq: TX - index contention                          563        0.01          0.11
      log file switch completion                           15        0.06          0.42
      latch: In memory undo latch                           7        0.00          0.00
      buffer deadlock                                      88        0.00          0.00
      cursor: pin S                                        59        0.00          0.00
      cursor: pin S wait on X                               1        0.00          0.00
    INSERT INTO RPM_CLEARANCE (CLEARANCE_ID, CLEARANCE_DISPLAY_ID, STATE,
    REASON_CODE, CLEARANCE_RESET_ID, RESET_IND, ITEM, ZONE_ID, LOCATION,
    ZONE_NODE_TYPE, EFFECTIVE_DATE, OUT_OF_STOCK_DATE, RESET_DATE, CHANGE_TYPE,
    CHANGE_AMOUNT, CHANGE_PERCENT, CHANGE_CURRENCY, VENDOR_FUNDED_IND,
    CREATE_DATE, CREATE_ID, APPROVAL_DATE, APPROVAL_ID, TSL_EVENT_REF,
    TSL_MARKDOWN_REF, TSL_EVENT_PHASE, TSL_COVER_GROUP, TSL_END_DATE,
    TSL_EVENT_POS_IND, TSL_EVENT_SEL_IND, TSL_HOPOS_TEMPLATE_ID )
    VALUES
    (:B23 , 'reset:'||:B22 , :B21 , :B20 , NULL, '1', :B19 , :B18 , :B17 , :B16 ,
    :B15 , :B15 - 1, NULL, :B14 , :B13 , :B12 , :B11 , '0', :B10 , :B9 , :B10 ,
    :B9 , :B8 , :B7 , :B6 , :B5 , :B4 , :B3 , :B2 , :B1 )
    call       count      cpu  elapsed  disk   query   current     rows
    Parse        575     0.02     0.01     0       0         0        0
    Execute  1066687  1460.30  1478.77     0  121065  17556349  1066687
    Fetch          0     0.00     0.00     0       0         0        0
    total    1067262  1460.32  1478.79     0  121065  17556349  1066687
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 202 (recursive depth: 1)
    Elapsed times include waiting on following events:
      Event waited on                            Times Waited   Max. Wait  Total Waited
      ----------------------------------------   ------------  ----------  ------------
      latch: cache buffers chains                          28        0.01          0.01
      buffer busy waits                                 13160        0.07          0.46
      enq: TX - index contention                          522        0.00          0.09
      buffer deadlock                                     108        0.00          0.00
      latch: In memory undo latch                          17        0.00          0.00
      cursor: pin S                                         8        0.00          0.00
      log file switch completion                            2        0.05          0.07
      cursor: pin S wait on X                               1        0.00          0.00
    ********************************************************************************
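
    (A hedged diagnostic sketch, not from the original post: V$WAITSTAT breaks buffer busy waits down by block class, which helps distinguish data-block contention from undo or segment-header contention before reaching for memory parameters.)

    SQL> SELECT class, count, time
         FROM   v$waitstat
         ORDER  BY time DESC;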

    To reduce buffer busy waits, you need to increase the DB_CACHE_SIZE parameter value.
    Try the query below:
    14:40:23 SQL> select name, size_for_estimate, size_factor, estd_physical_reads, buffers_for_estimate
                  from v$db_cache_advice order by 4;
    NAME    SIZE_FOR_ESTIMATE SIZE_FACTOR ESTD_PHYSICAL_READS BUFFERS_FOR_ESTIMATE
    DEFAULT               960       1.875              252983               119760
    DEFAULT               912      1.7813              268597               113772
    DEFAULT               864      1.6875              269415               107784
    DEFAULT               816      1.5938              269673               101796
    DEFAULT               768         1.5              270449                95808
    DEFAULT               720      1.4063              270923                89820
    DEFAULT               672      1.3125              272107                83832
    DEFAULT               624      1.2188              276651                77844
    DEFAULT               576       1.125              282272                71856
    DEFAULT               528      1.0313              308869                65868
    DEFAULT               512           1              334346                63872
    DEFAULT               480       .9375              411617                59880
    DEFAULT               432       .8438              467955                53892
    DEFAULT               384         .75              520223                47904
    DEFAULT               336       .6563              575829                41916
    DEFAULT               288       .5625              628226                35928
    DEFAULT               240       .4688              670286                29940
    DEFAULT               192        .375              725289                23952
    DEFAULT               144       .2813              784512                17964
    DEFAULT                96       .1875              921481                11976
    DEFAULT                48       .0938             1948144                 5988
    21 rows selected.
    You will get a result like the above; the SIZE_FOR_ESTIMATE column shows values in MB.
    Then, from the result, choose a buffer cache size that gives close to the minimum estimated physical reads while still fitting within your available physical memory, and set it (see the sketch below).
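
    For example, picking a hypothetical size from the advisory output above (adjust to your own results; SCOPE = BOTH assumes the instance uses an spfile):

    SQL> ALTER SYSTEM SET db_cache_size = 912M SCOPE = BOTH;

    With SGA_TARGET set, DB_CACHE_SIZE acts as a lower bound for the auto-tuned cache rather than a fixed size.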

  • Slow query - db_cache_size ?

    Hi,
    Oracle 9.2.0.5.0 ( solaris )
    I've got a query which, when run on the production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) takes about a tenth of the time. I have confirmed that we get the same plan on both machines.
    The only thing I can pin it down to is that in production I'm seeing many more "db file sequential read" wait events. Can I assume this is due to the blocks not being in, or staying in, the cache?
    When running on preprod, the hit ratio for the query is 0.90+; on production it drops to 0.70-0.80 (as per the query below).
    I have plenty of memory available on the machine; would it be wise to size up the caches (db_cache_size, db_keep_cache_size, db_recycle_cache_size)?
       SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
         FROM   v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
                v$sesstat P3, v$statname N3
         WHERE  N1.name = 'db block gets'
         AND    P1.statistic# = N1.statistic#
         AND    P1.sid = &sid
         AND    N2.name = 'consistent gets'
         AND    P2.statistic# = N2.statistic#
         AND    P2.sid = P1.sid
         AND    N3.name = 'physical reads'
         AND    P3.statistic# = N3.statistic#
         AND    P3.sid = P1.sid
    PRE-PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows   
      Parse        1      0.64       0.64          0          0          0           0      
      Execute      1      0.00       0.00          0          0          0           0      
      Fetch        2    186.92     329.88     162174    5144281          5           1      
      total        4    187.56     330.53     162174    5144281          5           1      
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                    160098        1.44        162.52
        db file scattered read                          1        0.00          0.00
        direct path write                              27        0.66          3.36
        direct path read                               97        0.00          0.02
        SQL*Net message from client                     2      985.79        985.79
    PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows
      Parse        1      2.41       2.34         79         16          0           0  
      Execute      1      0.00       0.00          0          0          0           0  
      Fetch        2    844.76   12305.06    1507519    5226663          0           1  
      total        4    847.17   12307.41    1507598    5226679          0           1  
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                   1502104        4.40      11849.13
        direct path write                             361        0.57          3.06
        direct path read                              361        0.05          0.88
        buffer busy waits                              36        0.02          0.17
        latch free                                      5        0.01          0.01
        log buffer space                                2        1.00          1.37
        SQL*Net message from client                     2      687.95        687.95
      Suggestions for further investigation more than welcome.

    user12044475 wrote:
    Hi,
    Oracle 9.2.0.5.0 ( solaris )
    I've got a query which, when run on the production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) takes about a tenth of the time. I have confirmed that we get the same plan on both machines.
    The only thing I can pin it down to is that in production I'm seeing many more "db file sequential read" wait events. Can I assume this is due to the blocks not being in, or staying in, the cache?
    There are more physical reads, and the average read time is longer. This may simply be a reflection of the fact that other people are working on the production database at the same time and (a) kicking your data out of the cache and (b) causing you to queue at the disc as they do their I/O. A larger cache MIGHT protect your data a little longer, and MAY reduce their I/O at the same time so that the I/Os are faster - but we have no idea what side effects might then appear.
    It's also worth considering whether you did something as you transferred the data from production to pre-production that helped to improve query performance. (As a simple example, an export/import could have eliminated a lot of row migration - and the nature of your plan means you MIGHT be suffering a lot of excess I/O from "table fetch continued row".) So, how did you get the data from production to test, how long ago, what's happened to it since, and do you have any session statistics taken as you ran the two queries?
    Since your execution plan (prediction) is a long way off the actual run time, though (even on the pre-production system), it's probably more important to work out whether you can make your query much more efficient before you make any dramatic changes to the system. I notice that you have three existence subqueries that appear at the end of the plan - such subqueries wreck the optimizer's arithmetic in your version of Oracle and can make it do very silly things. (See for example this blog note: http://jonathanlewis.wordpress.com/2006/11/08/subquery-selectivity )
    The effect of the subqueries may (for example) be why you have a full tablescan on the second table in a nested loop join at one point in your query. The expectation of a vastly reduced number of rows may be why you are seeing nested loops all over the place when (possibly) a couple of hash joins would be more appropriate.
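    (A hedged sketch for the row-migration point above: if this statistic climbs sharply while the query runs, "table fetch continued row" overhead is part of the story.)

    SQL> SELECT name, value
         FROM   v$sysstat
         WHERE  name = 'table fetch continued row';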
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    +"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
    Isaac Asimov

  • Reduce downtime required for support patch application

    Dear Experts,
    I would like to know if there are any tips / notes we should follow when applying support patches to reduce the downtime window (during patch application the system is not usable for normal users): for example, any temporary parameter changes to the Oracle RDBMS which we can activate before applying a support patch and then revert once the patches are loaded for normal day-to-day operations.
    Is there any note that covers specific settings which could enhance the performance of the support patch application process on the Oracle RDBMS?
    Regards,
    RR

    To my understanding, if I select "downtime minimized", some phases of the patch application can be carried out while users are online, and for the others SPAM notifies me of the required downtime, so we just need to lock the users and start the remaining import phases.
    Yes, this is correct; theoretically you can apply single patches with "downtime minimized" one after another. But this obviously does not make a lot of sense, because you would then have several distinct downtimes, so applying one queue containing all packages is the way to go.
    Regarding your questions on the benefits, I am afraid the answer is: it depends. As with most performance tuning options, you will have to test carefully whether they work.
    For db_cache_size, consider:
    - Is my db_cache_size smaller than 1000M?
    - Do I have enough free memory on my server?
    - Am I seeing a low buffer hit ratio while applying support packages?
    If the answer to all three is yes, then I recommend you increase it to 2000M; it is easy to do (see the sketch below). But if you don't have free memory on the server, you will severely hurt performance by causing paging. And if you don't have a low buffer hit ratio (or you have lightspeed disks), the tuning will not speed anything up.
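    A minimal sketch of that change (hypothetical values from the discussion above; assumes a release where db_cache_size is dynamic, and SCOPE = MEMORY keeps the change temporary):

    ALTER SYSTEM SET db_cache_size = 2000M SCOPE = MEMORY;
    -- ... apply the support package queue ...
    ALTER SYSTEM SET db_cache_size = 1000M SCOPE = MEMORY;  -- revert to your original value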
    As you see, you have options, but there isn't a single speed button that solves your problem.
    Regards, Michael

  • Reduced SGA_TARGET, but SGA size not changing?

    I reduced the sga_target from 1536M to 512M:
    alter system set sga_target = 500M scope = memory;
    System altered.
    select VERSION from v$instance;
    VERSION
    10.2.0.3.0
    show parameter sga
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 1536M
    sga_target                           big integer 512M
    But SHOW SGA still reports the original total:
    show sga
    Total System Global Area 1610612736 bytes
    Fixed Size                   2030456 bytes
    Variable Size             1509950600 bytes
    Database Buffers            83886080 bytes
    Redo Buffers                14745600 bytes
    Why is that, given that it's a dynamic parameter?
    Thanks a lot for any help.

    I would suggest you check v$sgastat to find out the exact SGA memory you are currently using, instead of relying on SHOW SGA, when you have the SGA_MAX_SIZE and SGA_TARGET initialization parameters set.
    Below is a sample output from one of my test DBs. As you can see, my SGA size is only about 1 GB.
    SHOW SGA reports 2 GB because I have set SGA_MAX_SIZE to 2 GB (which only means that I can grow my SGA up to 2 GB; it need not be my current SGA size).
    You can try increasing or decreasing SGA_TARGET and check memory usage at the OS level to see the difference.
    SQL>show parameter sga
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 2000M
    sga_target                           big integer 1008M
    SQL>show sga
    Total System Global Area 2087780352 bytes
    Fixed Size                  2155336 bytes
    Variable Size            1744833720 bytes
    Database Buffers          318767104 bytes
    Redo Buffers               22024192 bytes
    SQL> select name, round(sum(mb), 1) mb
           from (select case when name = 'buffer_cache' then 'db_cache_size'
                             when name = 'log_buffer'   then 'log_buffer'
                             else pool
                        end name,
                        bytes/1024/1024 mb
                   from v$sgastat)
          group by name;
    NAME                  MB
    db_cache_size        304
    java pool            128
    large pool            16
    log_buffer            21
    shared pool          528
                         2.1
    6 rows selected.
    SQL> -- V$SGA_DYNAMIC_FREE_MEMORY: Information about the amount of SGA memory available for future dynamic SGA resize operations.
    SQL>select * from V$SGA_DYNAMIC_FREE_MEMORY;
    CURRENT_SIZE
    ------------
      1040187392

    - Krishna
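
    (A hedged companion query: V$SGA_DYNAMIC_COMPONENTS shows the per-component sizes that SGA_TARGET is actually distributing, which complements the V$SGASTAT breakdown above.)

    SQL> SELECT component, current_size/1024/1024 AS mb
         FROM   v$sga_dynamic_components
         ORDER  BY current_size DESC;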

  • How to reduce redo space wait

    os:x86_64 x86_64 x86_64 GNU/Linux
    oracle:9.2.0.6
    running : Data guard
    Problem : Redo space wait is very high
    Init.ora parameters:
    *.background_dump_dest='/u01/app/oracle/admin/PBPR01/bdump'
    *.compatible='9.2.0'
    *.control_files='/s410/oradata/PBPR01/control01.ctl','/s420/oradata/PBPR01/control02.ctl','/s430/oradata/PBPR01/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/admin/PBPR01/cdump'
    *.cursor_space_for_time=true
    *.db_block_size=8192
    *.db_cache_size=576000000
    *.db_domain='cc.com'
    *.db_file_multiblock_read_count=16
    *.db_files=150
    *.db_name='PBPR01'
    *.db_writer_processes=1
    *.dbwr_io_slaves=2
    *.disk_asynch_io=false
    *.fast_start_mttr_target=1800
    *.java_pool_size=10485760
    *.job_queue_processes=5
    *.log_archive_dest_1='LOCATION=/s470/oraarch/PBPR01'
    *.log_archive_dest_3='service=DR_PBPR01 LGWR ASYNC=20480'
    *.log_archive_format='PBPR01_%t_%s.arc'
    *.log_archive_start=true
    *.log_buffer=524288
    *.log_checkpoints_to_alert=true
    *.max_dump_file_size='500000'
    *.object_cache_max_size_percent=20
    *.object_cache_optimal_size=512000
    *.open_cursors=500
    *.optimizer_mode='CHOOSE'
    *.processes=500
    *.pga_aggregate_target=414187520
    *.replication_dependency_tracking=false
    *.undo_management=AUTO
    *.undo_retention=10800
    *.undo_tablespace=UNDOTBS1
    *.undo_suppress_errors=TRUE
    *.session_cached_cursors=20
    *.shared_pool_size=450000000
    *.user_dump_dest='/u01/app/oracle/admin/PBPR01/udump'
    SGA:
    SQL> show sga
    Total System Global Area 1108839248 bytes
    Fixed Size                   744272 bytes
    Variable Size             520093696 bytes
    Database Buffers          587202560 bytes
    Redo Buffers                 798720 bytes
    SQL>
    I created log groups with 2 members each, sized 25 MB.
    The redo log space requests show as:
    SQL> SELECT name, value
         FROM   v$sysstat
         WHERE  name = 'redo log space requests';

    NAME                        VALUE
    redo log space requests    152797

    This value runs between 140000 and 160000.
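    (A hedged side check, not in the original post: with 25 MB logs, a high hourly log switch rate often accompanies redo log space requests.)

    SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
                COUNT(*) AS switches
         FROM   v$log_history
         WHERE  first_time > SYSDATE - 1
         GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
         ORDER  BY 1;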
    Some of the trace file output:
    [oracle@hipclora6b bdump]$ cat PBPR01_lns0_23689.trc
    Dump file /u01/app/oracle/admin/PBPR01/bdump/PBPR01_lns0_23689.trc
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    ORACLE_HOME = /u01/app/oracle/product/9.2.0.6
    System name: Linux
    Node name: hipclora6b.clickipc.hipc.clickcommerce.com
    Release: 2.4.21-37.EL
    Version: #1 SMP Wed Sep 7 13:32:18 EDT 2005
    Machine: x86_64
    Instance name: PBPR01
    Redo thread mounted by this instance: 1
    Oracle process number: 34
    Unix process pid: 23689, image: [email protected]
    *** SESSION ID:(82.51071) 2008-04-14 23:40:04.122
    *** 2008-04-14 23:40:04.122 46512 kcrr.c
    NetServer 0: initializing for LGWR communication
    NetServer 0: connecting to KSR channel
    : success
    NetServer 0: subscribing to KSR channel
    : success
    *** 2008-04-14 23:40:04.162 46559 kcrr.c
    NetServer 0: initialized successfully
    *** 2008-04-14 23:40:04.172 46819 kcrr.c
    NetServer 0: Request to Perform KCRRNSUPIAHM
    NetServer 0: connecting to remote destination DR_PBPR01
    *** 2008-04-14 23:40:04.412 46866 kcrr.c
    NetServer 0: connect status = 0
    A sample alert log:
    Thread 1 advanced to log sequence 275496
    Current log# 1 seq# 275496 mem# 0: /s420/oradata/PBPR01/redo01a.log
    Current log# 1 seq# 275496 mem# 1: /s420/oradata/PBPR01/redo01b.log
    Tue Apr 15 09:10:03 2008
    ARC0: Evaluating archive log 4 thread 1 sequence 275495
    ARC0: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
    ARC0: Beginning to archive log 4 thread 1 sequence 275495
    Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275495.arc'
    Tue Apr 15 09:10:03 2008
    Beginning global checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
    Completed checkpoint up to RBA [0x43428.2.10], SCN: 0x0000.3c1594fa
    Completed checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
    Tue Apr 15 09:10:03 2008
    ARC0: Completed archiving log 4 thread 1 sequence 275495
    Tue Apr 15 09:29:15 2008
    LGWR: Completed archiving log 1 thread 1 sequence 275496
    Creating archive destination LOG_ARCHIVE_DEST_3: 'DR_PBPR01'
    LGWR: Beginning to archive log 5 thread 1 sequence 275497
    Beginning log switch checkpoint up to RBA [0x43429.2.10], SCN: 0x0000.3c15bc33
    Tue Apr 15 09:29:16 2008
    ARC1: Evaluating archive log 1 thread 1 sequence 275496
    ARC1: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
    ARC1: Beginning to archive log 1 thread 1 sequence 275496
    Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275496.arc'
    Tue Apr 15 09:29:16 2008
    Thread 1 advanced to log sequence 275497
    Current log# 5 seq# 275497 mem# 0: /s420/oradata/PBPR01/redo05a.log
    Current log# 5 seq# 275497 mem# 1: /s420/oradata/PBPR01/redo05b.log
    Tue Apr 15 09:29:16 2008
    ARC1: Completed archiving log 1 thread 1 sequence 275496
    Log file sizes:
    SQL> select group#, members, sum(bytes)/(1024*1024)
         from   v$log
         group  by group#, members;

    GROUP#  MEMBERS  SUM(BYTES)/(1024*1024)
         1        2                      25
         2        2                      25
         3        2                      25
         4        2                      25
         5        2                      25
    Please give your views on what can be done to reduce the redo space waits.

    Below are my suggestions:
    Increase the log buffer to between 5 MB and 15 MB.
    Defer the commits: COMMIT_WRITE=NOWAIT,BATCH.
    You can also increase your redo log file size, but read the following first.
    Sizing Redo Logs with Oracle 10g
    Oracle has introduced a Redo Logfile Sizing Advisor that recommends a size for your redo logs that limits excessive log switches, incomplete and excessive checkpoints, log archiving issues, DBWR performance problems, and excessive disk I/O. All these issues cause transactions to bottleneck within redo and degrade performance.
    While many DBAs' first thought is the throughput of the transaction base, few consider the recovery time required in relation to the amount of redo generated or the actual size of the redo log groups. With the introduction of Oracle's Mean Time to Recovery features, DBAs can specify through the FAST_START_MTTR_TARGET initialization parameter how long a crash recovery should take. Oracle then tries its best to issue the proper checkpoints during normal operation to meet this target.
    Since the size of the redo logs and the checkpointing of data play a key role in Oracle's ability to recover within the desired time frame, Oracle now uses the value of FAST_START_MTTR_TARGET to suggest an optimal redo log size. In fact, setting FAST_START_MTTR_TARGET is what triggers the redo logfile sizing advisor; if you do not set it, Oracle will not provide a suggestion for your redo logs. If you have no hard requirement for recovery time, you should at least set it to its maximum value of 3600 seconds (one hour) so you can take advantage of the advisory. After setting FAST_START_MTTR_TARGET, a DBA need only query the V$INSTANCE_RECOVERY view for the OPTIMAL_LOGFILE_SIZE column value (in MB) and then rebuild the redo log groups to this recommendation.
    A simple query to show the optimal size for the redo logs:
    SQL> SELECT optimal_logfile_size
         FROM   v$instance_recovery;

    OPTIMAL_LOGFILE_SIZE
                      64
    A few notes about setting FAST_START_MTTR_TARGET:
    • Specify a value in seconds (0-3600) within which you wish Oracle to perform crash recovery.
    • It conflicts with LOG_CHECKPOINT_INTERVAL: since LOG_CHECKPOINT_INTERVAL asks Oracle to checkpoint after a specified number of redo blocks has been written, while FAST_START_MTTR_TARGET basically attempts to size the redo logs so that a checkpoint occurs when they switch, the two parameters work against each other. You will need to unset LOG_CHECKPOINT_INTERVAL if you wish to use the redo log sizing advisor and have checkpoints occur on log switches. This is how it was recommended back in the v7 days, and I really can't see any reason for anything else.
    • It is overridden by LOG_CHECKPOINT_TIMEOUT: LOG_CHECKPOINT_TIMEOUT controls the time between checkpoints when neither a log switch nor the amount of redo generated has triggered one. Since our focus is now on Mean Time to Recovery (MTTR), this parameter is no longer of concern; we are asking Oracle to decide when to checkpoint based on our crash recovery requirements.
    • It replaces FAST_START_IO_TARGET: the FAST_START_IO_TARGET parameter is deprecated, and you should switch over to FAST_START_MTTR_TARGET.
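    A minimal sketch of the log buffer change suggested above (log_buffer is a static parameter, so this assumes an spfile and needs an instance restart; 10 MB is just an example within the 5-15 MB range):

    ALTER SYSTEM SET log_buffer = 10485760 SCOPE = SPFILE;
    -- restart the instance for the new value to take effect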
    Thanks

  • Reducing physical RAM consumption by oracle instance in my laptop

    I want to reduce the amount of RAM consumed by the Oracle instance on my machine.
    OS: Windows Server 2003
    DB version: 10gR2
    So I tried to reduce sga_max_size to 200 MB and then restarted the instance, but sga_max_size remains the same. What am I doing wrong?
    SQL> show parameter sga_max_size
    NAME                                 TYPE        VALUE
    sga_max_size                         big integer 584M
    SQL> alter system set sga_max_size=200m scope=spfile;
    System altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area  612368384 bytes
    Fixed Size                  1250428 bytes
    Variable Size             281021316 bytes
    Database Buffers          322961408 bytes
    Redo Buffers                7135232 bytes
    Database mounted.
    Database opened.
    SQL> show parameter sga_max_size
    NAME                                 TYPE        VALUE
    sga_max_size                         big integer 584M

    You can reduce SGA_MAX_SIZE as well as increase it, but doing so requires an instance restart. The reason you cannot change it here is that you have SGA_TARGET set to a higher value, or the total of these parameters is larger than your new value for SGA_MAX_SIZE:
    shared_pool_size
    large_pool_size
    java_pool_size
    db_cache_size
    By the way, if you want to reclaim memory for your OS, you shouldn't worry about SGA_MAX_SIZE; decrease SGA_TARGET instead (see the sketch below).
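    A hedged sketch of the order of operations if you do want to lower the hard limit as well (example values; assumes an spfile, that the individual pool parameters above stay below the new limit, and that sga_target never exceeds sga_max_size):

    ALTER SYSTEM SET sga_target   = 200M SCOPE = SPFILE;
    ALTER SYSTEM SET sga_max_size = 256M SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP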

  • Performance reduced under Bootcamp after EFI update on Macbook Pro Retina

    After I upgraded the EFI on my Retina MacBook Pro, performance under heavy load in Boot Camp (Windows 7) became unusable: both the CPU and GPU clocks drop. In the screenshot of GPU clock monitoring from Windows you can see the GPU running at 270 MHz most of the time, trying to climb back to 725 MHz (yes, not 900 MHz!) while a graphics benchmark is running; at the same time the CPU runs at about 1.1 GHz. I noticed the machine runs normally for 10-20 seconds, then the CPU and GPU clocks start dropping. It's almost as if SpeedStep were programmed backwards: my CPU constantly runs at 3.1-3.2 GHz, then as soon as a game loads it drops to 1.2 GHz. I've lost track of how many times I have reset the SMC. Resetting the SMC and PRAM, reinstalling Windows using Boot Camp, and even erasing the entire drive and starting over from Mac OS network recovery did not solve the problem. All the recent games (COD MW3, StarCraft II, BF3, D3) now run at about 10 fps, although I played them with no problem before the EFI update.
    this is killing me! please help.
    I'm in NY, and in this season I don't think it is an overheating problem. When I check the temperature it stays around 80 C (fans at about 4k rpm), though sometimes under heavy load in Mac OS the CPU reaches about 90 degrees with only a 3k rpm fan kicking in.
    I searched online, and it seems some other people have the same problem with the rMBP or the MBA 2012.
    Does anyone know whether, if I take my MacBook Pro to the store, the people at the Genius Bar will help me with this "Windows" problem?
    thanks!

    Hey Shadowyani, please take the time to file a bug report at http://developer.apple.com/bugreporter/ . The machine is useless for me as it is now because I develop and use very GPU/CPU-demanding scientific applications; after the update my realtime applications are completely useless, as they drop from something like 25 fps to 9 fps even on Mac OS X (no SMC reset fixed the issue). In Boot Camp the situation is worse: my software runs at 4 fps, and gaming is impossible as well.
    Please let Apple know about the problem and file a bug report, even if an SMC reset once in a while fixed the issue for you (this is not normal behaviour and must be fixed). We paid the price for cutting-edge technology.
    Apple should not downgrade the system after a few weeks; remember that most reviews and benchmarks were done BEFORE the EFI update, so people are being misled by Apple in this sense.

  • A simple and free way of reducing PDF file size using Preview

    Note: this is a copy and update of a 5 year old discussion in the Mac OS X 10.5 Leopard discussions which you can find here: https://discussions.apple.com/message/6109398#6109398
    This is a simple and free solution I found to reduce the file size of PDFs in OS X, without the high cost and awful UI of Acrobat Pro, and with acceptable quality. I still use it every day, although I now have Acrobat Pro as part of an Adobe Creative Cloud subscription.
    Since quite a few people have found it useful and keep asking questions about the download location and destination of the filters, which have changed since 2007, I decided to write this update and put it in this more current forum.
    Here is how to install it:
    Download the filters here: https://dl.dropboxusercontent.com/u/41548940/PDF%20compression%20filters%20%28Unzip%20and%20put%20in%20your%20Library%20folder%29.zip
    Unzip the downloaded file and copy the filters in the appropriate location (see below).
    Here is the appropriate location for the filters:
    This assumes that your startup disk's name is "Macintosh HD". If it is different, just replace "Macintosh HD" with the name of your startup disk.
    If you are running Lion or Mountain Lion (OS X 10.7.x or 10.8.x) then you should put the downloaded filters in "Macintosh HD/Library/PDF Services". This folder should already exist and contain files. Once you put the downloaded filters there, you should have for example one file with the following path:
    "Macintosh HD/Library/PDF Services/Reduce to 150 dpi average quality - STANDARD COMPRESSION.qfilter"
    If you are running an earlier version of OS X (10.6.x or earlier), then you should put the downloaded filters in "Macintosh HD/Library/Filters", and you should have, for example, one file with the following path:
    "Macintosh HD/Library/Filters/Reduce to 150 dpi average quality - STANDARD COMPRESSION.qfilter"
    Here is how to use it:
    Open a PDF file using Apple's Preview app,
    Choose Export (or Save As if you have an older version of Mac OS X) in the File menu,
    Choose PDF as a format
    In the "Quartz Filter" drop-down menu, choose a filter "Reduce to xxx dpi yyy quality"; "Reduce to 150 dpi average quality - STANDARD COMPRESSION" is a good trade-off between quality and file size
    Here is how it works:
    These are Quartz filters made with Apple's ColorSync Utility.
    They do two things:
    downsample images contained in a PDF to a target density such as 150 dpi,
    enable JPEG compression for those images with a low or medium setting.
    Which files does it work with?
    It works with most PDF files. However:
    It will generally work very well on unoptimized files such as scans made with the OS X scanning utility or PDFs produced via OS X printing dialog.
    It will not further compress well-optimized (already compressed) files, and for some files it will create larger files than the originals. This can happen in particular when a PDF contains optimizations other than image compression. There also seems to be a bug (reported to Apple) where, in certain circumstances, images in the target PDF are not JPEG compressed.
    What to do if it does not work for a file (target PDF is too big or even larger than the original PDF)?
    First, some good news: since you used a Save As or Export command, the original PDF is untouched.
    You can try another filter for a smaller size at the expense of quality.
    The year being 2013, it is now quite easy to send large files over the internet using Dropbox, yousendit.com, wetransfer.com, etc., and you can use these services to send your original PDF file.
    There are other ways of reducing the size of a PDF file, such as apps in the Mac App store, or online services such as the free and simple http://smallpdf.com
    What else?
    Feel free to use/distribute/package in any way you like.

    Thanks ioscar.
    The original link should be back online soon.
    I believe this is a Dropbox error about the traffic generated by my Dropbox shared links.
    I use Dropbox mainly for my business and I am pretty upset by this situation.
    Since the filters themselves are about 5 KB, I doubt they are the cause of this Dropbox misbehavior!
    Anyway, I submitted a support ticket to Dropbox, and hope everything will be back to normal very soon.
    In the meantime, if you get the same error as ioscar when trying to download them, you can use the link in the blog posting he mentions.
    This is off topic, but for those interested, here is my understanding of what happened with Dropbox.
    I did a few tests yesterday with large (up to 4GB) files and Dropbox shared links, trying to find the best way to send a 3 hour recording from French TV - French version of The Voice- to a friend's 5 year old son currently on vacation in Florida, and without access to French live or catch up TV services. One nice thing I found is that you can directly send the Dropbox download URL (the one from the Download button on the shared link page) to an AppleTV using AirFlick and it works well even for files with a large bitrate (except of course for the Dropbox maximum bandwidth per day limit!). Sadly, my Dropbox shared links were disabled before I could send anything to my friend.
    I may have used  a significant amount of bandwidth but nowhere near the 200GB/day limit of my Dropbox Pro account.
    I see 2 possible reasons to Dropbox freaking out:
    - My Dropbox Pro account is wrongly identified as a free account by Dropbox. Free Dropbox accounts have a 20GB/day limit, and it is possible that I reached this limit with my testing; I have a fast 200Mb/s internet connection.
    - Or Dropbox miscalculates used bandwidth, counting the total size of the file for every download begun, and I started a lot of downloads, and skipped to the end of the video a lot of times on my Apple TV.

  • I have just sought to update my Lightroom and am now unable to access the Develop function, and I get a note stating that I have reduced functionality. What is this about, and how do I get my product back?

    I have just sought to update my Lightroom and am now unable to access the Develop function, and I get a note stating that I have reduced functionality. What is this about, and how do I get my product back?

    Hi there
    I have version 5.7, and every time I opened it I was told that updates were available and to click on the icon to access them. Instead it just took me to the Adobe page, with nowhere visible to update. I then sought to download Lightroom CC, and this is when I could not access the Develop section due to reduced functionality. It was apparent that my photos had been put in CC, but there was no way to access them unless I wanted to subscribe.
    I have since remedied the problem, as my original Lightroom 5.7 icon is still available on the desktop, and I have gone back to that. I do feel that this is a bit of a rip-off and an unnecessary waste of my time, though.
    Thank you for your prompt reply, by the way.
    Carlo

  • Best practice to reduce downtime for a full load in the production system

    Hi guys,
    We have options like "initialization without data transfer" and "initialization with data transfer".
    To reduce the downtime of the production system for loading the setup tables, I will first trigger an InfoPackage for initialization without data transfer, so that the delta pointer is set on the table and, from that point onwards, any added record becomes a delta record. I will then trigger an InfoPackage for the delta to bring the delta records into BW; once the delta is successful, I will trigger an InfoPackage for a repair full request to bring all the historical data from the setup tables, so that the downtime of the production system is reduced.
    Please let me know your thoughts and correct me if I am wrong.
    Please also explain the "early delta initialization" option.
    Kind regards,
    Hari

    Hi,
    You have some incorrect information.
    An InfoPackage just loads data from the setup tables into the PSA.
    The setup tables need to be filled manually using the related transaction codes.
    I am assuming you are using an LO DataSource. In that case a source system lock is mandatory; otherwise you need to go with the early delta init option.
    Early delta initialization is useful for loading data into BW without downtime at the source: it sets the delta pointer and loads at the same time, according to your settings (init with or without data transfer).
    If the source system cannot be locked as the client requires, then it is better to go with the early delta init option.
    Thanks

  • My Firefox browser window will not stay in full-screen mode; it keeps shrinking, so I can't use the scroll bar on the side and have to constantly minimize and maximize to reach it. How can I stop this?

    Like I said, it happens every few minutes. It is annoying when you are on the internet for more than a few minutes, because you constantly have to either minimize the window or resize it manually with the button, then maximize it again to bring the scroll bar back into view.

    Delete the localstore.rdf file at the following location:
    C:\Users\<USERNAME>\AppData\Roaming\Mozilla\Firefox\Profiles\<PROFILE NAME>
    I did this and it fixed it for several days, but it broke again eventually.

  • Reducing Load Times / Design Architecture

    I'm designing a Flash site with a dynamic gallery (loading images from an XML file). What can I do to reduce the load time at the beginning, or limit loading to individual images?
    I've seen designs that load another SWF file on top of the current one. Is this feasible?
    Any details would be greatly appreciated.

    If loading the images is the goal of the file, then the quickest approach is to concentrate on the images. However you load them, they have to be loaded, so be sure they are optimized for the web. If the gallery's intent allows for it, load the images on request, using thumbnails or other button-like interfaces. Or load different sections at a time. Or load just a few at first, then load the rest in the background so there's something to look at while the rest load.
    I don't know what purpose you have in mind for loading another swf into the file, so I can't offer any ideas there.

  • Problem with gross requirement planning: PIR is not getting reduced

    Dear gurus,
    I am trying to use strategy 11 for a particular finished product (which has a BOM and routing).
    In the material master I have set MRP type PD, lot size EX, strategy group 11, mixed MRP 2, item category group NORM, and availability checking group 02.
    Then I manually created a PIR for the material in MD61 with requirement type BSF.
    Then I ran MRP for the material in MD02.
    Planned orders came (on the MD04 screen the gross requirement line also appeared as usual). I converted them to production orders and then posted the GR in MB31 with movement type 101. After the GR, the PIR should be reduced on the MD04 screen, but that is not happening. Where am I going wrong?
    Regards,
    Sandip Sarkar

    Dear Sandip Sarkar,
    This usually happens; run MRP again and check.
    Strategy 11 in summary:
    1. Sales order creation - no impact.
    2. Goods receipt - subtracts the quantity from the oldest planned independent requirement in demand management. For example, if the PIR is 100 and the goods receipt is 90, the PIR becomes 10 (withdrawal 90).
    3. Delivery - no impact, as the delivery is issued against the sales order.
    This strategy is particularly useful if you need to produce regardless of whether you have stock. For instance, steel or cement producers might use this strategy because they cannot shut down production; a blast furnace or a cement factory must continue to produce, even if this means producing to stock.
    You need to maintain the following master data for the finished product:
    Maintain strategy group 11 on the MRP screen.
    Set the Mixed MRP indicator to 2 on the MRP screen.
    Maintain the item category group (for example, NORM) on the Sales Organization screen.
    Maintain the Availability check field so that you perform an availability check without the replenishment lead time (checking group 02 in the standard system).
    Strategy 10: stock is taken into account: yes. Reduction of planned independent requirements takes place during goods issue for the delivery (discrete production).
    Strategy 11: stock is taken into account: no. Reduction of planned independent requirements takes place during goods receipt for a production order (discrete production), for a planned order (repetitive manufacturing), or for a purchase order (trading goods).

  • Does anyone know any apps or programs where you can edit voice memos, i.e. reduce the amount of static?

    Hey,
    So I decided to record some friends of mine as a joke. I was using the Voice Memos app on my iPhone 6. I thought I put the phone down close enough to pick up everything, but it didn't quite work out as well as I hoped: I ended up getting more static than voice. Does anyone know of a good app or program where I can edit this voice memo, reduce all that static, and raise their voices so they are audible?
    Hope y'all can help.

    Did you ever figure out how to fix this?
