Different full backup sizes of identical databases

Hello,
I am on Oracle 10GR2.
I have one database instance, approximately 50 GB in size. Today I created a second instance from the first using the RMAN duplication process, so now I have two similar DB instances, each about 50 GB.
What is strange to me is the size of the full LEVEL 0 backups of these databases.
The backup of the original database is approximately 22 GB, while the backup of the second (duplicated) instance is only 7 GB.
Can you explain why? And what should I do to the original database to get the same small backup size?
The RMAN command executed for the backup is: BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
Thank you

select sum(bytes)/1024/1024/1024 GB from dba_segments;
This query returns 6.79 GB on both instances.
I did not use UNTIL TIME for the duplication.
The RMAN settings are the same for both instances, and I don't use any compression.
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
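
One way to investigate is to compare, per datafile, how many blocks the LEVEL 0 backup actually wrote against each file's total size; RMAN skips blocks that have never been used, which is typically where such differences come from. A minimal sketch against V$BACKUP_DATAFILE (a standard view in 10g):

-- Compare blocks written by level 0 backups with each datafile's size;
-- a low percentage means RMAN skipped many never-used blocks.
select file#,
       datafile_blocks,                              -- total blocks in the datafile
       blocks,                                       -- blocks actually written to the backup
       round(blocks / datafile_blocks * 100, 1) pct_backed_up
  from v$backup_datafile
 where incremental_level = 0
 order by completion_time desc;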

Similar Messages

  • Time Machine gives different full backup sizes

    After this recent Time Machine update I am no longer able to back up. When Time Machine computes the backup it tells me that it is 320GB, which is larger than my backup drive, so it can't back up. However, when I go into the Time Machine preferences it computes the estimated backup size as 272.52GB. Before this Time Machine update was applied my backups were happening with no issues.

    Mark Trolley wrote:
    Thank you. They must have changed the amount of extra space required with this latest patch, because it was working fine up until that point.
    It's been 20% since the early days of Leopard. It may be a combination of things: the drive is clearly too small; while it varies widely depending on how you use your Mac, our general "rule of thumb" is that TM needs at least twice the space of the data it's backing up. If you're like most of us, the amount of data on your system has been growing. And apparently Time Machine is doing a full backup, so it just got beyond the capacity of the disk to hold it all.
    Mark Trolley wrote:
    Looks like I need a larger Time Machine disk.
    Yup. The good thing is, they continue to get less and less expensive.

  • How can I do a FULL Backup to Oracle 8i Database?

    Hi,
    We are using a product that uses Oracle 8i, and we are trying to perform a FULL backup of the database. I am not an Oracle person and not sure how to do this.
    Please also inform me on how to perform a FULL restore of the backed-up database. I would really appreciate it.

    Before I can properly reply to your post, I need to know what tools you have.
    What is your O.S.?
    What do you want to backup to? (Tape Drive/Hard Disk)
    What time frame requirements do you have?
    Are you running in Archive Log Mode? <-- (in case you do not know, log in with svrmgrl, connect internal, run ARCHIVE LOG LIST; and tell me what is reported on the top line of output.)
    Do you have requirements that the database must be Online at all times?
    Are you using 3rd party Backup Software, if so, which one(s)?
    I know these are a lot of questions, but it helps me help you. :)
    Rgds,
    Mark Brown
    SAP Japan
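
    For reference, a sketch of that archive-log check as it looks in an 8i Server Manager session (the output lines are illustrative; the first one is what to report):

    $ svrmgrl
    SVRMGR> connect internal
    Connected.
    SVRMGR> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled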

  • For Patching full backup of application and database

    Hi
    We are using Oracle 11i (11.5.10.2) with database version 10.2.0.3 on Windows 2003.
    Every time we patch, I usually take a full backup of the database and the application (DATA_TOP, DB_TOP, APPL_TOP, COMMON_TOP, ORA_TOP).
    But can I back up only DATA_TOP, APPL_TOP, COMMON_TOP and ORA_TOP for patching (without DB_TOP)?
    Thanks
    Regards
    AUH

    Hi,
    But can i take only DATA_TOP and APPL_TOP, COMMON_TOP, ORA_TOP for patching (without DB_TOP)?
    For application patches, you do not need to take a backup of the database ORACLE_HOME. Back up this ORACLE_HOME only if you apply database patches (using opatch or OUI).
    Regards,
    Hussein

  • Different optimizer paths between two identical databases

    Hello!
    I'm running into a problem with a query that is pretty amazing.  I have two different databases that I work with - Development and Acceptance.  According to my DBA, they are absolutely identical in every respect - including data.  Both of them gather statistics every night at the same time.
    On development my query executes beautifully - 0.6 seconds.
    On acceptance the query executes in 60 - 70 seconds.
    They both evaluate completely differently through an explain plan, as is obvious from the difference in performance. Here are the two explain plans for the query...
    GOOD
    explain plan for
    select e.employee_name,
           e.worker_id       id,
           e.vac_ent         "Vac<BR>Ent",
           e.vac_taken       "Vac<BR>Taken",
           e.vac_presched    "Vac<BR>PreSched",
           e.vac_remain      "Vac<BR>Remain",
           e.banked_hours    "Banked<BR>Hours",
           e.banked_taken    "Banked<BR>Hours<BR>Taken",
           e.banked_presched "Banked<BR>Hours<BR>Presched",
           e.banked_remain   "Banked<BR>Hours<BR>Remain",
           e.edo_earned      "EDO<BR>Hours<BR>Earned",
           e.edo_taken       "EDO<BR>Hours<BR>Taken",
           e.edo_presched    "EDO<BR>Hours<BR>Presched",
           e.edo_remain      "EDO<BR>Hours<BR>Remain",
           e.ado_earned      "ADO<BR>Hours<BR>Earned",
           e.ado_taken       "ADO<BR>Hours<BR>Taken",
           e.ado_presched    "ADO<BR>Hours<BR>Presched",
           e.ado_remain      "ADO<BR>Hours<BR>Remain"
      from tas.benefit_summary_curr_year_v e /* USESYSDATEFORSECURITY */
    where 1 = 1
       and (e.worker_id in
           (select worker_id
               from worker_cost_centre_v ecc2
              where ecc2.cost_centre = '100033'
                and ((ecc2.effective_date <= dtutil.todate('2013/10/08') and
                    ecc2.expiration_date >= dtutil.todate('2013/10/08')) or
                    (ecc2.effective_date <= dtutil.todate('2013/11/08') and
                    ecc2.expiration_date >= dtutil.todate('2013/11/08')) or
                    (ecc2.effective_date >= dtutil.todate('2013/10/08') and
                    ecc2.expiration_date <= dtutil.todate('2013/11/08')))))
       and pkg_taw_security.user_worker_access('CA17062',
                                               'TIMEKEEPER',
                                               e.worker_id,
                                               trunc(sysdate)) = 1
    union
    select 'ZZZTOTALS',
           sum(e.vac_ent),
           sum(e.vac_taken),
           sum(e.vac_presched),
           sum(vac_remain),
           sum(e.banked_hours),
           sum(e.banked_taken),
           sum(e.banked_presched),
           sum(e.banked_remain),
           sum(e.edo_earned),
           sum(e.edo_taken),
           sum(e.edo_presched),
           sum(e.edo_remain),
           sum(e.ado_earned),
           sum(e.ado_taken),
           sum(e.ado_presched),
           sum(e.ado_remain)
      from tas.benefit_summary_curr_year_v e
    where 1 = 1
       and (e.worker_id in
           (select worker_id
               from worker_cost_centre_v ecc2
              where ecc2.cost_centre = '100033'
                and ((ecc2.effective_date <= dtutil.todate('2013/10/08') and
                    ecc2.expiration_date >= dtutil.todate('2013/10/08')) or
                    (ecc2.effective_date <= dtutil.todate('2013/11/08') and
                    ecc2.expiration_date >= dtutil.todate('2013/11/08')) or
                    (ecc2.effective_date >= dtutil.todate('2013/10/08') and
                    ecc2.expiration_date <= dtutil.todate('2013/11/08')))))
       and pkg_taw_security.user_worker_access('CA17062',
                                               'TIMEKEEPER',
                                               e.worker_id,
                                               trunc(sysdate)) = 1
    order by 1;
    select * from table(dbms_xplan.display);
    GOOD PLAN_TABLE_OUTPUT
    Plan hash value: 432971565
    | Id  | Operation                            | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                        |     2 |   274 |    14  (43)| 00:00:01 |
    |   1 |  SORT UNIQUE                         |                        |     2 |   274 |    13  (70)| 00:00:01 |
    |   2 |   UNION-ALL                          |                        |       |       |            |          |
    |   3 |    HASH GROUP BY                     |                        |     1 |    66 |     6  (34)| 00:00:01 |
    |   4 |     NESTED LOOPS                     |                        |       |       |            |          |
    |   5 |      NESTED LOOPS                    |                        |     1 |    66 |     4   (0)| 00:00:01 |
    |   6 |       NESTED LOOPS                   |                        |     1 |    44 |     3   (0)| 00:00:01 |
    |*  7 |        TABLE ACCESS BY INDEX ROWID   | WORKER_COST_CENTRE_TBL |     1 |    29 |     2   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN             | WORKER_CC_CC_IDX       |    29 |       |     1   (0)| 00:00:01 |
    |*  9 |        INDEX RANGE SCAN              | BENEFIT_IND            |     1 |    15 |     1   (0)| 00:00:01 |
    |* 10 |       INDEX UNIQUE SCAN              | WORKER_PK              |     1 |       |     1   (0)| 00:00:01 |
    |  11 |      TABLE ACCESS BY INDEX ROWID     | WORKER_TBL             |     1 |    22 |     1   (0)| 00:00:01 |
    |  12 |    SORT AGGREGATE                    |                        |     1 |   208 |     7  (43)| 00:00:01 |
    |  13 |     VIEW                             | VM_NWVW_0              |     1 |   208 |     6  (34)| 00:00:01 |
    |  14 |      HASH GROUP BY                   |                        |     1 |    66 |     6  (34)| 00:00:01 |
    |  15 |       NESTED LOOPS                   |                        |       |       |            |          |
    |  16 |        NESTED LOOPS                  |                        |     1 |    66 |     5  (20)| 00:00:01 |
    |  17 |         NESTED LOOPS                 |                        |     1 |    44 |     4  (25)| 00:00:01 |
    |  18 |          SORT UNIQUE                 |                        |     1 |    29 |     2   (0)| 00:00:01 |
    |* 19 |           TABLE ACCESS BY INDEX ROWID| WORKER_COST_CENTRE_TBL |     1 |    29 |     2   (0)| 00:00:01 |
    |* 20 |            INDEX RANGE SCAN          | WORKER_CC_CC_IDX       |    29 |       |     1   (0)| 00:00:01 |
    |* 21 |          INDEX RANGE SCAN            | BENEFIT_IND            |     1 |    15 |     1   (0)| 00:00:01 |
    |* 22 |         INDEX UNIQUE SCAN            | WORKER_PK              |     1 |       |     1   (0)| 00:00:01 |
    |  23 |        TABLE ACCESS BY INDEX ROWID   | WORKER_TBL             |     1 |    22 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       7 - filter("X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/10/08') OR
                  "X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/11/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/11/08') OR
                  "X"."EFFECTIVE_DATE">="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE"<="DTUTIL"."TODATE"('2013/11/08'))
       8 - access("X"."COST_CENTRE"='100033')
       9 - access("X"."WORKER_ID"="X"."WORKER_ID")
           filter(TO_CHAR(INTERNAL_FUNCTION("X"."ACTIVITY_DATE"),'YYYY')>=TO_CHAR(SYSDATE@!,'YYYY') AND
                  "PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"X"."WORKER_ID",TRUNC(SYSDATE@!))=1)
      10 - access("X"."WORKER_ID"="X"."WORKER_ID")
      19 - filter("X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/10/08') OR
                  "X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/11/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/11/08') OR
                  "X"."EFFECTIVE_DATE">="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE"<="DTUTIL"."TODATE"('2013/11/08'))
      20 - access("X"."COST_CENTRE"='100033')
      21 - access("X"."WORKER_ID"="X"."WORKER_ID")
           filter(TO_CHAR(INTERNAL_FUNCTION("X"."ACTIVITY_DATE"),'YYYY')>=TO_CHAR(SYSDATE@!,'YYYY') AND
                  "PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"X"."WORKER_ID",TRUNC(SYSDATE@!))=1)
      22 - access("X"."WORKER_ID"="X"."WORKER_ID")
    The bad plan - based off the same query but run against a different database.
    Bad PLAN_TABLE_OUTPUT
    Plan hash value: 3742309457
    | Id  | Operation                        | Name                        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                 |                             |     4 |  1158 |       | 10065   (2)| 00:02:01 |
    |   1 |  SORT UNIQUE                     |                             |     4 |  1158 |       | 10064  (51)| 00:02:01 |
    |   2 |   UNION-ALL                      |                             |       |       |       |            |          |
    |*  3 |    HASH JOIN                     |                             |     3 |   915 |       |  5031   (2)| 00:01:01 |
    |   4 |     JOIN FILTER CREATE           | :BF0000                     |     1 |    29 |       |     2   (0)| 00:00:01 |
    |*  5 |      TABLE ACCESS BY INDEX ROWID | WORKER_COST_CENTRE_TBL      |     1 |    29 |       |     2   (0)| 00:00:01 |
    |*  6 |       INDEX RANGE SCAN           | WORKER_CC_CC_IDX            |    28 |       |       |     1   (0)| 00:00:01 |
    |*  7 |     VIEW                         | BENEFIT_SUMMARY_CURR_YEAR_V |   204K|    53M|       |  5027   (1)| 00:01:01 |
    |   8 |      HASH GROUP BY               |                             |   204K|  7403K|  9656K|  5027   (1)| 00:01:01 |
    |   9 |       JOIN FILTER USE            | :BF0000                     |   204K|  7403K|       |  3040   (2)| 00:00:37 |
    |* 10 |        HASH JOIN                 |                             |   204K|  7403K|  5392K|  3040   (2)| 00:00:37 |
    |  11 |         TABLE ACCESS FULL        | WORKER_TBL                  |   162K|  3485K|       |   584   (1)| 00:00:08 |
    |* 12 |         INDEX FULL SCAN          | BENEFIT_IND                 |   204K|  3001K|       |  1927   (2)| 00:00:24 |
    |  13 |    SORT AGGREGATE                |                             |     1 |   243 |       |  5032   (2)| 00:01:01 |
    |* 14 |     HASH JOIN RIGHT SEMI         |                             |    16 |  3888 |       |  5031   (2)| 00:01:01 |
    |  15 |      JOIN FILTER CREATE          | :BF0001                     |     1 |    29 |       |     2   (0)| 00:00:01 |
    |* 16 |       TABLE ACCESS BY INDEX ROWID| WORKER_COST_CENTRE_TBL      |     1 |    29 |       |     2   (0)| 00:00:01 |
    |* 17 |        INDEX RANGE SCAN          | WORKER_CC_CC_IDX            |    28 |       |       |     1   (0)| 00:00:01 |
    |* 18 |      VIEW                        | BENEFIT_SUMMARY_CURR_YEAR_V |   204K|    41M|       |  5027   (1)| 00:01:01 |
    |  19 |       HASH GROUP BY              |                             |   204K|  7403K|  9656K|  5027   (1)| 00:01:01 |
    |  20 |        JOIN FILTER USE           | :BF0001                     |   204K|  7403K|       |  3040   (2)| 00:00:37 |
    |* 21 |         HASH JOIN                |                             |   204K|  7403K|  5392K|  3040   (2)| 00:00:37 |
    |  22 |          TABLE ACCESS FULL       | WORKER_TBL                  |   162K|  3485K|       |   584   (1)| 00:00:08 |
    |* 23 |          INDEX FULL SCAN         | BENEFIT_IND                 |   204K|  3001K|       |  1927   (2)| 00:00:24 |
    Predicate Information (identified by operation id):
       3 - access("E"."WORKER_ID"="X"."WORKER_ID")
       5 - filter("X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/10/08') OR "X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/11/08')
                  AND "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/11/08') OR
                  "X"."EFFECTIVE_DATE">="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE"<="DTUTIL"."TODATE"('2013/11/08'))
       6 - access("X"."COST_CENTRE"='100033')
       7 - filter("PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"E"."WORKER_ID",TRUNC(SYSDATE@!))=1)
      10 - access("X"."WORKER_ID"="X"."WORKER_ID")
      12 - filter(TO_CHAR(INTERNAL_FUNCTION("X"."ACTIVITY_DATE"),'YYYY')>=TO_CHAR(SYSDATE@!,'YYYY'))
      14 - access("E"."WORKER_ID"="X"."WORKER_ID")
      16 - filter("X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/10/08') OR "X"."EFFECTIVE_DATE"<="DTUTIL"."TODATE"('2013/11/08')
                  AND "X"."EXPIRATION_DATE">="DTUTIL"."TODATE"('2013/11/08') OR
                  "X"."EFFECTIVE_DATE">="DTUTIL"."TODATE"('2013/10/08') AND
                  "X"."EXPIRATION_DATE"<="DTUTIL"."TODATE"('2013/11/08'))
      17 - access("X"."COST_CENTRE"='100033')
      18 - filter("PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"E"."WORKER_ID",TRUNC(SYSDATE@!))=1)
      21 - access("X"."WORKER_ID"="X"."WORKER_ID")
      23 - filter(TO_CHAR(INTERNAL_FUNCTION("X"."ACTIVITY_DATE"),'YYYY')>=TO_CHAR(SYSDATE@!,'YYYY'))
    So I can definitely tune the query to work against my acceptance database; that isn't really the problem. The problem is that I can't count on the optimizations being the same between development and acceptance. So if I move this to production, how do I know I won't get a third plan?!?!?
    Can anyone suggest anything that might cause what I'm seeing above, and/or anything I can do to prevent it? This is just one query out of 10,000 or so that I'm working with; we are migrating from Oracle 9 to 11g, and the queries all worked perfectly on 9.
    Thanks in Advance!
    Cory Aston

    Here are the two plans for the simplified query - using the OUTLINE format as per your request
    This is the bad plan from acceptance.
    PLAN_TABLE_OUTPUT
    SQL_ID  3zfrdhqpqk1mw, child number 1
    select /*+ gather_plan_statistics */        w.worker_id, w.worker_name
    from worker_v                   w,        worker_cost_centre_v       c
    where w.worker_id = c.worker_id    and c.effective_date <=
    trunc(sysdate)    and c.expiration_date >= trunc(sysdate)    and
    c.cost_centre = '100033'    and pkg_taw_security.user_worker_access('CA1
    7062',                                            'TIMEKEEPER',
                                       w.worker_id,
                       trunc(sysdate)) = 1  order by w.worker_name
    Plan hash value: 1726112176
    | Id  | Operation                      | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                        |       |       |   589 (100)|          |
    |   1 |  SORT ORDER BY                 |                        |     4 |   400 |   589   (2)| 00:00:08 |
    |*  2 |   HASH JOIN                    |                        |     4 |   400 |   588   (1)| 00:00:08 |
    |   3 |    VIEW                        | WORKER_COST_CENTRE_V   |     4 |   124 |     2   (0)| 00:00:01 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| WORKER_COST_CENTRE_TBL |     4 |   116 |     2   (0)| 00:00:01 |
    |*  5 |      INDEX RANGE SCAN          | WORKER_CC_CC_IDX       |    28 |       |     1   (0)| 00:00:01 |
    |*  6 |    VIEW                        | WORKER_V               |   162K|    10M|   584   (1)| 00:00:08 |
    |   7 |     TABLE ACCESS FULL          | WORKER_TBL             |   162K|  3485K|   584   (1)| 00:00:08 |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
          DB_VERSION('11.2.0.3')
          OPT_PARAM('optimizer_index_cost_adj' 10)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$2")
          OUTLINE_LEAF(@"SEL$3")
          OUTLINE_LEAF(@"SEL$1")
          NO_ACCESS(@"SEL$1" "C"@"SEL$1")
          NO_ACCESS(@"SEL$1" "W"@"SEL$1")
          LEADING(@"SEL$1" "C"@"SEL$1" "W"@"SEL$1")
          USE_HASH(@"SEL$1" "W"@"SEL$1")
          FULL(@"SEL$2" "X"@"SEL$2")
          INDEX_RS_ASC(@"SEL$3" "X"@"SEL$3" ("WORKER_COST_CENTRE_TBL"."COST_CENTRE"
                  "WORKER_COST_CENTRE_TBL"."EFFECTIVE_DATE"))
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       2 - access("W"."WORKER_ID"="C"."WORKER_ID")
       4 - filter("X"."EXPIRATION_DATE">=TRUNC(SYSDATE@!))
       5 - access("X"."COST_CENTRE"='100033' AND "X"."EFFECTIVE_DATE"<=TRUNC(SYSDATE@!))
       6 - filter("PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"W"."WORKER_ID",TRUN
                  C(SYSDATE@!))=1)
    Note
       - cardinality feedback used for this statement
    62 rows selected.
    This is the good one from development.
    PLAN_TABLE_OUTPUT
    SQL_ID  3zfrdhqpqk1mw, child number 0
    select /*+ gather_plan_statistics */        w.worker_id, w.worker_name
    from worker_v                   w,        worker_cost_centre_v       c
    where w.worker_id = c.worker_id    and c.effective_date <=
    trunc(sysdate)    and c.expiration_date >= trunc(sysdate)    and
    c.cost_centre = '100033'    and pkg_taw_security.user_worker_access('CA1
    7062',                                            'TIMEKEEPER',
                                       w.worker_id,
                       trunc(sysdate)) = 1  order by w.worker_name
    Plan hash value: 3435904055
    | Id  | Operation                      | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                        |       |       |     5 (100)|          |
    |   1 |  SORT ORDER BY                 |                        |    18 |   918 |     5  (20)| 00:00:01 |
    |   2 |   NESTED LOOPS                 |                        |       |       |            |          |
    |   3 |    NESTED LOOPS                |                        |    18 |   918 |     4   (0)| 00:00:01 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| WORKER_COST_CENTRE_TBL |    18 |   522 |     2   (0)| 00:00:01 |
    |*  5 |      INDEX RANGE SCAN          | WORKER_CC_CC_IDX       |    29 |       |     1   (0)| 00:00:01 |
    |*  6 |     INDEX UNIQUE SCAN          | WORKER_PK              |     1 |       |     1   (0)| 00:00:01 |
    |   7 |    TABLE ACCESS BY INDEX ROWID | WORKER_TBL             |     1 |    22 |     1   (0)| 00:00:01 |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
          DB_VERSION('11.2.0.3')
          OPT_PARAM('optimizer_index_cost_adj' 10)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$5428C7F1")
          MERGE(@"SEL$2")
          MERGE(@"SEL$3")
          OUTLINE(@"SEL$1")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$3")
          INDEX_RS_ASC(@"SEL$5428C7F1" "X"@"SEL$3" ("WORKER_COST_CENTRE_TBL"."COST_CENTRE"
                  "WORKER_COST_CENTRE_TBL"."EFFECTIVE_DATE"))
          INDEX(@"SEL$5428C7F1" "X"@"SEL$2" ("WORKER_TBL"."WORKER_ID"))
          LEADING(@"SEL$5428C7F1" "X"@"SEL$3" "X"@"SEL$2")
          USE_NL(@"SEL$5428C7F1" "X"@"SEL$2")
          NLJ_BATCHING(@"SEL$5428C7F1" "X"@"SEL$2")
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       4 - filter("X"."EXPIRATION_DATE">=TRUNC(SYSDATE@!))
       5 - access("X"."COST_CENTRE"='100033' AND "X"."EFFECTIVE_DATE"<=TRUNC(SYSDATE@!))
       6 - access("X"."WORKER_ID"="X"."WORKER_ID")
           filter("PKG_TAW_SECURITY"."USER_WORKER_ACCESS"('CA17062','TIMEKEEPER',"X"."WORKER_ID",TRUN
                  C(SYSDATE@!))=1)
    60 rows selected.
    I'm not sure how to interpret what dbms_xplan is telling me above.  Any help would be greatly appreciated!!
    Thanks,
    Cory
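
    One detail worth noting in the output above: the bad plan is child number 1 and carries the note "cardinality feedback used for this statement", while the good plan is child number 0 without it. A hedged way to test whether feedback is what flips the plan; the underscore parameter is an assumption for 11.2, so verify it before using it outside a test system:

    -- Disable cardinality feedback for the session and re-run the query;
    -- if the child 1 plan stops appearing, feedback triggered the re-optimization.
    alter session set "_optimizer_use_feedback" = false;
    -- The same parameter can also be set per statement:
    -- select /*+ opt_param('_optimizer_use_feedback' 'false') */ ...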

  • Recover full backup into another database

    Hello,
    I have a particular need that does not seem to come up often, and I just cannot get it working.
    So here is the situation: I have a backup of a full database. That means I have the init parameter file, the autobackup of the controlfile (+ pfile), and the autobackup backup set.
    The source database is release 10.2.0.5.0, a RAC instance.
    On another server, I have a single instance of the same release, and I would like to recover the full backup into that second database.
    I have already done this once before, but then I had both the pfile and the controlfile backed up manually, and the two instances were both single instances.
    Here I have tried the same way: shut down my target database, changed the parameters in the backup pfile to match the target database, started the target database in nomount mode using the pfile, created the spfile from the pfile, then restored the controlfile from the backed-up controlfile with RMAN.
    But this last step is the problem.
    My question is simple: what is the best way / good practice to get this working?
    Thanks in advance for your help. Ask if you need any further information.
    Max

    Hello,
    Here is the original init pfile content :
    instdb1.__db_cache_size=1107296256
    instdb2.__db_cache_size=1023410176
    instdb2.__java_pool_size=16777216
    instdb1.__java_pool_size=16777216
    instdb2.__large_pool_size=16777216
    instdb1.__large_pool_size=16777216
    instdb1.__shared_pool_size=436207616
    instdb2.__shared_pool_size=520093696
    instdb2.__streams_pool_size=16777216
    instdb1.__streams_pool_size=16777216
    *.audit_trail='DB'
    *.background_dump_dest='/u1/app/oracle/admin/instdb/bdump'
    *.cluster_database_instances=2
    *.cluster_database=TRUE
    *.compatible='10.2.0.0.0'
    *.control_file_record_keep_time=95
    *.control_files='+DG_DATA/instdb/controlfile/backup.305.615208725','+DG_FLASH/instdb/controlfile/current.256.614223119'
    *.core_dump_dest='/u1/app/oracle/admin/instdb/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DG_DATA'
    *.db_create_online_log_dest_1='+DG_FLASH'
    *.db_domain='inst.xx'
    *.db_file_multiblock_read_count=16
    *.db_flashback_retention_target=1440
    *.db_name='inst'
    *.db_recovery_file_dest='+DG_DATA'
    *.db_recovery_file_dest_size=53687091200
    instdb1.instance_number=1
    instdb2.instance_number=2
    *.job_queue_processes=10
    instdb1.local_listener='LISTENER_INST1.INST.XX'
    instdb2.local_listener='LISTENER_INST2.INST.XX'
    instdb1.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst1'
    instdb2.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst2'
    *.log_archive_dest_2='SERVICE=INSTB.INST.XX VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) OPTIONAL LGWR ASYNC NOAFFIRM NET_TIMEOUT=10'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='inst_%t_%s_%r.arc'
    *.max_dump_file_size='200000'
    *.open_cursors=300
    *.parallel_max_servers=20
    *.pga_aggregate_target=824180736
    *.processes=550
    instdb1.remote_listener='LISTENER_INST1.INST.XX'
    instdb2.remote_listener='LISTENER_INST2.INST.XX'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=TRUE
    *.session_max_open_files=20
    *.sessions=480
    *.sga_target=1610612736
    instdb1.thread=1
    instdb2.thread=2
    *.undo_management='AUTO'
    instdb1.undo_tablespace='UNDOTBS1'
    instdb2.undo_tablespace='UNDOTBS2'
    *.user_dump_dest='/u1/app/oracle/admin/inst/udump'
    And here is the test I have done :
    *1. Modified the init pfile to this:*
    inst.__db_cache_size=1107296256
    inst.__java_pool_size=16777216
    inst.__large_pool_size=16777216
    inst.__shared_pool_size=436207616
    inst.__streams_pool_size=16777216
    *.audit_trail='DB'
    *.background_dump_dest='C:\Oracle\admin\inst\bdump'
    *.compatible='10.2.0.5.0'
    *.control_file_record_keep_time=95
    *.control_files='C:\Oracle\oradata\inst\control01.ctl','C:\Oracle\oradata\inst\control02.ctl','C:\Oracle\oradata\inst\control03.ctl'
    *.core_dump_dest='C:\Oracle\admin\inst\cdump'
    *.db_block_size=8192
    *.db_create_file_dest='C:\Oracle\oradata\inst'
    *.db_create_online_log_dest_1='C:\Oracle\inst'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_flashback_retention_target=1440
    *.db_name='inst'
    *.db_recovery_file_dest='C:\Oracle\oradata'
    *.db_recovery_file_dest_size=53687091200
    *.job_queue_processes=10
    inst.log_archive_dest_1='LOCATION=C:\Oracle\oradata'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='inst_%t_%s_%r.arc'
    *.max_dump_file_size='200000'
    *.open_cursors=300
    *.parallel_max_servers=20
    *.pga_aggregate_target=824180736
    *.processes=550
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=TRUE
    *.session_max_open_files=20
    *.sessions=480
    *.sga_target=1610612736
    inst.thread=1
    *.undo_management='AUTO'
    inst.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\Oracle\admin\inst\udump'
    *2. Shut down the database, start up in nomount, and restore the controlfile (with the error raised when trying to restore the controlfile):*
    RMAN> shutdown immediate;
    Oracle instance shut down
    RMAN> startup nomount pfile='C:\Oracle\init\initInst.ora';
    connected to target database (not started)
    Oracle instance started
    Total System Global Area 1610612736 bytes
    Fixed Size 1305856 bytes
    Variable Size 369099520 bytes
    Database Buffers 1233125376 bytes
    Redo Buffers 7081984 bytes
    RMAN> restore controlfile from 'C:\Oracle\ctl\inst_ctrl_c-2972284490-20120318-00';
    Starting restore at 04-MAY-12
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=596 devtype=DISK
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 05/04/2012 14:20:12
    RMAN-06172: no autobackup found or specified handle is not a valid copy or piece
    Thank you for your help.
    Max
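
    For what it's worth, RMAN-06172 when restoring from a controlfile autobackup piece is commonly resolved by giving RMAN the DBID before the restore; a sketch, assuming the DBID is the 2972284490 embedded in the piece name:

    RMAN> SET DBID 2972284490;   -- taken from c-2972284490-20120318-00
    RMAN> RESTORE CONTROLFILE FROM 'C:\Oracle\ctl\inst_ctrl_c-2972284490-20120318-00';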

  • SharePoint backup size estimation

    How do I estimate the size of the databases that need to be backed up? I read a reference saying the size is about 2.5 * content database.
    How about the backup size when using Backup-SPFarm? Is there any compression when backing up to .bak files? How do I set the compression ratio?
    So, is the full backup size calculated as 2.5 * content database * compression ratio? And how is the differential backup size estimated?
    Overall, I need to plan the space required for a weekly full and 5-6 differential backups.
    Thanks for providing any ideas that I am not clear.

    Hi, we are using a PowerShell script to calculate the SPSite size in MOSS 2007; the same script should work in 2010 and 2013. The result from this + 6% will give you the rough size of the backup. It is working for us. Cheers
    function Site-Space ($sitecola)
    {
        [reflection.assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null
        # Open the site collection and get its storage usage in MB
        $sitea = [Microsoft.SharePoint.SPSite]($sitecola)
        [int]$sitespacesite = $sitea.Usage.Storage / 1MB
        # Items in the second-stage recycle bin also count toward the backup size
        $siterecyclebin = $sitea.RecycleBin | Where {$_.ItemState.ToString() -eq 'SecondStageRecycleBin'}
        If ($siterecyclebin -ne $null)
        {
            ForEach ($recycleitem in $siterecyclebin)
            {
                [int]$recyclesize = $recyclesize + ($recycleitem.Size / 1MB)
            }
            $sitespace += ($sitespacesite + $recyclesize)
        }
        Else
        {
            $sitespace = $sitespacesite
        }
        return $sitespace
    }

  • Still need to do full backup after using data guard?

    In our system, there is a physical standby database in data guard configuration. Is there still a need to do full backup and incremental backup?

    Jackliusr wrote:
    In our system, there is a physical standby database in a Data Guard configuration. Is there still a need to do full backups and incremental backups?
    It is preferred to have a full backup daily.
    You may perform a failover to your standby when the primary database is not available, but do you think the standby system's stability is the same as the primary's, and that it can give the same performance? The standby location can be far away, and it exists only for the disaster case.
    Another case: suppose your standby falls 4-5 days behind the primary due to some issue, and at the same time your production database crashes. Then you may lose 4-5 days of data. So it is recommended to always have a full backup of the primary database.
    If you check the stability of the standby database daily, verify your data by opening it properly, and prefer not to take RMAN backups, then that is fine. But having an RMAN backup is highly recommended.
    BTW, you can take the RMAN full backup on the standby if you want to avoid using resources on the primary.
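
    As a sketch of that last point, assuming RMAN connects to the standby instance as TARGET (the connect strings are placeholders; in 10g a recovery catalog is recommended so the backup is usable for the primary):

    rman target sys@standby catalog rman@rcat
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;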

  • Rman backup size for Dr db is very much higher than that of primary db

    Hi All,
    My production database on Oracle 10.2.0.4 had a physical size of 897 GB and a logical size of around 800 GB.
    Old tables were truncated from the database, and its logical size was reduced to 230 GB.
    The backup size is now 55 GB, where it used to be 130 GB before the truncation.
    This database has a DR database configured. A backup of the DR database is taken daily and used to refresh test environments.
    But the backup size for the DR database has not decreased, and the restoration time when refreshing test environments is also the same as before.
    We had expected that the backup size for the DR database would also decrease, reducing the restoration time.
    We take compressed RMAN backups.
    What is the concept behind this?

    When you duplicate a database, it restores all the datafiles from the RMAN backup, so you end up with the physical size of your source database. Remove the fragmented space by moving objects, then shrink the tablespaces, take a fresh RMAN backup, and restore from that.
    Regards
    Asif Kabir
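
    A minimal sketch of that advice; the table name and datafile path are hypothetical, and MOVE invalidates indexes, which then need a rebuild:

    ALTER TABLE app.big_table ENABLE ROW MOVEMENT;
    ALTER TABLE app.big_table SHRINK SPACE;        -- or: ALTER TABLE app.big_table MOVE;
    -- once segments sit low in the file, hand space back to the OS:
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' RESIZE 100G;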

  • Backup Size Oddity

    I have an internal drive showing 120GB used. I have set up exclusions and under the exclusions list (in which the excluded items add up to 50GB), the estimated full backup size shows as 70GB as I'd expect. However, after my initial backup, the Time Machine drive shows 3GB used not the 70GB I was expecting. Any suggestions as to why it's only backing up this small amount?

    Time Machine does automatically exclude some things, such as system work files, most caches and logs, trash, etc. Usually those don't add up to more than a few GBs, though.
    So there are two possibilities: you have about 67 GB of that sort of stuff; or there's a problem with Time Machine.
    What have you excluded? It's usually not a good idea to exclude much, as it can make recovery after a hard drive or other failure much more difficult. See #11 in [Time Machine - Frequently Asked Questions|http://web.me.com/pondini/Time_Machine/FAQ.html] (or use the link in *User Tips* at the top of this forum).
    You can use the "Star Wars" display per #15 in the FAQ to see what was backed-up; you may want to spot-check what you expect to be there. If such things aren't there, the preferences file (where the exclusions are stored) may be corrupted. Try a "full reset" of Time Machine, per #A4 of [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] (or use the link in *User Tips* at the top of this forum).
    If that doesn't fix it, add up the sizes of the visible folders on your system; that should be fairly close to the 120 GB that's shown as used on your internal HD. If you have multiple users, log on as each one to see the size of that user's home folder, as one user, even an Admin user, won't see the actual size of another user's stuff.
    There are some hidden system folders, so the total won't be 120 GB. But it should be within 10 or 20 GB or so.
    Post back with your results, and we'll dig a bit deeper if necessary.

  • Estimating the backup size for full database backup?

    How to estimate the backup file size for the following backup command?
    Oracle 10g R2 on HPUX
    BACKUP INCREMENTAL LEVEL 0 DATABASE;
    Thank you,
    Smith

    Depends on the number of used blocks, block size etc. You could probably get a rough formula for backup size based on the contents of dba_tab_statistics (subtract empty_blocks), dba_ind_statistics etc.
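
    A rough sketch of that idea; the 8192 block size and the owner filter are assumptions, and dba_ind_statistics would need to be added for indexes:

    -- Estimate blocks that actually hold table data; an uncompressed
    -- level 0 backup is usually in this ballpark plus some overhead.
    select round(sum((blocks - empty_blocks) * 8192) / 1024 / 1024 / 1024, 1) est_gb
      from dba_tab_statistics
     where owner not in ('SYS', 'SYSTEM');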

  • Speeding up full backup of Replicate database ASE 15.5

    Greetings all
    I need to speed up replicate database backup.
    ASE version 15.5
    Adaptive Server Enterprise/15.5/EBF 20633 SMP ESD#5.2/P/RS6000/AIX 5.3/asear155/2602/64-bit/FBO/Sun Dec  9 11:59:29 2012
    Backup Server/15.5/EBF 20633 ESD#5.2/P/RS6000/AIX 5.3/asear155/3350/32-bit/OPT/Sun Dec  9 08:34:37 2012
    RS version
    Replication Server/15.7.1/EBF 21656 SP110 rs1571sp110/RS6000/AIX 5.3/1/OPT64/Wed Sep 11 12:46:38 2013
    The primary database is 1.9 TB, about 85% occupied.
    The replicate database is the same size but about 32% used (mostly dbo tables are replicated).
    As noted above, the backup server is 32-bit on AIX.
    SIMILARITIES
    Both servers use SAN with locally mounted folder for backup files/stripes.
    Databases are on  'raw' devices for data and log
    Both backup servers have similar RUN files with the following:
    -N25 \
    -C20 \
    -M/sybase/15/ASE-15_0/bin/sybmultbuf \
    -m2000 \
    Number of stripes are 20 for both primary and replicate databases.
    DIFFERENCES
    The replicate has less memory and fewer engines.
    Devices on the primary are mostly 32 GB; those on the replicate are mostly 128 GB.
    OBSERVATIONS
    Full backup times on the primary are consistently about 1 hour.
    Full backup times on the replicate are consistently double that (120 to 130 minutes).
    A full backup with replication suspended, or with minimal activity, does not reduce the run times much.
    What do I need to capture to pinpoint the cause of the slow backup on the replicate side?
    Thanks in advance
    Avinash

    Mark
    Thanks for the inputs.
    We use compression level 2 on both primary and replicate.
    This was tried out before the upgrade to 15.5 and seems good enough.
    BTW, on a different server I also tried the new compression levels 100 and 101 for a database of the same size and did not get a substantial reduction in run times.
    Stripe sizes increased from 23 GB to 30-33 GB.
    As far as I have noted, the replicate side is not starved for CPU;
    sp_sysmon outputs during the backup period do not show high CPU usage.
    Will it be accurate to say that, like a huge report query, backup activity also churns the caches?
    (i.e. each allocated/used page, if not found in the cache, is brought into the cache by a physical read)
    Avinash

  • Differential backup files are almost the same size as full backups.

    Hello All,
    I have done a little research on this topic and feel like we are not doing anything to cause this issue. Any assistance is greatly appreciated. 
    The details: Microsoft SQL Server 2008 R2 (SP2) - 10.50.4297.0 (X64), Nov 22 2013 17:24:14, Copyright (c) Microsoft Corporation, Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor). The database I am working with is 23 GB. The full backup files are 23 GB, differentials are 16 GB (and growing), and transaction logs bounce between 700 KB and 20 MB. The backup schedules with T-SQL follow:
    T-Log: daily every four hours
    BACKUP LOG [my_database] TO  DISK = N'F:\Backup\TLog\my_database_backup_2015_03_23_163444_2725556.trn' WITH NOFORMAT, NOINIT,  NAME = N'my_database_backup_2015_03_23_163444_2725556', SKIP, REWIND, NOUNLOAD,  STATS = 10
    GO
    Diff: once daily
    BACKUP DATABASE [my_database] TO  DISK = N'F:\Backup\Diff\my_database_backup_2015_03_23_163657_1825556.dif' WITH  DIFFERENTIAL , NOFORMAT, NOINIT,  NAME = N'my_database_backup_2015_03_23_163657_1825556', SKIP, REWIND, NOUNLOAD,  STATS = 10
    GO
    declare @backupSetId as int
    select @backupSetId = position from msdb..backupset where database_name=N'my_database' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'my_database' )
    if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''my_database'' not found.', 16, 1) end
    RESTORE VERIFYONLY FROM  DISK = N'F:\Backup\Diff\my_database_backup_2015_03_23_163657_1825556.dif' WITH  FILE = @backupSetId,  NOUNLOAD,  NOREWIND
    GO
    Full: once weekly
    BACKUP DATABASE [my_database] TO  DISK = N'F:\Backup\Full\my_database_backup_2015_03_23_164248_7765556.bak' WITH NOFORMAT, NOINIT,  NAME = N'my_database_backup_2015_03_23_164248_7765556', SKIP, REWIND, NOUNLOAD,  STATS = 10
    GO
    declare @backupSetId as int
    select @backupSetId = position from msdb..backupset where database_name=N'my_database' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'my_database' )
    if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''my_database'' not found.', 16, 1) end
    RESTORE VERIFYONLY FROM  DISK = N'F:\Backup\Full\my_database_backup_2015_03_23_164248_7765556.bak' WITH  FILE = @backupSetId,  NOUNLOAD,  NOREWIND
    GO
    As you can probably tell, we are not doing anything special in the backups; they are simply built out in SQL Server Management Studio. All databases are set to the full recovery model. We do not rebuild indexes, but we do reorganize indexes once weekly and also update statistics weekly.
    Reorganize Indexes T-SQL (there are 255 indexes on this database)
    USE [my_database]
    GO
    ALTER INDEX [IDX_index_name_0] ON [dbo].[table_name] REORGANIZE WITH ( LOB_COMPACTION = ON )
    GO
    Update Statistics T-SQL (there are 80 tables updated)
    use [my_database]
    GO
    UPDATE STATISTICS [dbo].[table_name]
    WITH FULLSCAN
    GO
    In a different post I saw a request to run the following query:
    use msdb
    go
    select top 10 bf.physical_device_name, bs.database_creation_date,bs.type
    from  dbo.backupset bs
    inner join dbo.backupmediafamily bf on bf.media_set_id=bs.media_set_id
    where   bs.database_name='my_database'
    order by bs.database_creation_date
    Results of query:
    physical_device_name database_creation_date type
    F:\Backup\Full\my_database_backup_2015_03_07_000006_2780149.bak 2014-02-08 21:14:36.000 D
    F:\Backup\Diff\Pre_Upgrade_OE.dif 2014-02-08 21:14:36.000 I
    F:\Backup\Diff\my_database_backup_2015_03_11_160430_7481022.dif 2015-03-07 02:58:26.000 I
    F:\Backup\Full\my_database_backup_2015_03_11_160923_9651022.bak 2015-03-07 02:58:26.000 D
    F:\Backup\Diff\my_database_backup_2015_03_11_162343_7071022.dif 2015-03-07 02:58:26.000 I
    F:\Backup\TLog\my_database_backup_2015_03_11_162707_4781022.trn 2015-03-07 02:58:26.000 L
    F:\Backup\TLog\my_database_backup_2015_03_11_164411_5825904.trn 2015-03-07 02:58:26.000 L
    F:\Backup\TLog\my_database_backup_2015_03_11_200004_1011022.trn 2015-03-07 02:58:26.000 L
    F:\Backup\TLog\my_database_backup_2015_03_12_000005_4201022.trn 2015-03-07 02:58:26.000 L
    F:\Backup\Diff\my_database_backup_2015_03_12_000005_4441022.dif 2015-03-07 02:58:26.000 I

    INIT basically initializes the backup file; in other words, it overwrites the contents of the existing backup file with the new backup information.
    Basically, what you have now is that you are appending all your (differential) backups to one file, one after the other, like a chain.
    You do not necessarily have to do it that way; the differential backups can exist as separate files.
    In fact, I would prefer them separate, as it gives quick insight into each file without having to run a RESTORE FILELISTONLY command to read the contents of the backup file.
    The point Shanky was making is that he wants to make sure you are not confusing the actual differential backup size with the physical file size (since you are appending the backups). For example: if your differential backup is 2 GB, and over the next five days you take a differential backup each day and append it to a single file, as you are doing now, the differential backup size is 2 GB but the physical file size is 10 GB. He is trying to make sure you are not confused between these two.
    Anyway, did you get a chance to run the query below, and did you refer to the link I posted above? It covers a case where differential backups can be bigger than full backups, and how index reorganizes or DBCC shrinks can cause this.
    --backup size in GB
    select database_name, backup_size/1024/1024/1024,
           case when type='D' then 'FULL'
                when type='L' then 'Log'
                when type='I' then 'Differential' end as [BackupType],
           backup_start_date, backup_finish_date,
           datediff(minute, backup_start_date, backup_finish_date) as [BackupTime]
    from msdb.dbo.backupset
    where database_name='mydatabase' and type in ('D','I')
    order by backup_set_id desc
    Hope it Helps!!
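
    If you do want each differential in its own file that is overwritten rather than appended to, a hypothetical variant of the command above (the file and set names are placeholders) would use FORMAT, INIT instead of NOFORMAT, NOINIT:

    -- Overwrites the target file each run instead of appending a new backup set
    BACKUP DATABASE [my_database]
    TO DISK = N'F:\Backup\Diff\my_database_diff.dif'
    WITH DIFFERENTIAL, FORMAT, INIT,
         NAME = N'my_database_diff', SKIP, NOUNLOAD, STATS = 10
    GO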

  • DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model

    I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2. All backups are working as expected except for one. The database in question is supposed to be backed up with a daily express full and hourly incremental schedule. Although the database is set to the full recovery model, the new DPM server says that recovery points will be created for that database based only on the express full backup schedule. I checked the logs on the old DPM server, and the transaction log backups were working just fine up until I stopped protecting the data source. The SQL server is 2008 R2 SP2. Other databases on the same server that are set to the full recovery model are working just fine. If we switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group, it properly sees the difference when we flip the recovery model back and forth. We also tried switching the recovery model on the failing database from full to simple and then back again, but to no avail. Both the SQL server and the DPM server have been rebooted. We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the database is really using the full recovery model.
    Is there anything that someone knows about that can trigger a false positive for recovery-model-to-backup-type mismatches?

    I was having this same problem and appear to have found a solution. I wanted hourly recovery points for all my SQL databases. I was getting hourly for some but not for others; the others were only getting a recovery point for the express full backup. I noted that some of the databases were in the simple recovery model, so I changed them to full, but that did not solve my problem. I was still not getting the hourly recovery points.
    I found an article that seemed to indicate that SCDPM does not recognize any change in the recovery model once protection has started. My database was in the simple recovery model when I added it (automatically) to protection, so even though I changed it to full, SCDPM continued to treat it as simple.
    I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, and 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
    It worked. The original db was still only getting express full recovery points, while the copy was getting hourly ones. I suppose that if I don't want to restore a production db with an alternate name, I will have to remove the db from protection, verify that it is set to full, and then add it back to protection. I have not tested this yet.
    This is the article I read: 
    Article I read

  • Estimate database backup size

    Hi All,
    AIX 5.3
    oracle 10.2.0.3
    I have a production database of 80 GB. I want to take a cold backup during the upgrade process, and I want to know the database backup size so we can free space on some mount point accordingly.
    Is there any way to estimate the database backup size?
    Thanks,
    Vishal

    Question: how did you establish that your database is actually 80 GB?
    Is that 80 GB of data in the database, or is it the sum of all data files, temp tablespace files, undo tablespace files, redo logs, Oracle binary software files, control files, etc.?
    If you intend to do an upgrade of the software, my recommendation is to shut the database down and do a full copy of:
    1) your entire oracle product tree (usually /oracle/product), including ORACLE_HOME.
    2) your entire 'oradata' structure which (should) include all of your Oracle database data files and all other associated files.
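
    Since a cold backup is a straight physical copy, a sketch of the space the database files alone need (controlfiles are small; include redo logs only if you copy them too):

    select round(sum(bytes) / 1024 / 1024 / 1024, 1) gb, 'datafiles' what from v$datafile
    union all
    select round(sum(bytes) / 1024 / 1024 / 1024, 1), 'tempfiles' from v$tempfile
    union all
    select round(sum(bytes * members) / 1024 / 1024 / 1024, 1), 'redo logs' from v$log;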
