Does Exadata's RMAN backup offloading require Block Change Tracking (BCT) to be enabled in the database?
As in the title: BCT = Block Change Tracking.
In an Oracle database, BCT uses a file to mark each group of data blocks that contains modified blocks.
How does Exadata implement offloading during RMAN backups?
Exadata's offloaded incremental backup optimization works at the level of individual data blocks, whereas Block Change Tracking maintains a bitmap covering groups of blocks, so offloaded incremental backups are finer-grained and smarter.
With Exadata, changes are tracked at the individual Oracle block level rather than at the level of a large group of blocks. This results in less I/O bandwidth being consumed for backups and faster-running backups.
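For reference, BCT itself is enabled at the database layer, independently of Exadata; a minimal sketch of enabling and verifying it (the tracking-file path is illustrative):

```sql
-- Enable Block Change Tracking (path is illustrative):
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA/dbm/bct.f';

-- Verify that BCT is active and see the tracking file:
SELECT status, filename, bytes
  FROM v$block_change_tracking;
```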
How to ask a question on the OTN Chinese technical forum
Forum etiquette and a methodology for learning Oracle
Maclean Liu
Oracle Database Administrator
Oracle Certified 10g/11g Master
www.askmaclean.com
Similar Messages
-
Will Cell Offload Happen for a PL/SQL Block with Variables?
Hello Experts,
I am working on procedures on Exadata now and am confused about cell offload. Somehow offload is not happening when I run the SQL statement inside a PL/SQL block with variables.
Here are my findings.
When I ran the INSERT INTO ... SELECT with literal values in Toad, query response time was very good; the whole process completed in less than a minute.
I checked that offload was happening.
When the same SQL statement is placed in a PL/SQL block with variables, the procedure takes a lot of time and does not complete. In this case offload is not happening.
Is it true that using variables in a PL/SQL block prevents cell offload and smart scan?
If yes, what is the workaround?
Thanks
Pavan
Hello Marc,
Thanks for the quick response.
When I ran the query with literals in a Toad session I got a response.
When I run it in the PL/SQL block, the block does not complete at all.
Here is the PL/SQL block:
My apologies for sending big code without proper formatting.
DECLARE
P_BUSINESS_DATE DATE;
P_BATCH_ID NUMBER;
UTC_OFFSET NUMBER;
BEGIN
P_BUSINESS_DATE := to_date('02/01/2012', 'MM/DD/YYYY');
P_BATCH_ID := 1;
UTC_OFFSET := 0;
INSERT /*+ APPEND */ INTO UPL_CLIENT_tbl
( reportdate,
LastName,
FirstName,
MiddleInitial,
AccountNumber,
Address,
City,
State,
Zip,
HomePhone,
WorkPhone,
BirthDate,
Age,
Sex,
NumberOfChildren,
Occupation,
LeadSource,
Consultant,
ProgramDirector,
CallTaker,
LeadDate,
FirstVisitDate,
LastVisitDate,
BillType,
ClientType,
PreviousClientType,
AppointmentDate,
DoctorLetterRequired,
OneYearPermStabilizationDate,
UnlimitedPermStabilizationDate,
MaritalStatus,
ReferrerName,
ReferrerCentreID,
CentreID,
PaymentDateOne,
PaymentAmountOne,
PaymentDateTwo,
PaymentAmountTwo,
PaymentDateThree,
PaymentAmountThree,
PaymentDateFour,
PaymentAmountFour,
LibraryPurchased,
BalanceDue,
FoodNSFBalance,
ProductNSFBalance,
ProgramNSFBalance,
StartWeight,
CurrentWeight,
GoalWeight,
Height,
DateGoalWeightAchieved,
DateSuccessPlusPurchased,
ReturnToActiveDate,
VersionNumber,
HalfWayDate,
LastLSCDate,
LastUpdatedDate,
VitaminWaiverSigned,
LastSupplementPurchaseDate,
LastSupplementCodePurchased,
LastTotalSupplementSupplyCycle,
LastAddtlSupplPurchaseDate,
LastAddtlSupplCodePurchased,
LastAddtlSupplSupplyCycle,
DiabetesClient,
DietControlled,
TakingOralMed,
TakingInsulin,
EmailId,
CTADate,
RWLDate,
Address2)
(SELECT /*+ full(S_CONTACT) full(REFERRER) full(Consultant) full(ProgramDirector) full(CallTaker) full(S_CONTACT_X) full(a) full(a2) full (a3) */ distinct p_business_date reportdate,
SUBSTR(S_CONTACT.LAST_NAME,1,25) AS LastName,
SUBSTR(S_CONTACT.FST_NAME,1,25) AS FirstName,
SUBSTR(S_CONTACT.MID_NAME,1,1) AS MiddleInitial,
S_CONTACT.X_JC_ACNT_NUM + 900000000 AS AccountNumber,
SUBSTR(S_ADDR_PER.ADDR,1,40) AS ADDRESS,
SUBSTR(S_ADDR_PER.CITY,1,20) AS City,
S_ADDR_PER.STATE AS State,
SUBSTR(S_ADDR_PER.ZIPCODE,1,15) AS Zip,
SUBSTR(REPLACE(S_CONTACT.HOME_PH_NUM,'-',''),1,10) AS HomePhone,
SUBSTR(REPLACE(S_CONTACT.WORK_PH_NUM,'-',''),1,10) AS WorkPhone,
S_CONTACT.BIRTH_DT AS BirthDate,
CASE WHEN FLOOR((p_business_date - S_CONTACT.BIRTH_DT)/360) < 0 THEN NULL ELSE FLOOR((p_business_date - S_CONTACT.BIRTH_DT)/360) END AS AGE,
S_CONTACT.SEX_MF AS SEX,
NULL AS NumberOfChildren,
S_CONTACT_X.ATTRIB_34 AS OCCUPATION,
CASE WHEN SUBSTR(S_CONTACT_X.ATTRIB_37,1,4)='Othe' THEN 'Othr'
WHEN SUBSTR(S_CONTACT_X.ATTRIB_37,1,4)='Inte' THEN 'Intr'
WHEN SUBSTR(S_CONTACT_X.ATTRIB_37,1,4)='Prin' THEN 'News'
WHEN SUBSTR(S_CONTACT_X.ATTRIB_37,1,4)='Gues' THEN 'Gst'
ELSE SUBSTR(S_CONTACT_X.ATTRIB_37,1,4) END AS LeadSource,
SUBSTR(Consultant.EMP_NUM,1,10) AS CONSULTANT,
ProgramDirector.EMP_NUM AS ProgramDirector,
CallTaker.EMP_NUM CallTaker,
S_CONTACT.X_LEAD_DT AS LeadDate,
LEAST(nvl(S_CONTACT.X_LAST_CONSULTATION_DATE,O.FirstPurchaseDate ), nvl(O.FirstPurchaseDate,S_CONTACT.X_LAST_CONSULTATION_DATE+1) ) AS FirstVisitDate, --X_LAST_CONSULTATION_DATE stores the performed date or the legacy client firstvisitdate
GREATEST(nvl(S_CONTACT_XM.X_CONSULTATION_DT ,S_CONTACT_X.ATTRIB_29), nvl(S_CONTACT_X.ATTRIB_29, S_CONTACT_XM.X_CONSULTATION_DT-1) ) AS LastVisitDate,
CASE WHEN S_CONTACT.X_INSTALLMENT_BALANCE > 0 THEN 'B' ELSE NULL END AS BillType,
ct.current_client_type ClientType,
SUBSTR(ct.saved_client_type,1,1) PreviousClientType,
S_CONTACT.LAST_CREDIT_DT AS AppointmentDate,
CASE WHEN a.X_DR_LETTER_STATUS IS NOT NULL THEN 'Y' ELSE 'N' END AS DoctorLetterRequired,
NULL AS OneYearPermStabilizationDate,
DECODE(S_PROD_INT.X_PROGRAM_CLASSIFICATION,'Premium',a.START_DT ,NULL) AS UnlimitedPermStabilizationDate,
SUBSTR(S_CONTACT.MARITAL_STAT_CD,1,1) AS MaritalStatus,
SUBSTR(REFERRER.FST_NAME ||' '|| REFERRER.LAST_NAME,1,34) AS ReferrerName,
ORGEXT_REF.LOC AS ReferrerCentreID,
S_ORG_EXT.LOC AS CentreID,
NULL AS PaymentDateOne,
NULL AS PaymentAmountOne,
NULL AS PaymentDateTwo,
NULL AS PaymentAmountTwo,
NULL AS PaymentDateThree,
NULL AS PaymentAmountThree,
NULL AS PaymentDateFour,
NULL AS PaymentAmountFour,
NULL AS LibraryPurchased,
nvl(S_CONTACT.X_INSTALLMENT_BALANCE,0) + nvl(S_CONTACT.X_PREPAID_BALANCE,0) AS BalanceDue, -- Changed operation from (-) prepaid to (+) prepaid since the sign was flipped in OLTP.
NULL AS FoodNSFBalance,
NULL AS ProductNSFBalance,
NULL AS ProgramNSFBalance,
a2.X_START_WEIGHT AS StartWeight,
a2.X_CURRENT_WEIGHT AS CurrentWeight,
a2.X_GOAL_WEIGHT AS GoalWeight,
a3.X_HEIGHT AS Height,
a2.X_FAXSENT_DATETIME DateGoalWeightAchieved,
DECODE(S_PROD_INT.X_PROGRAM_CLASSIFICATION,'Premium',a.START_DT,NULL) AS DateSuccessPlusPurchased,
CASE WHEN A2.ARCHIVE_FLG = 'N' THEN a2.START_DT ELSE NULL END AS ReturnToActiveDate,
600 VersionNumber,
a2.X_FAXRECV_DATETIME AS HalfWayDate,
NULL AS LastLSCDate,
TRUNC(S_CONTACT.LAST_UPD-UTC_OFFSET/24) AS LastUpdatedDate,
NULL AS VitaminWaiverSigned,
LastSupplementPurchaseDate,
LastSupplementCodePurchased,
LastTotalSupplementSupplyCycle,
LastAddtlSupplPurchaseDate,
LastAddtlSupplCodePurchased,
LastAddtlSupplSupplyCycle,
CASE WHEN (a.X_DIABETES_NO_MEDS_FLG='Y' OR a.X_DIABETES_ORAL_MEDS_FLG = 'Y' OR a.X_DIABETES_ON_INSULIN_FLG = 'Y') THEN 'Y' ELSE 'N' END AS DiabetesClient,
DECODE(a.X_DIABETES_NO_MEDS_FLG,'Y','Y','N') AS DietControlled,
a.X_DIABETES_ORAL_MEDS_FLG AS TakingOralMed,
a.X_DIABETES_ON_INSULIN_FLG AS TakingInsulin,
S_CONTACT.EMAIL_ADDR AS EmailId,
NULL CTADATE,
NULL RWLDATE,
SUBSTR(S_ADDR_PER.ADDR_LINE_2,1,40) AS Address2
FROM S_CONTACT,
S_CONTACT REFERRER,
S_CONTACT Consultant,
S_CONTACT ProgramDirector,
S_CONTACT CallTaker,
S_CONTACT_X,
(SELECT /*+ parallel full(S_CONTACT_XM) */ PAR_ROW_ID, attrib_05, MAX(X_CONSULTATION_DT) AS X_CONSULTATION_DT FROM S_CONTACT_XM
WHERE (S_CONTACT_XM.last_upd_by < '1-14WD'
or S_CONTACT_XM.last_upd_by > '1-14WD')
AND S_CONTACT_XM.ATTRIB_05 IN (SELECT row_id FROM S_ORG_EXT WHERE S_ORG_EXT.ACCNT_TYPE_CD IN ('Corporate Centre','Franchise Centre') AND LOC IN (SELECT centreid FROM UPL_LIVE_CENTRES WHERE LIVE = 'Y' AND BATCHID = p_batch_id))
GROUP BY PAR_ROW_ID, attrib_05) S_CONTACT_XM,
(SELECT CONTACT_ID, ACCNT_ID,
MAX(LastSupplementPurchaseDate) AS LastSupplementPurchaseDate,
MAX(LastSupplementCodePurchased) AS LastSupplementCodePurchased,
MAX(LastTotalSupplementSupplyCycle) AS LastTotalSupplementSupplyCycle,
MAX(LastAddtlSupplPurchaseDate) AS LastAddtlSupplPurchaseDate,
MAX(LastAddtlSupplCodePurchased) AS LastAddtlSupplCodePurchased,
MAX(LastAddtlSupplSupplyCycle) AS LastAddtlSupplSupplyCycle,
MIN(FirstPurchaseDate) AS FirstPurchaseDate,
MAX(LastPurchaseDate) AS LastPurchaseDate
FROM (
SELECT /*+ parallel full(S_ORDER) full(S_ORDER_XM) */ S_ORDER.CONTACT_ID AS CONTACT_ID,S_ORDER.ACCNT_ID,
NULL AS LastSupplementPurchaseDate,
NULL AS LastSupplementCodePurchased,
NULL AS LastTotalSupplementSupplyCycle,
NULL AS LastAddtlSupplPurchaseDate,
NULL AS LastAddtlSupplCodePurchased,
NULL AS LastAddtlSupplSupplyCycle,
(S_ORDER_XM.X_BUSINESS_DATE) FirstPurchaseDate,
(S_ORDER_XM.X_BUSINESS_DATE) LastPurchaseDate
FROM S_ORDER,S_ORDER_XM
WHERE S_ORDER.ROW_ID = S_ORDER_XM.PAR_ROW_ID
AND S_ORDER.STATUS_CD IN ('Complete', 'Submitted', 'Ready')
AND TRUNC(S_ORDER_XM.X_BUSINESS_DATE - UTC_OFFSET/24) <= (p_business_date)
--GROUP BY S_ORDER.CONTACT_ID
UNION ALL
SELECT /*+ parallel full(S_ORDER) full (S_ORDER_ITEM) */ S_ORDER.CONTACT_ID AS CONTACT_ID,S_ORDER.ACCNT_ID,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '931' THEN S_ORDER.CREATED ELSE NULL END) AS LastSupplementPurchaseDate,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '931' THEN 931 ELSE NULL END) AS LastSupplementCodePurchased,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '931' THEN 7 ELSE NULL END) AS LastTotalSupplementSupplyCycle,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '920' THEN S_ORDER.CREATED ELSE NULL END) AS LastAddtlSupplPurchaseDate,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '920' THEN 920 ELSE NULL END) AS LastAddtlSupplCodePurchased,
(CASE WHEN SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) = '920' THEN 28 ELSE NULL END) AS LastAddtlSupplSupplyCycle,
NULL FirstPurchaseDate,
NULL LastPurchaseDate
FROM S_ORDER,S_ORDER_ITEM, S_PROD_INT
WHERE S_ORDER_ITEM.PROD_ID = S_PROD_INT.ROW_ID
AND S_ORDER.ROW_ID = S_ORDER_ITEM.ORDER_ID
AND S_ORDER_ITEM.qty_req <> 0
AND s_order.created_by <> '1-14WD'
AND (S_ORDER_ITEM.PAR_ORDER_ITEM_ID is null
OR EXISTS (select 1 from S_ORDER_ITEM i2, s_prod_int p
where i2.row_id = S_ORDER_ITEM.PAR_ORDER_ITEM_ID
and i2.prod_id = p.row_id
and SUBSTR(SUBSTR(p.PART_NUM,1,INSTR(p.PART_NUM,'-',1,1)-1),2,4) IN ('931','920')))
AND S_ORDER.status_cd in ('Complete', 'Submitted', 'Ready')
AND SUBSTR(SUBSTR(S_PROD_INT.PART_NUM,1,INSTR(S_PROD_INT.PART_NUM,'-',1,1)-1),2,4) IN ('931','920')
)
GROUP BY CONTACT_ID,ACCNT_ID) O,
S_CONTACT_TNTX,
S_ORG_EXT,
S_ORG_EXT ORGEXT_REF,
S_ADDR_PER,
S_ASSET a,
S_PROD_INT,
S_ASSET a2,
S_ASSET a3,
UPL_CLIENT_TYPES ct,
(select /*+ parallel */ o.contact_id, o.accnt_id
from S_ORDER o, S_ORDER_XM oxm
where o.row_id = oxm.par_row_id
and trunc(oxm.X_BUSINESS_DATE - (UTC_OFFSET/24)) = trunc(p_business_date)
group by o.contact_id, o.accnt_id) oxm2
WHERE S_CONTACT.ROW_ID = S_CONTACT_X.PAR_ROW_ID
AND S_CONTACT_X.ROW_ID = S_CONTACT_XM.PAR_ROW_ID (+)
AND (S_ORG_EXT.ROW_ID = S_CONTACT.PR_DEPT_OU_ID
OR S_ORG_EXT.ROW_ID = oxm2.accnt_id
OR S_ORG_EXT.ROW_ID = S_CONTACT_XM.attrib_05)
AND ORGEXT_REF.ROW_ID(+) = REFERRER.PR_DEPT_OU_ID
AND S_CONTACT.CON_ASST_PER_ID = Consultant.ROW_ID
AND S_ORG_EXT.X_DIRECTOR_ID = ProgramDirector.ROW_ID (+)
AND S_CONTACT.CREATED_BY = CallTaker.ROW_ID
AND S_CONTACT.ROW_ID = a.PR_CON_ID (+)
AND S_CONTACT.PR_PER_ADDR_ID = S_ADDR_PER.ROW_ID (+)
AND S_CONTACT_TNTX.PAR_ROW_ID (+) = S_CONTACT.ROW_ID
AND REFERRER.ROW_ID(+) = S_CONTACT_TNTX.REFERRED_BY_ID
AND a.PROD_ID = S_PROD_INT.ROW_ID (+)
AND O.CONTACT_ID (+) = S_CONTACT.ROW_ID
AND a.STATUS_CD (+) = 'Active'
AND a.TYPE_CD (+) ='Program'
AND S_CONTACT.ROW_ID = a2.PR_CON_ID (+)
AND a2.STATUS_CD (+) = 'Active'
AND a2.TYPE_CD (+) = 'Lifecycle'
AND a3.PR_CON_ID(+) = S_CONTACT.ROW_ID
AND a3.STATUS_CD (+) = 'Active'
AND a3.TYPE_CD (+) = 'HealthSheet'
AND S_CONTACT.X_JC_ACNT_NUM = ct.CLIENT_NUMBER (+)
--AND S_ORG_EXT.LOC NOT LIKE 'F%'
AND S_ORG_EXT.ACCNT_TYPE_CD NOT IN 'Division'
--AND S_ORG_EXT.Loc in (select to_char(centreid) from UPL_LIVE_CENTRES where LIVE = 'Y')
AND (trunc(S_CONTACT.LAST_UPD - (UTC_OFFSET/24)) = trunc(p_business_date) or trunc(S_CONTACT_X.LAST_UPD - (UTC_OFFSET/24)) = trunc(p_business_date) OR (S_CONTACT_XM.X_CONSULTATION_DT = p_business_date) OR oxm2.CONTACT_ID IS NOT NULL)
AND S_CONTACT.last_upd_by not in ('1-14WD')
AND oxm2.CONTACT_ID (+) = o.CONTACT_ID
AND S_ORG_EXT.LOC <> 'CW_846'
AND (a.pr_accnt_id in (select row_id from S_ORG_EXT where S_ORG_EXT.LOC IN (Select CentreID from UPL_Live_Centres where BATCHID = p_batch_id)) or a.pr_accnt_id is null)
AND (a2.pr_accnt_id in (select row_id from S_ORG_EXT where S_ORG_EXT.LOC IN (Select CentreID from UPL_Live_Centres where BATCHID = p_batch_id)) or a2.pr_accnt_id is null)
AND (a3.pr_accnt_id in (select row_id from S_ORG_EXT where S_ORG_EXT.LOC IN (Select CentreID from UPL_Live_Centres where BATCHID = p_batch_id)) or a3.pr_accnt_id is null));
rollback;
END;
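To confirm afterwards whether a given statement was offloaded, V$SQL exposes the cell I/O columns; a sketch (adjust the text filter to match your own statement):

```sql
-- Offload eligibility and interconnect traffic per cursor:
SELECT sql_id,
       io_cell_offload_eligible_bytes,
       io_interconnect_bytes,
       io_cell_offload_returned_bytes
  FROM v$sql
 WHERE sql_text LIKE 'INSERT /*+ APPEND */ INTO UPL_CLIENT_tbl%';
```

If io_cell_offload_eligible_bytes stays at zero for the cursor run from the PL/SQL block, no smart scan took place for it.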
-------------------------------------------------------------------------------------------------- -
How to do performance tuning in an Exadata X4 environment?
Hi, I am pretty new to Exadata X4. We had a database (mixed OLTP/load) created and data loaded.
Now the application is being tested against this database on Exadata.
However, they claimed the test results were slower than in the current production environment, and they sent out the explain plans, etc.
I would like advice from the pros here on what Exadata-specific tuning techniques I can use to find out why this is happening.
Thanks a bunch.
DB version is 11.2.0.4
Hi 9233598 -
Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata-specific features and best practices as applicable. Reference MOS note Oracle Exadata Best Practices (Doc ID 757552.1) for help configuring Exadata according to the Oracle documented best practices.
When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on Exadata than on the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs. the Exadata environment? What are the differences, if any, in the network architecture in between and in the application layer?
You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start to identify what the difference is in the database execution between the environments. Make sure you have the execution plans from both environments to compare. I recommend using the Real-Time SQL Monitor tool - access it through EM GC/CC from the performance page or using the DBMS_SQLTUNE package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, see where the bottlenecks are, and understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload a specific statement is doing (OLTP vs. batch/DW), you may need to tune to encourage Exadata smart scans and use parallelism to help.
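A Real-Time SQL Monitor report can also be pulled from SQL*Plus; a sketch (the sql_id value is a placeholder for one of your captured statements):

```sql
-- Text-format SQL Monitor report for a single statement:
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id       => 'your_sql_id_here',
         type         => 'TEXT',
         report_level => 'ALL') AS report
  FROM dual;
```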
The SGA and PGA need to be sized appropriately. Depending on your environment and workload, and how these were sized previously, your SGA may be too big. SGA sizes often do not need to be as big on Exadata - this is especially true for DW-type workloads; a DW workload should rarely need an SGA over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on; however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large an SGA will discourage direct path reads and thus smart scans - and depending on the statement and the data being returned, it may be better to smart scan than to serve a mix of data from the buffer cache and disk.
You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" indexes on Exadata. Others may be slowing you down, because they prevent the database from taking advantage of the Exadata storage offloading features.
You may also want to evaluate whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note 1500257.1).
I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
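Setting an IORM objective is done on the storage cells; a minimal sketch using CellCLI, run on each cell (for example via dcli), where 'auto' is one of the documented objectives:

```
CellCLI> ALTER IORMPLAN objective = auto
CellCLI> LIST IORMPLAN DETAIL
```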
I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
Hope this helps.
-Kasey -
I would like to ask for any input regarding major differences in the Exadata database, listeners, and processes compared with a regular RAC environment.
I know now that Exadata not only has the SCAN listener but also a lot of other listeners. Can the experts here provide more details?
Thanks
We will use ZFS to do backups. Is it possible for the InfiniBand listener to communicate with the ZFS? Where can I find documents for ZFS?
The SDP (InfiniBand) listener is for client connections to the database on Exadata over the SDP protocol on the InfiniBand network. You should use the InfiniBand network to connect the ZFS to Exadata and mount the ZFS shares on the Exadata compute nodes using DNFS; but this doesn't use the SDP listener, because no database connection originates from the ZFS; you are just backing up your databases to it. See the MOS note "Oracle ZFS Storage: FAQ: Exadata RMAN Backup with The Oracle ZFS Storage Appliance (Doc ID 1354980.1)" for good information and references regarding backing up from Exadata to ZFS.
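With the ZFS share mounted on the compute nodes, the backup itself is plain RMAN-to-disk; a sketch (the mount path is illustrative):

```
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP AS BACKUPSET DATABASE FORMAT '/zfs/backup1/%d_%U';
```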
Plus, how do I know whether we have Exalogic? I don't think we have one, but is there a way to check to be sure?
Exalogic is another of Oracle's engineered systems. It contains integrated compute, network and storage - similar to Exadata - but is used for running application environments, specifically for fusion middleware (e.g. Weblogic) applications, instead of databases. It uses a ZFS storage appliance for the storage, as opposed to the storage cells on Exadata, and for virtualized environments uses the Exalogic Elastic Cloud software. You would know if you have one.
So basically, after the Oracle engineer installed OneCommand and created a sample database, as Oracle DBAs we can use dbca to create a database just like in a regular RAC environment? There is nothing specific from an Exadata perspective?
Yes... Oracle on Exadata is still the same RDBMS - same Oracle Enterprise edition with the RAC option. The Exadata difference comes with the hardware integration and the storage cell software... the database software is the same.
Another question: do I have to configure Exadata-specific parameters in order for features such as smart scan, storage indexes, and compression to work?
Some Exadata features are mostly a "black box" and work without any configuration; others may take some configuration or tuning to take advantage of. For example, storage indexes are created dynamically on the storage cells; there is not much you can do to control them. For smart scans you need full table/index scans with direct path reads, so you need to tune for direct path reads. HCC requires setting up your tables/partitions to compress at one of the HCC compression levels and using direct path loads.
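To illustrate the HCC point, a sketch (the table names are made up; QUERY HIGH is one of the HCC compression levels):

```sql
-- Create an HCC-compressed table via a direct-path CTAS:
CREATE TABLE sales_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- Later loads must also be direct-path to keep new data HCC-compressed:
INSERT /*+ APPEND */ INTO sales_hcc SELECT * FROM sales_stage;
COMMIT;
```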
RMAN clone from Windoze to Exadata Linux
Has anyone migrated from MS Windows to Exadata Linux using RMAN convert? I know they are both little-endian, so it should be possible. Oracle ACS has published this note, but it's not Exadata-specific:
http://blogs.oracle.com/AlejandroVargas/resource/Database-Migration-Windows-Linux-with-RMAN.pdf
Your feedback is welcome.
--Rich
Edited by: Rich Headrick on Oct 31, 2011 12:16 PM
Thanks Dan,
I didn't think that would matter either, but I just thought I'd put it out there. I found a few notes on the same subject: 1079563.1 and 881421.1.
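For a same-endian move like Windows to Linux, individual datafiles can be converted with RMAN; a sketch (paths are illustrative; the platform name must match v$transportable_platform):

```
RMAN> CONVERT DATAFILE '/stage/users01.dbf'
        FROM PLATFORM 'Microsoft Windows x86 64-bit'
        FORMAT '+DATA/%U';
```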
Cheers,
Rich -
Parallel execution tests on an Exadata X2-2 quarter rack
imageinfo
Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
Image version: 11.2.3.1.1.120607
Image activated: 2012-08-14 19:16:01 -0400
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> alter system flush buffer_cache;
System altered.
SQL> //
System altered.
SQL> set timing on;
SQL> select a.name,b.value
2 from v$sysstat a , v$mystat b
where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits'); 3 4 5 6 7
NAME VALUE
physical read total bytes 114688
physical write total bytes 0
cell physical IO interconnect bytes 114688
cell physical IO bytes pushed back due to excessive CPU on cell 0
cell physical IO bytes saved during optimized file creation 0
cell physical IO bytes saved during optimized RMAN file restore 0
cell physical IO bytes eligible for predicate offload 0
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan 0
cell IO uncompressed bytes 0
cell flash cache read hits 0
11 rows selected.
Elapsed: 00:00:00.01
SQL> select count(*) from larget;
COUNT(*)
242778112
Elapsed: 00:00:16.83
SQL> set timing on;
SQL> select a.name,b.value
2 from v$sysstat a , v$mystat b
where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits'); 3 4 5 6 7
NAME VALUE
physical read total bytes 2.6262E+10
physical write total bytes 0
cell physical IO interconnect bytes 3018259592
cell physical IO bytes pushed back due to excessive CPU on cell 0
cell physical IO bytes saved during optimized file creation 0
cell physical IO bytes saved during optimized RMAN file restore 0
cell physical IO bytes eligible for predicate offload 2.6262E+10
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan 3018112136
cell IO uncompressed bytes 2.6285E+10
cell flash cache read hits 18639
11 rows selected.
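From the numbers above: roughly 26.26 GB were eligible for predicate offload, but only about 3.02 GB came back over the interconnect from smart scan, so around 88.5% of the eligible data was filtered on the storage cells. As a quick check (the literals are the rounded values from the output above):

```sql
-- Approximate offload efficiency for this run:
SELECT ROUND(100 * (1 - 3018112136 / 2.6262e10), 1) AS pct_filtered_on_cells
  FROM dual;
```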
Elapsed: 00:00:00.01
SQL> select count(*) from larget;
COUNT(*)
242778112
SQL> select /*+ parallel */ count(*) from larget;
COUNT(*)
242778112
Elapsed: 00:00:02.71
SQL> set timing on;
SQL> select a.name,b.value
2 from v$sysstat a , v$mystat b
where
a.statistic#=b.statistic#
and (a.name in ('physical read total bytes','physical write total bytes',
'cell IO uncompressed bytes') or a.name like 'cell phy%'
or a.name like '%flash cache read hits'); 3 4 5 6 7
NAME VALUE
physical read total bytes 7.8787E+10
physical write total bytes 0
cell physical IO interconnect bytes 9054570192
cell physical IO bytes pushed back due to excessive CPU on cell 0
cell physical IO bytes saved during optimized file creation 0
cell physical IO bytes saved during optimized RMAN file restore 0
cell physical IO bytes eligible for predicate offload 7.8787E+10
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan 9054340816
cell IO uncompressed bytes 7.8855E+10
cell flash cache read hits 73375
11 rows selected.
Elapsed: 00:00:00.00
SQL> select /*+ parallel(larget 128) */ count(*) from larget;
COUNT(*)
242778112
Elapsed: 00:00:09.02
SQL> SQL> select /*+ parallel(larget 64) */ count(*) from larget;
COUNT(*)
242778112
Elapsed: 00:00:02.33
Use _px_trace to see the parallel execution details:
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug tracefile_name;
/u01/app/oracle/diag/rdbms/dbm/dbm1/trace/dbm1_ora_10230.trc
SQL> alter session set "_px_trace"="compilation","execution","messaging";
Session altered.
Elapsed: 00:00:00.00
SQL> select /*+ parallel */ count(*) from larget;
COUNT(*)
242778112
Elapsed: 00:00:02.63
kxfpsori
Sorted: 1(2400:1085669688) 2(2400:1085669760)
kxfplist
Getting instance info for default group
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@22076:kxfpGetTotalCpuCount(): kxfplist returned status: 2
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@17529:kxfpiinfo(): inst [cpus:mxslv]
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@17533:kxfpiinfo(): 1 [24 :128 ]
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@17533:kxfpiinfo(): 2 [24 :128 ]
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@22112:kxfpGetTotalCpuCount(): instance_id: 1, cpu_count: 24
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@22112:kxfpGetTotalCpuCount(): instance_id: 2, cpu_count: 24
2012-09-02 10:28:29.527933*:PX_Messaging:kxfp.c@18160:kxfpunit(): var=0 limit=768 use_aff=0 aff_num=0 unit=24
kxfpgsg
getting 1 sets of 12 threads, client parallel query execution flg=0x230
Height=0, Affinity List Size=0, inst_total=2, coord=1
Insts 1 2
Threads 0 12
kxfpg1sg
q:0x457f2e610 req_threads:12 nthreads:12 unit:24 #inst:2 normal
jStart:0 jEnd:128 jIncr:1 isGV:0 i:0 instno:1 kxfpilthno:0
jStart:0 jEnd:128 jIncr:1 isGV:0 i:1 instno:2 kxfpilthno:12
kxfpg1srv
trying to get slave P000 on instance 2 for q:0x457f2e610
slave P000 is remote (inst:2)
Slave P000 acquired dp:(nil)
Got It. 1 so far.
kxfpg1srv
trying to get slave P001 on instance 2 for q:0x457f2e610
slave P001 is remote (inst:2)
Slave P001 acquired dp:(nil)
Got It. 2 so far.
kxfpg1srv
trying to get slave P002 on instance 2 for q:0x457f2e610
slave P002 is remote (inst:2)
kxfpg1sg
got 12 servers (sync), errors=0x0 returning
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10138:kxfpgsg(): Acquired 12 slaves on 1 instances avg height:12 #set:1 qser:2835457
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P000 inst 2 spid 10369
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P001 inst 2 spid 10372
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P002 inst 2 spid 10374
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P003 inst 2 spid 10376
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P004 inst 2 spid 10378
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P005 inst 2 spid 10380
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P006 inst 2 spid 10382
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P007 inst 2 spid 10384
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P008 inst 2 spid 10386
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P009 inst 2 spid 10388
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P010 inst 2 spid 10393
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10148:kxfpgsg(): P011 inst 2 spid 10395
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10164:kxfpgsg(): Instance(servers):
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10167:kxfpgsg(): inst:1 #slvs:0
2012-09-02 10:28:29.532850*:PX_Messaging:kxfp.c@10167:kxfpgsg(): inst:2 #slvs:12
kxfpValidateSlaveGroup
qcq:0x457f2e610 flg:30
kxfxcp1
Sending parse to nprocs:12 slave_set:1
kxfxcPutSession
We got 1 set of 12 threads, i.e. a degree of parallelism of 12. Oracle's automatically chosen DOP on Exadata looks quite reasonable here.
Forgot to list the parallel parameters:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> show parameter parallel
NAME TYPE VALUE
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 128
parallel_min_percent integer 0
parallel_min_servers integer 32
parallel_min_time_threshold string AUTO
parallel_server boolean TRUE
parallel_server_instances integer 2
parallel_servers_target integer 384
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0 -
New Exam - (1z0-027) - Oracle Exadata Database Machine X3 Administrator
Hi Friends,
Exadata Database Machine Overview
Identify the benefits of using Database Machine for different application classes
Describe the integration of the Database Machine with Oracle Database Clusterware and ASM
Describe Exadata Storage Server and the different Database Machine configurations
Describe the key capacity and performance specifications for Database Machine
Describe the key benefits associated with Database Machine
Exadata Database Machine Architecture
Describe the Database Machine network architecture
Describe the Database Machine software architecture
Describe the Exadata Storage Server storage entities and their relationships
Describe how multiple Database Machines can be interconnected
Describe site planning requirements for Database Machine
Describe network requirements for Database Machine
Key Capabilities of Exadata Database Machine
Describe the key capabilities of Exadata Database Machine
Describe the Exadata Smart Scan capabilities
Describe the capabilities of hybrid columnar compression
Describe the capabilities and uses of the Smart Flash Cache
Describe the capabilities of the Smart Flash Log
Describe the purpose and benefits of Storage Indexes
Describe the capabilities and uses of Exadata Secure Erase
Exadata Database Machine Initial Configuration
Describe the installation and configuration process for Database Machine
Describe the default configuration for Database Machine
Describe supported and unsupported customizations for Database Machine
Describe database machine operating system options and configurations
Configure Exadata Storage Server
Configure Exadata software
Create and configure ASM disk groups using Exadata
Use the CellCLI Exadata administration tool
Describe Exadata Storage Server security
I/O Resource Management
Use Exadata Storage Server I/O Resource Management to manage workloads within a database and across multiple databases
Configure database resource management plans
Configure category plans
Configure inter-database plans
Describe and configure the I/O resource manager objectives
Monitor I/O using I/O Metrics
Recommendations for Optimizing Database Performance
Optimize database performance in conjunction with Exadata Database Machine
Monitor and configure table indexes, accounting for the presence of Exadata
Using Smart Scan
Describe Smart Scan and the query processing that can be offloaded to Exadata Storage Server
Describe the requirements for Smart Scan
Describe the circumstances that prevent using Smart Scan
Identify Smart Scan in SQL execution plan
Use database statistics and wait events to confirm how queries are processed
Consolidation Options and Recommendations
Describe the options for consolidating multiple databases on Database Machine
Describe the benefits and costs associated with different options
Identify the most appropriate approach for consolidation in different circumstances
Migrating Databases to Exadata Database Machine
Describe the steps to migrate your database to Database Machine
Explain the main approaches for migrating your database to Database Machine
Identify the most appropriate approach for migration in different circumstances
Identify the most appropriate storage configuration for different circumstances
Bulk Data Loading using Oracle DBFS
Use Oracle DBFS for bulk data loading into Database Machine
Configure the Database File System (DBFS) feature for staging input data files
Use external tables based on input data files stored in DBFS to perform high-performance data loads
Exadata Database Machine Platform Monitoring
Describe the purpose and uses of SNMP for the Database Machine
Describe the purpose and uses of IPMI for the Database Machine
Describe the purpose and uses of ILOM for the Database Machine
Configuring Enterprise Manager Grid Control 11g to Monitor Exadata Database Machine
Describe the Enterprise Manager Grid Control architecture as it specifically applies to Exadata Database Machine
Describe the placement of agents, plug-ins and targets
Describe the recommended configuration for high availability
Describe the plug-ins associated with Exadata Database Machine and how they are configured
Use setupem.sh
Configure a dashboard for Exadata Database Machine
Monitoring Exadata Storage Servers
Describe Exadata Storage Server metrics, alerts and active requests
Identify the recommended focus areas for Exadata Storage Server monitoring
Monitor the recommended Exadata Storage Server focus areas
Monitoring Exadata Database Machine Database Servers
Describe the monitoring recommendations for Exadata Database Machine database servers
Monitoring the InfiniBand Network
Monitor InfiniBand switches
Monitor InfiniBand switch ports
Monitor InfiniBand ports on the database servers
Monitor the InfiniBand subnet master location
Monitor the InfiniBand network topology
Monitoring other Exadata Database Machine Components
Monitor Exadata Database Machine components: Cisco Catalyst Ethernet Switch, Sun Power Distribution Units, Avocent MergePoint Unity KVM Switch
Monitoring Tools
Use monitoring tools: Exachk, DiagTools, ADRCI, Imageinfo and Imagehistory, OSWatcher
Backup and Recovery
Describe how RMAN backups are optimized using Exadata Storage Server
Describe the recommended approaches for disk-based and tape-based backups of databases on Database Machine
Describe the recommended best practices for backup and recovery on Database Machine
Perform backup and recovery
Connect a media server to the Database Machine InfiniBand network
Database Machine Maintenance tasks
Power Database Machine on and off
Safely shut down a single Exadata Storage Server
Replace a damaged physical disk on a cell
Replace a damaged flash card on a cell
Move all disks from one cell to another
Use the Exadata cell software rescue procedure
Patching Exadata Database Machine
Describe how software is maintained on different Database Machine components
Locate recommended patches for Database Machine
Describe the recommended patching process for Database Machine
Describe the characteristics of an effective test system
Database Machine Automated Support Ecosystem
Describe the Auto Service Request (ASR) function and how it relates to Exadata Database Machine
Describe the implementation requirements for ASR
Describe the ASR configuration process
Describe Oracle Configuration Manager (OCM) and how it relates to Exadata Database Machine
Quality of Service Management
Describe the purpose of Oracle Database Quality of Service (QoS) Management
Describe the benefits of using Oracle Database QoS Management
Describe the components of Oracle Database QoS Management
Describe the operations of Oracle Database QoS Management
Thanks
LaserSoft
Here's the source document from Oracle Education with the exam details: http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&get_params=p_exam_id:1Z0-027&p_org_id=&lang=
This is the non-partner equivalent of "Oracle 11g Essentials" (1Z0-536 http://www.oracle.com/partners/en/knowledge-zone/database/1z1-536-exam-page-169969.html) that has existed under various names since 2010, but with additional content relevant to new features like flash logging, QoS management and ASR.
Marc -
When executing 'duplicate target database for standby from active' the system tablespace/datafile (datafile 1) is not cloned. All other datafiles clone successfully. The RMAN process aborts with the following errors while attempting to clone the system tablespace/datafile.
ORA-19558: error de-allocating device
ORA-19557: device error, device type: DISK, device name:
ORA-17627: ORA-01041: internal error. hostdef extension doesn't exist
ORA-17627: ORA-01041: internal error. hostdef extension doesn't exist
ORA-03135: connection lost contact
Here are the details:
Primary is 11.2.0.2 RAC database on an Exadata platform
Standby is 11.2.0.2 Single Instance database (same patch level as primary) on a Red Hat Linux box
This is an ASM to ASM duplication.
This is not unique to this database. We tried another database and got the same behavior - all datafiles clone successfully with the exception of the system tablespace/datafile.
We have traced the RMAN execution and it seems to fail when it is trying to assign a NEWNAME to the system tablespace/datafile.
We even issued an explicit SET NEWNAME command but RMAN ignored it.
We also shut down the primary and started it up in mount mode, thinking that something had a hold of the system tablespace/datafile.
We also opened up the network firewall to allow permit any,any traffic.
We increased the max_server_processes
and added TCP.NODELAY=yes to the sqlnet.ora file.
There seems to be some artifact present in our primary system tablespace/datafile that is preventing it from being cloned.
checked all alert files grid, asm, and dbhome - no abnormal messages.
We are in the process of restoring the database from a backup but we would prefer to get this working using the 'Active Database' methodology.
I successfully created the standby database using RMAN backup and recovery.
I started the managed recovery. Archive logs are being sent from the primary to the standby ( I can see them in ASM), but the standby is not applying them.
I get the following messages in the standby alert log...
Fetching gap sequence in thread 2, gap sequence 154158-154257
Tue Nov 26 16:19:58 2013
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Nov 26 16:20:01 2013
Fetching gap sequence in thread 2, gap sequence 154158-154257
Tue Nov 26 16:20:11 2013
Fetching gap sequence in thread 2, gap sequence 154158-154257
Tue Nov 26 16:20:22 2013
Fetching gap sequence in thread 2, gap sequence 154158-154257
Tue Nov 26 16:20:32 2013
Fetching gap sequence in thread 2, gap sequence 154158-154257
I don't see any MRP processes:
SELECT process, status, thread#, sequence#, block#, blocks
FROM v$managed_standby;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CLOSING 2 154363 1 132
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 145418 121 1
RFS IDLE 0 0 0 0
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
RFS IDLE 0 0 0 0
12 rows selected.
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
THREAD# SEQUENCE# APPLIED
2 154356 NO
2 154357 NO
1 145411 NO
2 154358 NO
2 154360 NO
2 154361 NO
1 145414 NO
1 145415 NO
2 154362 NO
2 154363 NO
1 145416 NO
11 rows selected.
I do have the archive logs that cover sequences 154158-154257
Crosschecked 38 objects
Crosschecked 62 objects
Finished implicit crosscheck backup at 26-NOV-13
Starting implicit crosscheck copy at 26-NOV-13
using channel ORA_DISK_1
using channel ORA_DISK_2
Crosschecked 2 objects
archived log file name=+RECO_XORA/nmuasb00/archivelog/2013_11_26/thread_2_seq_154377.344.832521989 RECID=29 STAMP=832521990
validation succeeded for archived log
archived log file name=+RECO_XORA/nmuasb00/archivelog/2013_11_26/thread_2_seq_154378.346.832521991 RECID=31 STAMP=832521993
Crosschecked 31 objects -
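Since the v$managed_standby output above shows no MRP process, managed recovery does not appear to be running on the standby. A minimal sketch of starting it (real-time apply with USING CURRENT LOGFILE assumes standby redo logs are configured):

```sql
-- On the standby instance: start managed recovery in the background.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Confirm an MRP0 process appears and begins resolving the gap.
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```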
How to restore a Rman backup from Tape
Hi,
We need to restore the database backup taken through RMAN on 24-JUNE-2013 from tape as this backup is not currently available on disk.
Can you please help me on the following,
1) how i can check if the same backup on 24-JUNE-2013 is available on Tape.
2) If yes how i can restore to Disk.
Because we need to create a new Clone instance from the backup taken on 24-JUNE-2013.
We are having 12.1.3 Application on Exalogic and 11.2.0.3 Database on Exadata machine.
Our support is highly appreciated.
Thanks
user11969666 wrote:
Hi,
We need to restore the database backup taken through RMAN on 24-JUNE-2013 from tape as this backup is not currently available on disk.
Can you please help me on the following,
1) how i can check if the same backup on 24-JUNE-2013 is available on Tape.
2) If yes how i can restore to Disk.
Because we need to create a new Clone instance from the backup taken on 24-JUNE-2013.
We are having 12.1.3 Application on Exalogic and 11.2.0.3 Database on Exadata machine.
Our support is highly appreciated.
Thanks
Your questions are answered in the Backup and Recovery User's Guide -- Contents
Thanks,
Hussein -
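As a hedged sketch of the two steps asked about above (channel configuration and the exact date syntax depend on your media management layer and NLS settings):

```sql
-- RMAN, connected to the target (and recovery catalog, if used):
--
-- 1) Check whether the 24-JUN-2013 backup exists on tape.
-- RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE sbt;
-- RMAN> LIST BACKUP OF DATABASE COMPLETED BETWEEN '23-JUN-2013' AND '25-JUN-2013';
--
-- 2) Restore/clone from tape; for a new clone instance a DUPLICATE is typical:
-- RMAN> RUN {
--   ALLOCATE AUXILIARY CHANNEL t1 DEVICE TYPE sbt;
--   DUPLICATE TARGET DATABASE TO clonedb
--     UNTIL TIME "TO_DATE('24-JUN-2013 23:59','DD-MON-YYYY HH24:MI')";
-- }
```

The database name `clonedb` and the dates here are illustrative only.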
RMAN cumulative and differential level 1 backups taking too much time
hi,
I am attempting to hot backup my 600 GB database to tape using NMO 5 and EMC Networker 7.6.
My Networker server is on Windows Server 2003.
My Oracle database is on RHEL 4.5, architecture ia64.
Oracle DB version 10.2.0.4.0
Using ASM
Using EMC storage as database storage
Using tape backup media type LTO-Ultrium-5
The number of channels used is the same (4) for both level 0 and level 1.
There are 60 datafiles for the database.
I am attempting a hot incremental backup.
The incremental level 0 takes 90 minutes to complete,
BUT the level 1 backups [both differential and cumulative] take almost the same time as the level 0 backup, almost 80 minutes.
However, the backup set size for level 0 is almost 500 GB, while the size of any level 1 backup is not more than 200 MB.
I am confused as to why both level 0 and level 1 backups take the same span of time.
Please help to reduce the time to complete the level 1 backups.
Thanks in advance.
RMAN incremental level 1 and up will have to verify every block in the data files to identify whether any modifications have occurred. The time it takes to complete the incremental backup will depend on how much changed. Are you using the latest patches? There are known bugs that can cause performance problems with RMAN backup and recovery. Otherwise, check the Oracle documentation to troubleshoot RMAN.
Block change tracking, as already mentioned, was introduced in 10g and can greatly speed up your incremental level 1 and up backups.
From what I understand:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mydir/rman_change_track.f';
As soon as block change tracking is enabled, Oracle starts to record every block that is updated. The information is stored in a bitmap inside the BCT file. Every incremental backup causes a bitmap switch in the BCT file.
If there exists a previous bitmap beside the current bitmap, then an incremental level 1 backup will only backup the blocks according to the current bitmap. Incremental level 1 backups are differential backups by default. If there is no previous bitmap, the RMAN backup will perform a conventional scan of the database as usual.
The bitmap logic applies also to cumulative level 1 incremental backups, which will use all the bitmaps recorded since the last bitmap switch from a level 0 incremental backup. Due to the limit of 8 bitmaps, a cumulative incremental level 1 backup will have to perform a conventional scan of the database, if you make a level 0 database backup followed by 7 differential incremental backups. -
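The behavior described above can be sketched with the standard commands (the BCT file path below is illustrative, not a required location):

```sql
-- Enable block change tracking; the path is an example.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/rman_change_track.f';

-- Confirm it is enabled.
SELECT status, filename FROM v$block_change_tracking;

-- RMAN: a level 0 seeds the bitmaps; subsequent level 1 backups read
-- only the blocks the bitmap flags as changed.
-- RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
-- RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;            -- differential (default)
-- RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; -- cumulative
```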
Opinion on non-clustered file system to offload backups
3 node 11.2 rac using ASM on SAN.
All of the RMAN backups go to +FRA. Any node can initiate RMAN and create backups.
I want to offload the backups from +FRA to a filesystem. Our current backup system can only read cooked file systems.
This LUN also comes from the same SAN. It will have a file system, ext4. It will be visible to all 3 nodes and will be mounted to ONLY the first node. First node will be the one to copy from asm to this filesystem.
If first node is out of commission, I can mount the backup lun to one of the other remaining nodes.
Does this sound like a decent plan or should I go with a clustered file system?
Thanks for your opinions!
Have you looked at using ACFS? Create a very big disk group, then create an ACFS volume and finally an ACFS filesystem. Create a path on all nodes on which the ACFS filesystem will be mounted.
Example: Using asmca do the following
mkdir /d01/FRA on all systems
DGFRA (4disks at 500G)
Create ACFS volume 1.8T
Create ACFS filesystem mount point of /d01/FRA
set your db_recovery_file_dest=/d01/FRA scope=both sid='*'
Now any node can backup to this FRA location AND any node can copy files to tape or where ever...
This is "supported" as of 11.2.0.3 ( I have used it on 11.2.0.1 and 11.2.0.2 for testing) -
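The steps above could look roughly like this (the disk group, volume, device, and mount names are the poster's examples or illustrative):

```sql
-- On the ASM instance: create an ADVM volume inside the DGFRA disk group.
ALTER DISKGROUP DGFRA ADD VOLUME fravol SIZE 1800G;

-- As root on each node (the device name is illustrative; check /dev/asm):
--   mkfs -t acfs /dev/asm/fravol-123
--   mkdir -p /d01/FRA
--   mount -t acfs /dev/asm/fravol-123 /d01/FRA

-- Then point the FRA at the ACFS mount on all instances:
ALTER SYSTEM SET db_recovery_file_dest='/d01/FRA' SCOPE=BOTH SID='*';
```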
Standby DB running on different hardware if production on Exadata v2
We were looking to buy an Exadata V2 Database Machine, but two things are stopping us.
As per Oracle:
1) It is impossible to connect existing Fiber storage to Exadata V2 to offload archived data from primary storage.
2) A standby database can't be built on different hardware (same OS and same DB version) other than an Exadata V2 Database Machine.
We, probably, could survive with #1 limitation, but buying second Exadata V2 database machine is too much, especially for DR side.
Does anyone have experience with these problems, or know of some doc to answer these 2 questions?
Thanks in advance.
Yes, it is true if you are not using the Exadata hybrid columnar compression feature.
"With Oracle Database 11g Release 2, the Exadata Storage Servers in the Sun Oracle Database Machine also enable new hybrid columnar compression technology that provides up to a 10 times compression ratio, with corresponding improvements in query performance. And, for pure historical data, a new archival level of hybrid columnar compression can be used that provides up to 50 times compression ratios."
When you enable this feature, you can't build standby database on different hardware. It won't work.
I am still researching what else could be a stopper or I could say, which other Exadata V2/11gR2 features I should avoid to have standby database working on non Exadata V2 hardware? -
Dears,
Hi,
sorry if i can not write English well,
My database size (datafiles) is about 1.4 GB.
I take an RMAN incremental level 0 backup and the size of the output file is about 1.1 GB.
After this I insert one record into one table (to test a change)
and take an incremental level 1 backup; the size of the output file is about 1.1 GB (same as level 0).
But I think the level 1 should contain just the changed blocks and should be smaller than level 0.
RMAN> backup incremental level 0 database FORMAT 'd:\rm\L0_%d_t%t_s%s_p%p';
RMAN> backup incremental level 1 database FORMAT 'd:\rm\L1_%d_t%t_s%s_p%p';
why size for level 0 and 1 is same?
Best regards and thanks
Hassan
In theory, the L1 backup should be smaller, as it should capture only changed blocks (it is not a requirement that you create a BCT file -- a BCT file helps speed up the incremental backup; its presence or absence has no relation to the size of the L1 incremental backup).
You need to determine if there were other, multiple, changes to the database that also occurred between the two backups.
Hemant K Chitale -
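One way to confirm what each backup actually read versus wrote (assuming the standard v$backup_datafile view) is:

```sql
-- blocks_read = blocks scanned; blocks = blocks written to the backup piece.
-- Without BCT, a level 1 still reads every block (blocks_read close to
-- datafile_blocks) even when it writes very few, which explains similar runtimes.
SELECT incremental_level, datafile_blocks, blocks_read, blocks, completion_time
FROM   v$backup_datafile
ORDER  BY completion_time;
```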
Exadata + Physical Standby (non-exadata) + Backups
Some folks are asking me for my opinion on a backup strategy - I don't think it's possible/feasible, but I will put it out here for comments.
Proposal:
Create a physical Data Guard standby database (non-Exadata) on NetApp storage for an Exadata (full rack V2).
Create rman backups of the physical dg using netapp snapshot technologies.
I can't see how this would work. Sure, we can take the snaps on the PDG and restore the PDG, but:
a) what if we lose a datafile on the exadata?
It's ASM - what RMAN command would you run to restore the datafile? A simple restore datafile? I would think it would be more complicated.
b) what if we lost all of the exadata - corruption, physical,logical whatever..
rman restore database wouldn't work would it? Would it know to restore from the backups done on the pdg?
Either way - it seems that you are inevitably restoring upwards of ~100 TB from the PDG back to the Exadata via some pipe (NIC/IB).
I see one comment that says you can use pdg backups if they went to tape, but not if they went to disk. I assume the netapp snaps would be considered disk backups.
Thoughts?
Objectives -- quick restore times (< 8 hrs), not relying on tapes, full backups, reduced impact on the source DB.
Size being upwards of 50-100Tb
The vendor has outlined the product, which "seems" like a great thing - but will it work with Exadata?
See this thread on the RAC-RMAN with snapshot on NetApp -
Exadata Architecture & I/O monitor tool - ExadataViewer
Hi Experts,
During my work I have gained some knowledge and experience with Exadata. And I found that we really need a tool to monitor Exadata performance and workflow, such as smart scan offload processing statistics and the I/O dataflow path in Exadata. So I developed ExadataViewer in my free time after work.
ExadataViewer is an Exadata performance monitoring tool. It can help you understand the Exadata architecture and observe smart scan offload statistics and physical I/O dataflow in a graphical view.
I hope this little tool useful to you. You can download it from http://www.exadataviewer.com
Screen Snapshot:
http://www.exadataviewer.com/wp-content/uploads/2013/05/exadata_smart_scan_demo.png
Demo Movie:
http://www.exadataviewer.com/?dl_name=exadata_demo_movie(www.exadataviewer.com).wmv
Download ExadataViewer:
http://www.exadataviewer.com/index.php/category/download/
Thank you for your time and efforts, Qing, but we (Oracle) already provide a very detailed reporting tool in the form of Enterprise Manager 12c. It can be tuned and refined to provide a basically limitless range of statistics; hopefully you'll be able to download it and use it.
Regards,
Dan