MDM 7.1 on HP-UX Archiving on a daily basis
Hi all,
We are migrating an MDM landscape from Windows to HP-UX.
We used to run CLIX .bat files scheduled on the Windows server to perform a daily archive of our repositories.
As CLIX is only available for the Windows platform (you can check the MDMCX7100*.ZIP from http://service.sap.com/swdc), we need to find a way to perform repository archiving natively in our HP-UX environment.
Is there an HP-UX tool, or are there MDM commands, that we can schedule on our HP-UX server?
(We were advised to run CLIX commands from our workstations, but that is not really an option we can consider.)
Thanks in advance
Yours faithfully,
Cristian Marin
In case anyone finds this helpful:
CLIX is installed as a native tool of the MDM server, so all we had to do was rebuild our CLIX commands as an (sh) shell script.
Below is the code we modified to produce the final .a2a file.
#!/bin/sh
# The resulting archive file is written to
# /usr/sap/MDP/MDSXX/mdm/archives
FECHA=`date +%F_%T`
clix cpyArchive "<mdm-host>:<mdm-password>" "<REP_NAME>:<SID>:Oracle:system:<oracle-system-password>" -A <REP_NAME>_$FECHA.a2a
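To schedule this natively on HP-UX we use cron. The sketch below only demonstrates the timestamped-name construction from the script above, as a dry run; the repository name and the clix arguments are placeholders, not real values:

```shell
#!/bin/sh
# Build the timestamped archive name exactly as the script above does,
# then print the clix invocation instead of executing it (dry run).
REP_NAME="MYREPO"               # placeholder repository name
FECHA=`date +%F_%T`             # e.g. 2011-06-30_02:00:01
ARCHIVE="${REP_NAME}_${FECHA}.a2a"
echo "clix cpyArchive ... -A $ARCHIVE"
```

A hypothetical crontab entry to run the real script daily at 02:00 would be `0 2 * * * /usr/sap/MDP/MDSXX/scripts/daily_archive.sh` (the path is an assumption), installed with `crontab -e` under the MDM administration user.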
Good luck!
Cristian Marin
Similar Messages
-
ERROR:Could not read archive file - during Processing Base System Part 2
Hi there,
What I have:
* I have a problem with the installation of Mac OS X Panther v10.3 on an iMac.
* I have 3 original CDs (black with silver X).
What I did:
* I made a new single partition and formatted the disk as Mac OS Extended (Journaled) using Disk Utility from the first CD, as normal, without additional options (when I tried to format the disk with the "zero all data" and 8-way random write options, the computer froze after 3 days).
* I verified the disk with this utility - that was OK - HFS volume verified.
* Then I restarted the computer and ran the installation from this first CD.
What happened:
* The installation did not finish, because there were some problems during the installation process.
* I tried to customize the installation to just the essentials (without other language support, printers, etc.), but the problem was still the same.
Installation log:
After I chose the installation type, there was a first error, but it did not look important: root: bootstraplookup(): unknown error code
Checking your installation disk - OK
Preparing disk - OK
Preparing base 1 and 2 - OK
Other preparing.. - OK
Processing Base System Part 1 - OK
Processing Base System Part 2
==
ERROR:Could not read archive file. - pax: Invalid header, starting valid header search.
ERROR:Could not read archive file. - pax: Invalid header, starting valid header search.
last message repeated 2 times
ERROR:Could not write file. - pax: WARNING! These patterns were not matched:
Some files for Essentials may not have been written correctly.
root: Installer[108] Exception raised during posting of notification. Ignored. exception: Some files for Essentials may not have been written correctly. (code 1)
Install Failed
Error detected
Name: NSGenericException
Reason: Some files for Essentials may not have been written correctly. (code 1)
==
It seems like a problem with reading some data from the CD, but the disk check during the installation was OK. Maybe it is a problem with the CD-ROM drive? Or a problem with the data on the CD - I mean a bad archive file? But it is an original CD. What do you think?
Thank you!
Tomas
On THIS Page, locate your iMac model.
From the Documents column, click on the appropriate number link.
Using the info in the document that opens, locate the serial number of your iMac.
On THIS Page, in the text field for Search Tech Specs, enter that serial number.
Click on the model of iMac that appears.
Post a link to the page that opens, or post the info requested below.
Exactly which model iMac is it?
What is the Processor speed?
What size is the Hard Drive?
How much Memory is installed?
What type of internal Optical Drive does it have?
Which version of OS, was the original Installed Software?
ali b -
Error logging using DBMS_ERRLOG package
Hi All,
We have the following tables, which are growing day by day and causing performance problems.
There are total 25 tables (1 parent and 24 child tables, with different structures).
Here I give some samples, NOT the actual table structures.
Actually, we don't require all the data for our regular activities.
So we thought of moving part of the data into other (archive) tables on a daily basis.
Using SOME criteria, we find the records eligible to be moved into the archive tables.
All child records follow the parent.
Original Tables
==================
create table customer (c_id number(5), c_name varchar2(10),c_address varchar2(10));
create table orders (o_id number(5),c_id number(5),o_info clob);
create table personal_info (p_id number(5),c_id number(5), age number(3), e_mail varchar2(25), zip_code varchar2(10));
Archive Tables
==============
create table customer_arch (c_id number(5), c_name varchar2(10),c_address varchar2(10));
create table orders_arch (o_id number(5),c_id number(5),o_info varchar2(100));
create table personal_info_arch (p_id number(5),c_id number(5), age number(3), e_mail varchar2(25), zip_code varchar2(10));
Temp table
==========
create table C_temp (rnum number(5), ids number(5));
Sample Code
============
PROCEDURE payment_arch
IS
l_range_records NUMBER (4) := 2000;
l_total_count NUMBER(10) := 0;
l_prev_count NUMBER(10) := 0;
l_next_count NUMBER(10) := 0;
BEGIN
--Finding eligible records to be moved into Archive tables.
INSERT INTO C_TEMP
SELECT ROWNUM,c_id FROM customer;
SELECT NVL(MAX(rnum),0) INTO l_total_count FROM C_TEMP;
IF l_total_count > 0 -- Start Count check
THEN
LOOP -- Insert Single payments
IF ((l_total_count - l_prev_count) >= l_range_records )
THEN
l_next_count := l_prev_count + l_range_records;
ELSE
l_next_count := l_total_count;
END IF;
INSERT INTO customer_ARCH
SELECT * FROM customer a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
INSERT INTO orders_ARCH
SELECT * FROM orders a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
INSERT INTO personal_info_ARCH
SELECT * FROM personal_info a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
-- Delete archived records: children first, then the parent
DELETE FROM orders a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
DELETE FROM personal_info a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
DELETE FROM customer a
WHERE a.c_id IN (SELECT b.ids
FROM C_TEMP b WHERE b.rnum > l_prev_count AND b.rnum <= l_next_count);
COMMIT;
IF l_next_count = l_total_count
THEN
EXIT;
else
l_prev_count := l_next_count;
END IF;
END LOOP; -- Insert Single payments
END IF; -- End Count check
EXCEPTION
WHEN NO_DATA_FOUND
THEN
NULL;
WHEN OTHERS
THEN
RAISE_APPLICATION_ERROR(-20002,'payment_arch: ' || SQLCODE ||': ' || SQLERRM);
END Payment_Arch;
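The windowing logic in the loop above can be illustrated in isolation. A toy shell sketch (nothing Oracle-specific; the figures are just the sample volumes from this post) that steps through 1..total in fixed-size, non-overlapping ranges:

```shell
#!/bin/sh
# Walk rows 1..total in chunks of range_records, visiting each
# (low, high) window exactly once.
total=25000
range_records=2000
prev=0
while [ "$prev" -lt "$total" ]; do
    next=$((prev + range_records))
    if [ "$next" -gt "$total" ]; then
        next=$total
    fi
    echo "archiving rows $((prev + 1)) to $next"
    prev=$next
done
```

Taking the low bound as prev + 1 rather than prev is what keeps consecutive batches from overlapping at the boundary row.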
In production, we may need to archive 25,000 parent records and 25000*4*3 child records per day.
Now the problem is:
If by any chance a record fails, we want to know the exact record (just the "c_id") for the particular table where the error was raised, and the error message.
We thought of using the DBMS_ERRLOG package, but we CANNOT log errors from different tables into a SINGLE error log table.
In the above case it requires 3 different error log tables, and each logs all the columns from the original table.
That is an unnecessary burden on the database.
We would be very glad, if anyone can help us with some good thought.
Thanks in advance.
srinivas
duplicate post
Insufficient privilege error when executing DBMS_ERRLOG through PLSQL -
Hi,
We have our DB, 10g R2 on Red Hat Linux, named DBP. In order to be safe, not lose data, and be always available, we envision the following scenario, and we ask your opinion please. Many thanks.
1-Create another DBP on another Linux machine.
2-Have a shared file system for the two linux machines.
3-Full RMAN hot backup of the original DBP two times a day. The backup files would be stored on the shared file system.
4-In case of a crash we will restore DBP on the second Linux machine.
What are the advantages and disadvantages ?
Many thanks before.
Hi dear,
There can be multiple options for u, according to ur conveniance like
1) U can take cold backup every week and take only backup of archive file on daily basis till the backup date.......But if u can only recover ur database till the point u have the archive logs.
2) U can have incremental backup policy through which u can have hot as well as cold backup's .........But the problem is same like in step no 1.
3) U can create a dataguard(Standby Database) on your test server......This will be safest and foremost a best idea i feel because u will loss a very little dataonly which is present in ur log files but whole of ur database will be saved...
Regards
Amit Raghuvanshi -
Hi experts
We are archiving our product repository on a daily basis.
And we want to create a backup repository for it on a daily basis as well.
How can we automate this task? (We are using version 7.1.)
Thanks in Advance
Dhanish Joseph
Hi Dhanish,
This can be done using MDM CLIX commands in the form of batch files, which you need to schedule with the help of a scheduler.
Please refer the below wiki for complete details:
[Automatic Backup (Archive) of MDM Repository|http://wiki.sdn.sap.com/wiki/display/SAPMDM/AutomaticBackupofMDMRepository%28UsingMDMCLIXinWindowsOS%29]
Similarly this can be achieved for MDM 7.1.
Regards,
Mandeep Saini -
Max number of records in MDM workflow
Hi All
Need urgent recommendations.
We have a scenario where we need to launch a workflow upon import of records. The challenge is that the source file contains 80k records and it is always a FULL load (on a daily basis) in MDM. Do we have any limitation in MDM workflow on the max number of records? Will there be significant performance issues if we have a workflow with such a huge number of records in MDM?
Please share your inputs.
Thanks - Ravi
Hi Ravi,
Yes, it can cause performance overhead, and you will also have to optimise the MDIS parameters for this.
Regarding the WF, I think normally it is 100 records per WF. I think you can set a particular threshold of records after which the WF will autolaunch.
It is difficult to say what the optimum number of records per WF should be, so I would suggest a test run with 100/1000 records per WF. The Import Manager guide says there are several performance implications of importing records into a WF, so it is better to try different ranges.
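To put the 100-records-per-WF rule of thumb against the 80k full load in the question, a quick back-of-envelope sketch (the figures come from this thread; MDM does the actual batching, this only does the arithmetic):

```shell
#!/bin/sh
# Rough arithmetic: number of workflow instances spawned when a full
# load is split into fixed-size batches (figures from the thread above).
records=80000
per_wf=100
wfs=$(( (records + per_wf - 1) / per_wf ))   # round up
echo "$records records at $per_wf per workflow => $wfs workflow instances"
```

Eight hundred autolaunched workflows per daily load is exactly the kind of volume worth measuring in a test run at different batch sizes.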
Thanks,
Ravi -
Archive log mode in 3 node rac database
Sir, I am using Oracle 10.2.0.1 and have created a 3-node cluster setup; the OS I am using is Linux 4.
I am very confused about setting up this cluster in archive log mode, because I don't understand why we needed to set the cluster_database parameter to false to put the cluster in archive log mode.
I have searched a lot of documents, but they all say we need not set this parameter to false from 10.2 onwards. But what was the actual concept behind setting this parameter to false in earlier releases, and why do we no longer need to set it? Please help me.
I know how to set up archive log mode, but this parameter creates confusion.
Thanks, sir, in advance.
I also don't know all the details about this, but I'll try to explain what I know. Setting cluster_database to false in a RAC environment is usually done when you need to mount the database from one instance exclusively. This is still needed when you upgrade the catalog, for example during a release update (from 10.2.0.1 to 10.2.0.4, say). As for switching archivelog mode, someone at Oracle must have discovered that it is sufficient for the database to be mounted by only one instance, so you can actually save a step.
As I (and probably most of you) don't switch archiving on or off on a daily basis, I don't worry about this a lot; I simply turn it on when I create a new database and then leave it on forever.
Bjoern -
Exchange 2013 Archive mailbox best practise
Current scenario:
Migrating to Exchange 2013 CU3 from lotus Domino
In Lotus Domino the customer has huge archive files (NSF file sizes are around 30 GB; likewise, users have multiple archive files of the same size).
The requirement is that all these files need to be migrated to Exchange 2013 CU3, which we are taking care of by using a third-party tool.
My concern is Exchange 2013's support for huge mailbox sizes: what maximum sizes are supported for the online mailbox and the archive mailbox?
Can I assign multiple archive mailboxes to users?
We have a separate Exchange 2013 archive server in place.
We would like to know the best practice/guideline for archive mailbox/live mailbox size.
I referred to the below link:
http://blogs.technet.com/b/ashwinexchange/archive/2012/12/16/major-changes-with-exchange-server-2013-part-1.aspx
The key decision point is that the content in the primary mailbox is synchronized with the client when in cached mode, while the content in the archive is not. So I'd want to keep the primary mailbox populated with the content the user needs on a daily basis,
and put the rest in the archive. Unfortunately, that answer is not a number, and it isn't the same for all users.
Each user can have zero or one archive mailboxes, not multiple.
Ed Crowley MVP "There are seldom good technological solutions to behavioral problems." -
Hello,
We are in the process of archiving our SAP PI system.
The configuration of system is as below:
SAP Netweaver 7, SAP PI 7.0, Oracle 10g.
I checked table SXMSPMAST and found that there are lots of entries in state ARCH.
SQL> select count(*) from sappip.SXMSPMAST where ITFACTION = 'ARCH' and MSGSTATE = '003';
COUNT(*)
11772771
Hence I scheduled the "ARV_BC_XMB_WRP*" job. This job is completing successfully, and the corresponding "ARV_BC_XMB_DEL*" jobs are also completing (I checked the output of each deletion job; it shows more than 10K messages deleted in each job).
I think this means that messages are being archived and then deleted from the database.
But my concerns are:
1) On a daily basis there are approx. 35 deletion jobs running, but none of the jobs' output includes table SXMSCLUP, and in our case this is the biggest table in the database.
E.g., the output of one deletion job is as below:
Production Mode: Statistics for Deleted Data Objects
Archive File Key 019170-006BC_XMB
Number of Deleted Data Objects 10.832
Deleted Database Space in MB 8,938
- Tables 7,531
- Indexes 1,407
Type No. Description
SXMSPEMAS 2.860 Integration Engine:
Enhanced Message Queue (Master)
SXMSPERROR 0 XML Message Broker:
Message Queue (Incorrect Entries)
SXMSPMAST 2.860 Integration Engine:
Message Queue (Master)
SXMSPVERS 8.580 Integration Engine:
Message Version
All the output of deletion job ARV_BC_XMB_DEL includes only these table.
So will these jobs archive/delete the data from SXMSCLUP, or do we need to schedule some other jobs for this?
2) Even though several archiving jobs are running (the archiving job is scheduled to run once daily), there is no significant difference in the count of "no. of messages for archiving" in RSXMB_SHOW_STATUS, considering the message and adapter status (the count decreases by 30-40K per day). If we keep proceeding at this speed, it will take a huge amount of time to archive/delete the messages.
Is there any workaround for this?
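That time estimate is easy to sanity-check with quick arithmetic; the sketch below takes the backlog from the "Number of Messages to Be Archived in Client" figure in the status output below and the upper end of the observed 30-40K/day deletion rate:

```shell
#!/bin/sh
# Back-of-envelope estimate: days needed to clear the archiving backlog
# at the observed deletion rate (both figures are quoted from this post).
backlog=11882234   # "Number of Messages to Be Archived in Client"
per_day=40000      # upper end of the observed 30-40K/day decrease
days=$(( (backlog + per_day - 1) / per_day ))   # round up
echo "At ${per_day}/day, clearing ${backlog} messages takes about ${days} days"
```

Roughly 300 days at the best observed rate, which supports the concern that the deletion throughput, not the archiving itself, is the bottleneck here.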
Output of RSXMB_SHOW_STATUS
Overview
========
Time Stamp Not Included
Number of Messages in Database:
12,929,872
Number of Messages in Client:
12,929,879
Number of Messages for Reorganization in Client:
12,929,905
Number of Messages to Be Archived in Client:
11,882,234
Number of Logically Deleted Messages in Client:
0
Number of Archived and Logically Deleted Messages in Client:
0
Message Status
==============
Message Status: 000 Number: 0
Message Status: 001 Number: 39
Message Status: 002 Number: 0
Message Status: 003 Number: 11,813,699
Message Status: 004 Number: 0
Message Status: 005 Number: 0
Message Status: 006 Number: 0
Message Status: 007 Number: 0
Message Status: 008 Number: 0
Message Status: 009 Number: 7
Message Status: 010 Number: 1,210
Message Status: 011 Number: 0
Message Status: 012 Number: 179,966
Message Status: 013 Number: 0
Message Status: 014 Number: 935,820
Message Status: 015 Number: 0
Message Status: 016 Number: 4
Message Status: 017 Number: 0
Message Status: 018 Number: 1
Message Status: 019 Number: 0
Message Status: 020 Number: 0
Message Status: 021 Number: 0
Message Status: 022 Number: 0
Message Status: 023 Number: 0
Message Status: 024 Number: 0
Message Status: 025 Number: 0
Message Status: 026 Number: 0
Message Status: 027 Number: 0
Message Status: 028 Number: 0
Message Status: 029 Number: 0
Message Status: 030 Number: 0
Message Status: 031 Number: 0
Adapter Status
==============
14.05.2014 Program RSXMB_SHOW_STATUS 2
Adapter Status of Messages with Message Status: 003
Adapter Status: 000 Number: 193,071
Adapter Status: 001 Number: 185,086
Adapter Status: 002 Number: 0
Adapter Status: 003 Number: 0
Adapter Status: 004 Number: 0
Adapter Status: 005 Number: 0
Adapter Status: 006 Number: 11,435,594
Adapter Status: 007 Number: 0
Adapter Status: 008 Number: 0
Adapter Status: 009 Number: 0
Adapter Status: 010 Number: 0
Please suggest.
Hi Mark,
I am checking the messages on the ABAP stack. The messages are missing, and I am sure, as I compared the messages of two consecutive dates; I am also sure those two messages are missing. The messages are not found in the table either.
I will check the notes and proceed if I get to understand something.
Meanwhile, two interfaces are actually involved here, Sales and Loyalty. I will try to explain the flow.
The messages are in the flow as shown above. All the other messages are available; only these two, highlighted in red, are missing.
As logging_sync=1, they are persisted, but when the delete job runs they are lost, while all the others remain available according to the retention period (40 days).
Do reply if anything strikes you.
Thanks! -
Hi ,
We are taking archive log backups using RMAN, but not all the archived logs are deleted after the backup; only a few are deleted.
Please help on this.
The OS is Windows and the database version is 10g.
Below is the script for taking the backup:
run
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
allocate channel CH01 type 'sbt_tape';
send 'NB_ORA_POLICY=N10-NDLORADB02-DB-Oracle-billhost-Archive-Full-Backup-Daily,NB_ORA_SERV=CNDBBKPRAPZT33,NB_ORA_CLIENT=NDLORADB02,NB_ORA_SCHED=Default-Application-Backup';
backup filesperset 20 format 'ARC%S_%R.%T' (archivelog like 'T:\ARCHIVE\ARC%');
backup format 'cf_%d_%U_%t' current controlfile tag='backup_controlfile';
delete noprompt archivelog all backed up 1 times to sbt_tape ;
release channel CH01;
Regards
Gaurav
No evidence supporting your assertion exists in your post.
These two commands
backup filesperset 20 format 'ARC%S_%R.%T' (archivelog like 'T:\ARCHIVE\ARC%');
delete noprompt archivelog all backed up 1 times to sbt_tape;
are potentially contradictory, but as you also don't provide the log_archive_format parameter, who can tell?
This forum is about providing help. In order to be able to help people asking help need to provide sufficient information.
You don't ask for help, you ask to solve a riddle, without providing a webcam.
Sybrand Bakker
Senior Oracle DBA -
Is it possible to use Archive Redo log file
Hi friends,
My database is running in archive log mode. I took a cold backup on Sunday, but I take an archive log backup daily in the evening.
On Wednesday my database crashed, which means I lost all the control files, redo log files, datafiles, etc.
I have the archived log backups up to Tuesday night, and the other files (control files, datafiles, etc.) from Sunday.
1) Is it possible to recover the database up to Tuesday? If yes, HOW do I use the archive log files?
(The SCN of the control files and datafiles is the same, so if we use the RECOVER DATABASE command, Oracle says that media recovery is not required.)
We don't have the current control file; we lost it in the media crash.
Dear friend,
In this scenario you lost the control file.
1> If you have an old copy of the control file which reflects the current structure of the database, and you have all the archive files, then you can recover the database with point-in-time recovery (using a backup controlfile).
suresh
Help with locating and moving archives
Hi all, not sure where to post this but here goes.
We are currently planning to centralize all archives on one server, as they are quite disparate at the moment. I have a program that finds the locations of the archives from GroupWise, but the issue is that some are on the network and some are on local drives. Of course, d:\archives means nothing to most people, as the PC name is required to locate it. Is there a program that can automate the location and relocation of archives? I'd hate to have to locate over 300 archives manually.
Thanks
On 7/11/2011 6:06 PM, tugless wrote:
>
> Hi all, not sure where to post this but here goes.
>
> We are currently planning to centralize all archives to one server as
> they are quite disparate at the moment. I have a program that finds the
> locations of the archives from groupwise but the issue is some are on
> the network and some are on the local drives. Of course d:\archives
> means nothing to most people as the pc name is required to locate. Is
> there a program that can automate the location and relocation of
> archives??? Id hate to have to locate over 300 archives manually
>
> Thanks
>
>
You could write one, but realize it wouldn't work well, for 2 reasons.
1. It would have to run while the user was logged on, on their workstation, or it might not have rights to the right mapped drive.
2. Consolidation is not always wise. In fact, it is a recipe for DISASTER if you are not careful. You must create a structure like this:
BASE\PO\<archive>
in short, a base directory, followed by a subdirectory PER PO.
Why? Remember the archive stores stuff by FID, not GUID. FIDs are ONLY unique per PO, and clash fairly often in practice otherwise. You do not want that - you will lose mail that way!
Create Archive splits one video into 3 parts -- why?
I'm finding that Final Cut Pro X can import video from my tape-based Canon GL2 with no problem for the most part. But when I choose Create Archive instead of importing, for a one-hour miniDV tape, the Archive typically shows up as two or three separate clips. This happens even with a one-hour video I just shot last week, as one continuous video. When I import the archive, I'm stuck with three separate clips that, when assembled in a project timeline, have an audible glitch where the breaks are. Anybody know what might be causing this, and what I can do about it (other than not use the Create Archive function)?
BenB wrote:
… Well, about that FAT32 issue, as a retired IT engineer I can factually say no, it doesn't. First, a Mac cannot write files to a FAT32 Windows-formatted drive. Read, yes. Write, no.
you should tell Apple, e.g. in this article
http://support.apple.com/kb/HT3764?viewlocale=en_US&locale=en_US
FAT32 has been read/write for years.
I have been using it on a daily basis.
Or, ever noticed the formatting option in Mac OS X's Disk Utility?
Ever used an out-of-the-box USB drive? Any manufacturer, any size? In 99% of cases FAT32 formatted, fully functional with Mac OS X.
Or did you mean NTFS?
… but as a 'retired IT engineer' …
…. sorry, but I'm outta here. -
Archiving Issue!!!! Need an urgent reply
My DB size is 62 GB, but the problem is that it creates 80 to 90 archive logs on a daily basis, and the total archive size for one day is 30 GB. Sometimes the archive size goes to 49 or 50 GB.
The thing I can't understand is that there is not that much DML & DDL performed in this DB, so how does it create so much archiving?
Please help me regarding this issue... Thanks in advance...
Any chance you know the version number (3+ decimal places) of your database?
If it is missing some patches ... apply them and ask again.
What is in your alert log?
PS: Your question has nothing to do with SQL and PL/SQL and likely should have been posted in the General Database forum.
You might want to consider, in the future, posting where your topic is ON topic. -
Hi,
I have the following archive backup and maintenance scripts configured in my cron, which run on a daily basis. However, of late I faced an issue wherein the archive destination got filled up. On checking, I found that there were archives older than 10 days on the system, and I had to manually cross-check them and get them deleted, whereas I expect no more than 3-day-old archives to be available.
But I fail to understand why those archives piled up on the system when they should have been deleted anyway by the maintenance script.
The only thing I remember is that I had a network issue and the crons had not run for 5 days, after which they started to work again.
Can someone tell me if there is any flaw in the script or configuration? I appreciate your help...
My Backup Script:
run
allocate channel n1t11 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
allocate channel n2t22 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
allocate channel n3t33 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
allocate channel n4t44 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
sql 'alter system archive log current';
backup archivelog all format 'archbkp_%d_set%s_piece%p_%T_%U';
+++++++++++++++++++++++++++++++++++++
My Maintenance Script:
run
allocate channel n1t1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
delete noprompt archivelog until time 'SYSDATE-2' backed up 1 times to device type sbt_tape;
crosscheck backup of database completed before 'SYSDATE-10';
crosscheck backup of ARCHIVELOG ALL completed before 'SYSDATE-2';
crosscheck backup of controlfile completed before 'SYSDATE-10';
crosscheck backup of spfile completed before 'SYSDATE-10';
delete noprompt expired backup of database completed before 'SYSDATE-10' device type sbt_tape;
delete noprompt expired backup of archivelog all completed before 'SYSDATE-2' device type sbt_tape;
delete noprompt expired backup of controlfile;
delete noprompt expired backup of spfile;
delete noprompt obsolete recovery window of 10 days device type 'SBT_TAPE';
release channel n1t1;
+++++++++++++++++++++++++++++++++++++
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/u/app/admin/ut.../tdpo.opt)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u/app/oracle/ut.../snapcf_name.f'; # default
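A silent cron outage like the one described can be caught with an independent watchdog on the archive destination. A minimal sketch, assuming a hypothetical destination path and a '*.arc' filename suffix (both site-specific):

```shell
#!/bin/sh
# Warn if any archived log older than 3 days is still in the destination,
# e.g. because the cron-driven maintenance script has silently stopped.
# ARCH_DEST and the '*.arc' pattern are assumptions; adjust to your site.
ARCH_DEST=${ARCH_DEST:-/u01/oradata/arch}
old_count=`find "$ARCH_DEST" -name '*.arc' -mtime +3 2>/dev/null | wc -l`
if [ "$old_count" -gt 0 ]; then
    echo "WARNING: $old_count archived logs older than 3 days in $ARCH_DEST"
fi
```

Scheduled from cron alongside the maintenance script, this reports a pile-up even when the RMAN jobs themselves stop running.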