Do we need to backup OS (file systems) on Exadata storage cells?
We have gotten conflicting messages about whether we need to or not. I'd like to hear your opinions.
Thanks!
Hi,
The answer is no.
There is no need to backup the OS of the storage cell.
In the worst case, a complete storage cell needs to be replaced. A field engineer will open the broken storage cell, take out the onboard USB drive, and put it inside the new storage cell.
The system will boot from this USB drive, and you can choose to apply the 'old' configuration to the new cell.
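If you still want a reference copy of the cell's logical configuration (not an OS backup), the main objects can be listed in a CellCLI session on the cell and the output saved elsewhere; a sketch only:

```
CellCLI> LIST CELL DETAIL
CellCLI> LIST CELLDISK DETAIL
CellCLI> LIST GRIDDISK DETAIL
```

Keeping this output off the cell makes it easy to verify the configuration after a cell replacement.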
Regards,
Tycho
Similar Messages
-
Hi, I am using iCloud on my iPhone but am unable to find it on my MacBook Pro. It's not under System Preferences as the setup instructions show. Can I download it again? Or something. I just need to back up some files on my computer and am unable to.
The minimum requirement for iCloud is Lion 10.7.5 (Mavericks preferred): the iCloud preference pane does not appear on earlier systems. The MobileMe pane appears on Lion and earlier but is now non-functional; you cannot now open or access a MobileMe account.
To make use of iCloud you will have to upgrade your Mac to Lion or Mavericks, provided it meets the requirements.
The requirements for Lion are:
Mac computer with an Intel Core 2 Duo, Core i3, Core i5, Core i7, or Xeon processor
2GB of memory
OS X v10.6.6 or later (v10.6.8 recommended)
7GB of available space
Lion is available in the Online Apple Store ($19.99). Mountain Lion (10.8.x) is also available there at the same price, but there seems little point, as the system requirements are the same as for Mavericks (10.9.x), which is free - unless you need to run specific software which will run on Mountain Lion only.
The requirements for Mountain Lion and Mavericks are:
OS X v10.6.8 or later
2GB of memory
8GB of available space
and the supported models are:
iMac (Mid 2007 or newer)
MacBook (Late 2008 Aluminum, or Early 2009 or newer)
MacBook Pro (Mid/Late 2007 or newer)
Xserve (Early 2009)
MacBook Air (Late 2008 or newer)
Mac mini (Early 2009 or newer)
Mac Pro (Early 2008 or newer)
It is available from the Mac App Store (in Applications).
You should be aware that PPC programs (such as AppleWorks) will not run on Lion or above; and some other applications may not be compatible - there is a useful compatibility checklist at http://roaringapps.com/apps:table -
When installing a system upgrade, do i need to backup my files, photos, etc. first?
No, you don't need to, but it is highly recommended. Upgrades can fail, the computer may fail, there may be a power outage, etc. Never be without a current backup, and never upgrade until after you have made a backup.
I recommend making a bootable backup by cloning:
Clone using Restore Option of Disk Utility
1. Open Disk Utility in the Utilities folder.
2. Select the destination volume from the left side list.
3. Click on the Restore tab in the DU main window.
4. Select the destination volume from the left side list and drag it to the Destination entry field.
5. Select the source volume from the left side list and drag it to the Source entry field.
6. Double-check you got it right, then click on the Restore button.
Destination means the external backup drive. Source means the internal startup drive. -
10G Database install on Linux 64bit File system Vs. Automatic Storage
Hi,
I'm installing Oracle 10G on a Dell 2950 server with an Intel 64 bit processor. The operating system is Red Hat Linux 4. It has a hardware RAID.
I've downloaded the Install Guide but I have a question about installing.
I'm confused about the File system vs. Automatic Storage management option. I will be installing on the local system, I will not be using a NAS or SAN.
Under Database Storage Options, the guide says that "Oracle recommends that the file system you choose be separate from the file systems used by the operating system or the Oracle software.".
1. Do I need to do that since I'm already using hardware RAID?
2. Which way is recommended / what do most people do: file system or Automatic Storage Management?
3. For Automatic Storage Management I read that I'd have to create an ASM disk group that can consist of "a multiple disk device such as a RAID storage array or logical volume. However, in most cases disk groups consist of one or more individual physical disks". Do I need to reconfigure my partitions?
I just need some input on what I should do since this is my first time installing Oracle on Linux.
Thank you.
Besides the documentation, there's a step-by-step guide:
http://www.oracle.com/technology/pub/articles/smiley_10gdb_install.html#asm
Many questions should be answered here.
Werner -
Need to backup the file before processing in XI
Hi all
I need to back up the incoming file and then process it in XI, i.e. the sender adapter picks up the file, backs it up, and then sends it on to the Integration Engine.
I am aware that there is an option in the sender file communication channel, "Run Operating System Command Before Message Processing". We have to set a batch file in it.
Could you please explain the exact commands for this case?
Hi Rajesh,
>>>>backup
For your information, you can archive the file; details of the modes are below.
Processing Mode as
A) Archive
Files that have been successfully processed are moved to an archive directory.
1) To add a time stamp to a file name, select the Add Time Stamp indicator.
The time stamp has the format yyyyMMdd-HHmmss-SSS_. The time stamp ensures that the archived files are not overwritten, and it enables you to sort them according to the time that they were received.
2)Under Archive Directory, enter the name of the archive directory.
If you want to archive the files on the FTP server, set the Archive Files on FTP Server indicator. If you do not set the indicator, the files are archived in the Adapter Engine file system.
B) Delete
Successfully processed files are deleted.
C) Test
Files are not processed.
Only use this mode to test the configurations of the file/FTP adapter or the Integration Engine/PCK. It is not suitable for productive operation.
D)Set to Read-Only
Successfully processed files are given this attribute. Only writable files are processed. This selection is only available for the File System (NFS) transport protocol.
>>>>>Run Operating System Command Before/After Message Processing
Command Line
An operating system command specified here is executed before or after the message processing of a file that was found in a run. The default value is an empty character string (no command).
When the operating system command is called, the file name currently being processed can be specified with the following placeholders:
1) %f (file name)
2) %F (absolute file name including path)
There are also two related parameters:
Timeout (secs)
This specifies the maximum runtime of the executing program in seconds. When this time interval is exceeded, the adapter continues processing; the executing program continues to run in the background.
Terminate Program After Timeout
Set this indicator if the adapter is to terminate the executing program when the timeout is exceeded.
The adapter writes the output (STDOUT and STDERR) for the operating system command in the system trace.
Message processing is independent of any errors that occur during the execution of a configured operating system command.
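For the backup requirement above, the command line can simply copy the file using the placeholders. A minimal sketch - the backup directory is an assumption, not something from the thread:

```shell
# Value for "Run Operating System Command Before Message Processing":
#   cp %F /data/xi/backup/%f
# XI substitutes %F (absolute path) and %f (file name) before executing.
# Simulated locally with concrete values:
mkdir -p /tmp/xi_in /tmp/xi_backup
echo "<order/>" > /tmp/xi_in/order.xml
F=/tmp/xi_in/order.xml   # what %F would expand to
f=order.xml              # what %f would expand to
cp "$F" "/tmp/xi_backup/$f"
ls /tmp/xi_backup        # order.xml
```

Because the command runs before message processing, the copy exists even if later processing fails.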
for more information
http://help.sap.com/saphelp_nw04/helpdata/en/e1/a63d40c6d98437e10000000a155106/frameset.htm
PS: reward points if useful.
Regards,
Sumit Gupta -
After Restoring/Backup of File System XI Java Instances are not up!
Hello all,
We are facing a problem restoring the SAP XI system: after taking a backup of the system, the <b>java instances</b> in the SAP XI system are not starting again. ABAP connections are fine.
Can anyone provide suggestions/solutions in order to restore the XI System back.
The system information is as follows.
System Component: SAP NetWeaver 2004s, <b>PI 7.0</b>
Operating System: SunOS 5.9, SunOS 5.10
Database: ORACLE 9.2.0.
Regards,
Ketan Patel
If it's REALLY a PI 7.0 (SAP_BASIS 700 and WebAS Java 7.00) then it's not compatible. WebAS 7.00 needs Oracle 10g (http://service.sap.com/pam)
Also see
http://service.sap.com/nw2004s
--> Availability
--> SAP NetWeaver 7.0 (2004s) PAM
If you open the Powerpoint, you will see that Oracle 9 is not listed. I wonder, how you got that installed.
Nevertheless, if you recover a Java instance, both filesystem and database content (of the Java schema) must be in sync, meaning you need to restore a database (schema) backup and a filesystem backup that were made at the same time.
Check Java Backup and Restore :
Restoring the System
1. Shut down the system.
2. Install a new AS Java system using SAPInst, or restore the file system from the offline backups that you created.
3. Import the database backup using the relevant tools provided by the database vendor.
4. Overwrite the SAP system directory /usr/sap/.
5. Start the system (see Starting and Stopping SAP NetWeaver ABAP and Java.)
The J2EE Engine is restored with the last backup.
Markus -
ASM RMAN backup to File System
Hi all,
I have an RMAN backup (datafiles and controlfile) which was taken on an ASM instance (not RAC), Oracle 11.2.0.2, on a Linux server. Now I want to restore the backup into a new database on Windows/Linux using regular file system storage (a single-instance RDBMS) instead of ASM.
Is this possible?
Can I restore an ASM RMAN backup to a file system storage mechanism on a new server?
Kindly clarify my question.
Thanks in Advance..
Nonuday
Nonuday wrote:
Hi Levi,
Thanks for your invaluable script and blog.
Can you clarify this query for me?
I have an RMAN backup taken from ASM; it is a database and control file backup which contains datafiles and controlfiles.
Now I need to restore this on my system, and here I don't use ASM or archivelog mode; I use a single-instance database in noarchivelog mode.
I have restored the control file from the RMAN controfile backup.
Before restoring the control file I checked the original pfile of the backup database, which had parameters like
'db_create_file_dest',
'db_create_online_log_dest',
'db_recovery_file_dest_size',
'db_recovery_dest',
'log_archive_dest'.
Since I am going to create the DB in noarchivelog mode, I didn't use any of the above parameters when creating the database.
Now my question is:
If I restore the database, the datafiles will get restored, and after renaming all the logfiles the database will be opened.
I want to know whether this method is correct or wrong, and whether the database will work as it did previously. Or do I need to create db_recovery_file_dest and the other parameters for this database as well?
About parameters:
All these parameters should reflect your current environment; any reference to the old environment must be modified.
About Filesystem used:
It does not matter what filesystem you are using: the files (datafile/redolog/controlfile/archivelog/backuppiece) are created in a binary format which depends on the platform only. So the same binary file (e.g. a datafile) has the same format and content on raw devices, ASM, ext3, ext2, and so on. To the database it's only a location where the files are stored; the files are the same. ASM has a different architecture from a regular filesystem and needs to be managed in a different manner (i.e. using RMAN).
About Database:
Since your database files are the same even on a different filesystem, what you need to do is rename your datafiles/redo files in the controlfile during the restore; the redo files will be recreated.
So it does not matter whether your database is in noarchivelog or archivelog mode: the way you do a restore on ASM is the same way you restore on a regular filesystem (it's only about renaming the database files in the controlfile during the restore).
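Put together, a restore to a regular filesystem looks roughly like this - a sketch only; the datafile numbers and target paths are placeholders, not values from this thread:

```
RUN {
  # tell RMAN where each file should land on the new filesystem
  SET NEWNAME FOR DATAFILE 1 TO '/u01/app/oracle/oradata/db/system01.dbf';
  SET NEWNAME FOR DATAFILE 2 TO '/u01/app/oracle/oradata/db/sysaux01.dbf';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;   # rename the restored files in the controlfile
  RECOVER DATABASE;
}
```

One SET NEWNAME per datafile is needed; the script generator below produces exactly these commands from v$datafile.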
On the blog, the post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving files from one filesystem to another, but you can modify the script for restore purposes;
## set newname tells RMAN where the file will be restored and keeps the file locations in a memory buffer
RMAN> set newname for datafile 1 to <location>;
### switch takes the list of files from the memory buffer (RMAN) and renames the already-restored files in the controlfile.
RMAN> switch datafile/tempfile all;
With the database mounted, use the script below:
I just commented three lines that are unnecessary in your case.
SET serveroutput ON;
DECLARE
vcount NUMBER:=0;
vfname VARCHAR2(1024);
CURSOR df
IS
SELECT file#,
rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
FROM v$datafile;
CURSOR tp
IS
SELECT file#,
rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
FROM v$tempfile;
BEGIN
-- dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
FOR dfrec IN df
LOOP
IF dfrec.name != vfname THEN
vcount :=1;
vfname := dfrec.name;
ELSE
vcount := vcount+1;
vfname:= dfrec.name;
END IF;
-- dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format "'||dfrec.name ||vcount||'.dbf";'); ### commented
END LOOP;
dbms_output.put_line('run');
dbms_output.put_line('{');
FOR dfrec IN df
LOOP
IF dfrec.name != vfname THEN
vcount :=1;
vfname := dfrec.name;
ELSE
vcount := vcount+1;
vfname:= dfrec.name;
END IF;
dbms_output.put_line('set newname for datafile ' || dfrec.file# ||' to '''||dfrec.name ||vcount||'.dbf'' ;');
END LOOP;
FOR tprec IN tp
LOOP
IF tprec.name != vfname THEN
vcount :=1;
vfname := tprec.name;
ELSE
vcount := vcount+1;
vfname:= tprec.name;
END IF;
dbms_output.put_line('set newname for tempfile ' || tprec.file# ||' to '''||tprec.name ||vcount||'.dbf'' ;');
END LOOP;
dbms_output.put_line('restore database;');
dbms_output.put_line('switch tempfile all;');
dbms_output.put_line('switch datafile all;');
dbms_output.put_line('recover database;');
dbms_output.put_line('}');
--- dbms_output.put_line('alter database open;'); ### commented because you need to rename your redologs in the controlfile before opening the database
dbms_output.put_line('exit');
END;
/
After the restore you must rename your redo logs in the controlfile from the old location to the new location:
e.g
## use this query to get current location of redolog
SQL> select group#,member from v$logfile order by 1;
## and change from <old_location> to <new_location>
SQL > ALTER DATABASE
RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
TO '/u01/app/oracle/oradata/logs/log3a.rdo';
When you have changed all the redo logs in the controlfile, issue the command below:
SQL> alter database open resetlogs;
PS: Always track the database in real time using the database's alert log file.
HTH,
Levi Pereira -
Do I need to backup my files when upgrading from tiger to leopard?
Not sure if I need to back up all my files before upgrading from Tiger to Leopard (to use a Mac Box Set). I have bought a Seagate 1TB FreeAgent GoFlex Home hard drive to back up to. I was told I wouldn't need a WiFi router, but have now discovered I would need dual Ethernet ports to be connected to the internet and the hard drive at the same time, and my Mac mini only has one port - so it's either the internet or the hard drive. So I ask: do I really need to back up before upgrading? Or can you get dual Ethernet ports?
Only if your data has any value to you.
If everything works perfectly, you should lose nothing.
If anything goes wrong, it is possible to lose data.
Personally I don't want to trust my data to everything working perfectly so I back it up.
As to the question of dual Ethernet ports: you don't actually need them. If you create a LAN with a router, the multiple ports on the router will let you connect to both the internet and the disk at once.
Allam -
Backup to file system and sbt_tape at the same time?
hello!
is it possible to do an RMAN backup to disk and sbt_tape at the same time (just one backup, not two)? The reason is that I want to copy the RMAN backup files from the local file system via robocopy to another server (in a different building), so that I can use them for fast restores in case of a crash.
If not, what is the recommended strategy in this case? Backups should be available on tape AND on the file system of a different machine.
environment: oracle 10g, windows server 2008, commvault for backup on tape
thanks for your advice.
best regards,
Christian
If you manually copy backupsets out of the FRA to another location that is still accessible as "local disks", you can use the CATALOG command in RMAN to "catalog" the copies.
Thus, if you copy or move files from the FRA to /mybackups/MYDB, you would use
"CATALOG START WITH /mybackups/MYDB" or individually "CATALOG BACKUPPIECE /mybackups/MYDB/backuppiece_1" etc
Once you have moved or removed the backupsets out of the FRA, you must use
"CROSSCHECK BACKUP"
and
"DELETE EXPIRED BACKUP"
to update the RMAN repository. Otherwise, Oracle will continue to "account" for the disk space consumed by the BackupPieces in V$FLASH_RECOVERY_AREA_USAGE and will soon "run out of space".
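The cataloging and cleanup steps above can be sketched as an RMAN session (the /mybackups/MYDB path is the example location used above):

```
CATALOG START WITH '/mybackups/MYDB';   # register the relocated backuppieces
CROSSCHECK BACKUP;                      # mark pieces no longer in the FRA as EXPIRED
DELETE EXPIRED BACKUP;                  # drop their entries from the repository
```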
You would do the same for archivelogs (CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL) if your archivelogs go to the FRA via USE_DB_RECOVERY_FILE_DEST. -
Hi
Setting backup-storage with the following configuration is not generating backup files under the said location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up into the file system. Can you let me know what I am missing here?
Thanks
sunder
<distributed-scheme>
<scheme-name>distributed-Customer</scheme-name>
<service-name>DistributedCache</service-name>
<!-- <thread-count>5</thread-count> -->
<backup-count>1</backup-count>
<backup-storage>
<type>file-mapped</type>
<directory>/data/xx/backupstorage</directory>
<initial-size>1KB</initial-size>
<maximum-size>1KB</maximum-size>
</backup-storage>
<backing-map-scheme>
<read-write-backing-map-scheme>
<scheme-name>DBCacheLoaderScheme</scheme-name>
<internal-cache-scheme>
<local-scheme>
<scheme-ref>blaze-binary-backing-map</scheme-ref>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>com.xxloader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.CustomerProduct
</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>CUSTOMER</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
<read-only>true</read-only>
</read-write-backing-map-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<local-scheme>
<scheme-name>blaze-binary-backing-map</scheme-name>
<high-units>{back-size-limit 1}</high-units>
<unit-calculator>BINARY</unit-calculator>
<expiry-delay>{back-expiry 0}</expiry-delay>
<cachestore-scheme></cachestore-scheme>
</local-scheme>
Hi
We did try out with the following configuration
<near-scheme>
<scheme-name>blaze-near-HeaderData</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>{front-size-limit 0}</high-units>
<unit-calculator>FIXED</unit-calculator>
<expiry-delay>{back-expiry 1h}</expiry-delay>
<flush-delay>1m</flush-delay>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>blaze-distributed-HeaderData</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>
<distributed-scheme>
<scheme-name>blaze-distributed-HeaderData</scheme-name>
<service-name>DistributedCache</service-name>
<partition-count>200</partition-count>
<backing-map-scheme>
<partitioned>true</partitioned>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<external-scheme>
<high-units>20</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1073741824</unit-factor>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</nio-memory-manager>
</external-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>
com.xx.loader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.bean.HeaderData</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>SDR.TABLE_NAME_XYZ</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<backup-count>1</backup-count>
<backup-storage>
<type>off-heap</type>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</backup-storage>
<autostart>true</autostart>
</distributed-scheme>
With this configuration, the residual main-memory consumption is about 15 GB.
When we changed this configuration to
<near-scheme>
<scheme-name>blaze-near-HeaderData</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>{front-size-limit 0}</high-units>
<unit-calculator>FIXED</unit-calculator>
<expiry-delay>{back-expiry 1h}</expiry-delay>
<flush-delay>1m</flush-delay>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>blaze-distributed-HeaderData</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>
<distributed-scheme>
<scheme-name>blaze-distributed-HeaderData</scheme-name>
<service-name>DistributedCache</service-name>
<partition-count>200</partition-count>
<backing-map-scheme>
<partitioned>true</partitioned>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<external-scheme>
<high-units>20</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1073741824</unit-factor>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</nio-memory-manager>
</external-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>
com.xx.loader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.bean.HeaderData</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>SDR.TABLE_NAME_XYZ</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<backup-count>1</backup-count>
<backup-storage>
<type>file-mapped</type>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
<directory>/data/xxcache/blazeload/backupstorage</directory>
<file-name>{cache-name}.store</file-name>
</backup-storage>
<autostart>true</autostart>
</distributed-scheme>
Note backup storage is file-mapped
<backup-storage>
<type>file-mapped</type>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
<directory>/data/xxcache/blazeload/backupstorage</directory>
<file-name>{cache-name}.store</file-name>
</backup-storage>
We still see that the process's residual main memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
We wanted to check where the backup storage maintains its information - we would like to offload this to a flat file.
Appreciate any pointers in this regard.
Thanks
sunder -
Backup root file system on DVD
I have a root file system about 4Gb large.
Can I back it up on a DVD?
I read the ufsdump works with tapes.
Can it work with a DVD?
If yes, what is the command format?
Developer needs access to OS file systems
Hello
Our CRM developer is requesting access to the following CRM Java stack file systems at OS level. We are on AIX, and our corporate policy does not allow developers access to the UNIX servers' file systems.
is there a workaround?
/usr/sap/SID/JC01/j2ee/
All access (read, write, delete) for /usr/sap/SID/JC01/j2ee/cluster/server0/apps/sap.com/crm~b2b/
Thanks
Raghu
Edited by: Raghu Gajjala on Sep 15, 2010 12:30 AM
Hi,
Could you explain the reason for this?
Regards,
Caíque Escaler -
Aperture backup strategy - file system compatibility
Hi all,
I am beginning to run out of space for a managed library, so I'm starting to look for alternatives. Currently I have a managed library, with a vault on another disk on the same computer. This vault is then mirrored to another computer (Linux) via rsync. This seems to work fine.
The solution I'm thinking about is to skip the vault and mirror the library + referenced files directly - while Aperture is not running, of course.
The question that arises is whether Aperture does/needs anything "nonstandard" in its files - resource forks, for example. The file names inside the library are a bit weird but within spec for a Unix filesystem, so that isn't a problem; I would hate to have to recover a library and then discover that all the files are "disconnected" because of some minor change that occurred during backup/restore...
Anyone with experience of this (mirroring libraries to a non-Apple filesystem and restoring it)?
PowerMac G5 Dual 2.0 GHz, 2GB RAM, Mac OS X (10.4.8), ATI Radeon 9650
OK,
I actually tried this myself instead. I created a new Library, imported a project into it to get some photos with metadata and versions, etc. The masters were relocated to outside the library. I then copied the library to and from a Linux server with rsync, and moved the masters to the file server as well.
After opening Aperture again and reconnecting the masters, all seemed well. -
I'm using version 10.7.4 and now I want to download OS X Mountain Lion - do I need to do any backup of my files?
marcusts! It is always a good idea to have a backup of your files, whatever the circumstance.
Maybe make this your new project; buy an external hard drive to back up your files, and think about mountain lion after that. -
Raw Device Backup to file system(OPS 8i)
Hi
Our currently setup is
Oracle database 8.1.6 (Oracle Parallel Server) Two Node
Noarchive Mode
Solaris 2.6
all database file ,redo logfiles,controlfiles under raw device.
database size 16 G.B
oracle block size 8192
Currently we are using only export backups of Oracle.
But now I want to take a cold backup of the Oracle database to disk.
Cold backup: raw --> disk
How can we take a cold backup with the dd command and the skip parameter?
Does anybody have practical experience with the dd command and the skip parameter?
Thanks and regards
Kuljeet Pal Singh
You can use ufsdump instead of dd.
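If you do go the dd route, the skip parameter just discards the first N input blocks of size bs. A local simulation of the mechanics - the raw-device path in the comment is illustrative, and whether your platform needs any offset at all must be verified before relying on it:

```shell
# skip=N makes dd ignore the first N input blocks of size bs.
# Simulate a 3-block "raw device" (3 x 8192 = 24576 bytes):
dd if=/dev/zero of=/tmp/rawsim bs=8192 count=3 2>/dev/null
# Copy everything after the first block (2 x 8192 = 16384 bytes land in /tmp/copy):
dd if=/tmp/rawsim of=/tmp/copy bs=8192 skip=1 2>/dev/null
ls -l /tmp/copy
# A raw-device copy would follow the same pattern, e.g. (hypothetical device):
#   dd if=/dev/rdsk/c0t0d0s4 of=/backup/users01.dbf bs=8192
```

For a cold backup the database must of course be shut down cleanly before copying.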