Oracle backup of a database larger than 1-2 TB

Hi all,
Can anybody suggest how to take a backup of a database larger than 1-2 TB? I know many people will recommend RMAN, but it slows down I/O and the backup takes around 30-40 hours to complete. Is there another solution, and if so, how do I take the backup and recover from it? For example, does anybody have any experience with SAN mirroring?

user13389425 wrote:
Hi all,
Can anybody suggest how to take a backup of a database larger than 1-2 TB? I know many people will recommend RMAN, but it slows down I/O and the backup takes around 30-40 hours to complete. Is there another solution, and if so, how do I take the backup and recover from it? For example, does anybody have any experience with SAN mirroring?
Hello,
You can use the following RMAN options:
- Compression
- Parallelism (channel configuration)
- Rate option
- Duration option
- Block change tracking
- Incremental backups
This way you save time and space and reduce the load on your system.
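To make this concrete, here is a rough sketch of how those options can be combined in an RMAN command file. The channel count, rate limit, backup window and tags are only illustrative and would need tuning for your environment:

# Block change tracking is enabled once, separately, in SQL*Plus with
#   ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '<path>';
# so that level 1 incrementals read only blocks changed since the last backup.

# Compressed backupsets over several parallel disk channels
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

# Optionally cap the read rate per channel so the backup does not saturate storage
CONFIGURE CHANNEL DEVICE TYPE DISK RATE 150M;

# Weekly level 0 backup, spread over an 8-hour window to minimize load
BACKUP INCREMENTAL LEVEL 0 DURATION 08:00 MINIMIZE LOAD DATABASE TAG 'weekly_l0';

# Daily level 1 incrementals between the level 0 runs
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'daily_l1';

With block change tracking in place, the daily level 1 backups normally read only a small fraction of the database, which is what brings a 30-40 hour window down to something manageable.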
Hope it helps!
Thanks,
Wissem
www.oracle-class.com
www.oracle-tns.com

Similar Messages

  • Oracle RAC with more than two nodes?

    Hello,
    Does anybody know of a reference client project that uses Oracle RAC with more than two nodes in a Linux environment?
    Many thanks!
    Norman Bock

    Hello Norman,
    XioTech is a SAN company that has a project called "THE TENS". They configured and ran a 10-node RH Linux Oracle 9i RAC. I understand they want to see if 32 nodes are possible. I am sure if you ask them, they would be happy to give you the details.
    http://www.xiotech.com/
    Cheers

  • How to retrieve more than one record from a database

    I am doing a SELECT FROM WHERE SQL statement on a database which returns more than one record. I want to retrieve certain column values from the selected records. How do I do that? If only one record is returned, I know I can do a data operation with the operation set to 'Get-Retrieve Values from Record' and 'Record to operate on' set to 'Current-Use current record'. I need to find out how to configure the data operation window when more than one record is returned by the preceding SQL statement. Depending on the number of records returned, I can dynamically create array variables to store the 'to be retrieved' column values; I just don't know how to retrieve them.
    Any help will be greatly appreciated.
    Thanks
    Anuj

    I apologize for not being clear in explaining my problem; perhaps I should have posted an example. Anyway, I was able to figure out the solution. I was doing an 'Open SQL' statement which was selecting multiple records (rows) from a table in the database. I was storing the number of records returned in a local variable. Then, I wanted to retrieve certain columns of all the selected rows (records). In the "Data operation", I was choosing the 'Record to operate on' as 'Current-Use Current Record'. Changing this field to 'Next-Fetch next record' fixed the problem. I then retrieved the values of those columns into a dynamically created array variable whose dimensions came from the local variable which stored the number of records returned by the SELECT SQL statement.
    Thanks
    Anuj

  • Oracle RAC support for more than one database

    My company's requirement is to have multiple databases instead of multiple schemas in the same database. The only reason to have multiple databases is to support different applications with different database versions. For example, one application sitting on Oracle RAC will have version 10.02.0.0 and other applications will have 10.02.0.2.

    You can have multiple instances on a node or multiple databases on one RAC cluster. However a more important factor that should be considered is to ensure that the servers are capable of handling the additional workload.
    Also consider the software compatibility between the various versions of the Oracle RDBMS, ASM and the Oracle Clusterware.

  • Taking more time (more than 12 hours) while syncing

    I synced some music (about 340 MB) and pics (about 200 MB) to my new iPad for the first time, and it was OK.
    But when I connected my iPad to my laptop (running Windows 8) the second time, it kept syncing and taking a backup for more than 12 hours and still had not completed. Is there a problem with my iPad, my laptop, or iTunes?
    Kindly help me; I bought my iPad just 5 days ago.

    Hello,
    Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing delays that you want to avoid, there are two quick ways I can see:
    1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. It might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query performance options when you do need the data, though.
    2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); instead only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) check that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing TableA if so.
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris

  • iPhone backup (more than one)

    OK, I think my backup is corrupt. Now, I can delete it and make a new one, and then iTunes will add to that one, right? Well, that's what I think is corrupting my backup (when new info is added to the current backup). How do I tell iTunes to make a new backup without me deleting the other one? Thanks ....

    That's crappy, but you're right, it very well could be my phone. Setting up as new is kind of what I'm trying to avoid. That's what I had to do last time when all my info was wrong after the first restore; that was a restore from backup and it was corrupt, so I restored as new. I've got lots of data, logins, and app credentials; it'll take forever to put all that back on the phone. My phone is fine now, but that won't last forever. If I remember correctly, in previous versions of iTunes you could have more than one backup.
    Seeing how I already restored as new and not from backup, I don't even know if the current backup is corrupt, but I would like the choice of two backups.
    Thanks for your input, wjosten

  • Oracle Data Miner - one project (workflow), more than one user?

    Hi,
    Is it possible to have more than one DM user working on one DM project/workflow?
    For example, I have DMUSER1 and DMUSER2.
    DMUSER1 created a project and a workflow. I need DMUSER2 to be able to modify that same workflow, if possible simultaneously.
    Something like in OWB, where we can have more than one user creating mappings.
    How is that possible?
    Regards
    Kreso

    Hi Attila,
    You can find the transform package at the following link:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_dmtran.htm#i1013223
    The ODM mining algorithms use techniques that are not exposed.
    However, there are a lot of features in the db that are available to you to build your own.
    As for an optimization package, I know there are some internal implementations but I am not aware of one in the db that is exposed.
    Thanks, Mark

  • HOW DO I USE MORE THAN ONE ITUNES WITH MY IPHONE WITHOUT ERASING MY MUSIC?

    I have two different laptops with iTunes on them. I use both of them, but my iPhone only works with one iTunes library. I plug it into the computer and it says that my iPhone cannot use more than one iTunes library. Is there a way I can use both or not?
    YES, I AM CLICKING MANUALLY ADD SONGS TO MY iPhone.
    I just bought my iPhone about a week or so ago.
    Please help ASAP.

    I clone my library to a portable drive using SyncToy 2.1, a free tool from MS. I run this both at home and at work so that I essentially have three identical copies of my library for backup redundancy, any of which I can use to update my iPods and my iPhone. I use iTunes Folder Watch occasionally in case I've managed to download, for example, a podcast at one location but then overwrite the library with a newer copy updated at the other. It can be done, but you need to take considerable care...
    tt2

  • Using SQL Developer, how to show more than 1 table at a time?

    Hi, everybody
    I am able to open more than 1 procedure/function for editing.
    However, I am not able to open more than 1 table for viewing the data and columns.
    1) How do I open more than 1 table tab in SQL Developer, or am I not allowed to do so?
    2) Is it possible to see which column is the foreign key of a table in SQL Developer?
    Thanks once again :)

    Use the Freeze View button
    and drop the table below from the tabs.
    http://www.oracle.com/technology/products/database/sql_developer/files/viewlets.html
    Watch this viewlet:
    Useful Features of SQL Developer (July '07)
    It can show you how to see more than one table at a time.

  • Time Machine is trying to back up more than the drive has on it

    I'm doing an initial backup on a fresh Time Machine drive ... my computer has 1.66 TB of files and there are no external drives etc. selected. Time Machine wants to back up 1.98 TB! It wants MORE than the drive has on it!
    How is this possible ... Crazy stuff!

    Time Machine will require about 200 TB of free space.
    Unfortunately, a new 2 TB drive will not have exactly 2 TB of free space, so that is where the problem is arising.
    Wow.. that is a lot of space.. LOL!!
    It is confusing now with the change from the v5 utility, which reported capacity in binary bytes, to the v6 utility, which reports decimal bytes.
    So the same drive looks different in each, but the computer is always going to use binary bytes.
    The older v5 utility shows a capacity of 2.7 TB. That is the standard binary value for a 3 TB drive, which is always sold with numbers as inflated as possible, i.e. decimal bytes.
    So the available space is shown under the partition correctly.
    In the v6 utility, we discover the disk has more free space than the v5 utility says when it is formatted and empty.
    The confusion is caused by using decimal bytes.

  • Oracle backup to FTP

    Hello all
    I have problems with Oracle backups through DB13 to FTP.
    Job log
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000028, user ID USER)
    Execute logical command BRBACKUP On host sap1-ast
    Parameters:-u / -jid ALLOG20061117094801 -c force -t online -m all -p initTHD.sap -a -c force -p initTHD.sap -c
    ds
    BR0051I BRBACKUP 7.00 (16)
    BR0282E Directory '/sapbackup' not found
    BR0182E Checking parameter/option 'compress_dir' failed
    BR0056I End of database backup: bdtyppxk.log 2006-11-17 09.48.04
    BR0280I BRBACKUP time stamp: 2006-11-17 09.48.04
    BR0054I BRBACKUP terminated with errors
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished
    This is my initTHD.sap file:
    @(#) $Id: //bas/700_REL/src/ccm/rsbr/initNT.sap#5 $ SAP
    SAP backup sample profile. #
    The parameter syntax is the same as for init.ora parameters. #
    Enclose parameter values which consist of more than one symbol in #
    double quotes. #
    After any symbol, parameter definition can be continued on the next #
    line. #
    A parameter value list should be enclosed in parentheses, the list #
    items should be delimited by commas. #
    There can be any number of white spaces (blanks, tabs and new lines) #
    between symbols in parameter definition. #
    backup mode [all | all_data | full | incr | sap_dir | ora_dir
    | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
    | <generic_path> | (<object_list>)]
    default: all
    backup_mode = all
    restore mode [all | all_data | full | incr | incr_only | incr_full
    | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
    | <generic_path> | (<object_list>) | partial | non_db
    redirection with '=' is not supported here - use option '-m' instead
    default: all
    restore_mode = all
    backup type [offline | offline_force | offline_standby | offline_split
    | offline_mirror | offline_stop | online | online_cons | online_split
    | online_mirror
    default: offline
    backup_type = online_cons
    backup device type
    [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk
    | disk_copy | disk_standby | stage | stage_copy | stage_standby
    | util_file | util_file_online | rman_util | rman_disk | rman_stage
    | rman_prep]
    default: tape
    backup_dev_type = stage
    backup root directory [<path_name> | (<path_name_list>)]
    default: %SAPDATA_HOME%\sapbackup
    backup_root_dir = /sapbackup
    stage root directory [<path_name> | (<path_name_list>)]
    default: value of the backup_root_dir parameter
    stage_root_dir = /sapbackup
    compression flag [no | yes | hardware | only]
    default: no
    compress = no
    compress command
    first $-character is replaced by the source file name
    second $-character is replaced by the target file name
    <target_file_name> = <source_file_name>.Z
    for compress command the -c option must be set
    recommended setting for brbackup -k only run:
    "%SAPEXE%\mkszip -l 0 -c $ > $"
    no default
    compress_cmd = "C:\usr\sap\THD\SYS\exe\uc\NTI386\mkszip -c $ > $"
    uncompress command
    first $-character is replaced by the source file name
    second $-character is replaced by the target file name
    <source_file_name> = <target_file_name>.Z
    for uncompress command the -c option must be set
    no default
    uncompress_cmd = "C:\usr\sap\THD\SYS\exe\uc\NTI386\uncompress -c $ > $"
    directory for compression [<path_name> | (<path_name_list>)]
    default: value of the backup_root_dir parameter
    compress_dir = /sapbackup
    brarchive function [save | second_copy | double_save | save_delete
    | second_copy_delete | double_save_delete | copy_save
    | copy_delete_save | delete_saved | delete_copied]
    default: save
    archive_function = save
    directory for archive log copies to disk
    default: first value of the backup_root_dir parameter
    archive_copy_dir = /sapbackup
    directory for archive log copies to stage
    default: first value of the stage_root_dir parameter
    archive_stage_dir = /sapbackup
    delete archive logs from duplex destination [only | no | yes | check]
    default: only
    archive_dupl_del = only
    new sapdata home directory for disk_copy | disk_standby
    no default
    new_db_home = X:\oracle\C11
    stage sapdata home directory for stage_copy | stage_standby
    default: value of the new_db_home parameter
    #stage_db_home = /sapbackup
    original sapdata home directory for split mirror disk backup
    no default
    #orig_db_home = C:\oracle\THD
    remote host name
    no default
    remote_host = srv1
    remote user name
    default: current operating system user
    remote_user = "thdadm password"
    tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu
    rman_dd | rman_dd_gnu]
    default: cpio
    tape_copy_cmd = cpio
    disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu]
    default: copy
    disk_copy_cmd = copy
    stage copy command [rcp | scp | ftp]
    default: rcp
    stage_copy_cmd = ftp
    pipe copy command [rsh | ssh]
    default: rsh
    pipe_copy_cmd = rsh
    flags for cpio output command
    default: -ovB
    cpio_flags = -ovB
    flags for cpio input command
    default: -iuvB
    cpio_in_flags = -iuvB
    flags for cpio command for copy of directories to disk
    default: -pdcu
    use flags -pdu for gnu tools
    cpio_disk_flags = -pdcu
    flags for dd output command
    default: "obs=16k"
    caution: option "obs=" not supported for Windows
    recommended setting:
    Unix: "obs=nk bs=nk", example: "obs=64k bs=64k"
    Windows: "bs=nk", example: "bs=64k"
    dd_flags = "bs=64k"
    flags for dd input command
    default: "ibs=16k"
    caution: option "ibs=" not supported for Windows
    recommended setting:
    Unix: "ibs=nk bs=nk", example: "ibs=64k bs=64k"
    Windows: "bs=nk", example: "bs=64k"
    dd_in_flags = "bs=64k"
    number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]
    default: 1
    saveset_members = 1
    additional parameters for RMAN
    rman_channels and rman_filesperset are only used when rman_util,
    rman_disk or rman_stage
    rman_channels defines the number of parallel sbt channel allocations
    rman_filesperset = 0 means:
    one file per save set - for non-incremental backups
    all files in one save set - for incremental backups
    the others have the same meaning as for native RMAN
    rman_channels = 1
    rman_filesperset = 0
    rman_maxpiecesize = 0 # in KB - former name rman_kbytes
    rman_rate = 0 # in KB - former name rman_readrate
    rman_maxopenfiles = 0
    rman_maxsetsize = 0 # in KB - former name rman_setsize
    additional parameters for RMAN version 8.1
    the parameters have the same meaning as for native RMAN
    rman_diskratio = 0 # deprecated in Oracle 10g
    rman_pool = 0
    rman_copies = 0 | 1 | 2 | 3 | 4 # former name rman_duplex
    rman_proxy = no | yes | only
    special parameters for an external backup library, example:
    rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"
    rman_send = "'<command>'"
    rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",
    "channel sbt_2 '<command2>' parms='<parameters2>'")
    remote copy-out command (backup_dev_type = pipe)
    $-character is replaced by current device address
    no default
    copy_out_cmd = "dd ibs=8k obs=64k of=$"
    remote copy-in command (backup_dev_type = pipe)
    $-character is replaced by current device address
    no default
    copy_in_cmd = "dd ibs=64k obs=8k if=$"
    rewind command
    $-character is replaced by current device address
    no default
    operating system dependent, examples:
    HP-UX: "mt -f $ rew"
    TRU64: "mt -f $ rewind"
    AIX: "tctl -f $ rewind"
    Solaris: "mt -f $ rewind"
    Windows: "mt -f $ rewind"
    Linux: "mt -f $ rewind"
    rewind = "mt -f $ rewind"
    rewind and set offline command
    $-character is replaced by current device address
    default: value of the rewind parameter
    operating system dependent, examples:
    HP-UX: "mt -f $ offl"
    TRU64: "mt -f $ offline"
    AIX: "tctl -f $ offline"
    Solaris: "mt -f $ offline"
    Windows: "mt -f $ offline"
    Linux: "mt -f $ offline"
    rewind_offline = "mt -f $ offline"
    tape positioning command
    first $-character is replaced by current device address
    second $-character is replaced by number of files to be skipped
    no default
    operating system dependent, examples:
    HP-UX: "mt -f $ fsf $"
    TRU64: "mt -f $ fsf $"
    AIX: "tctl -f $ fsf $"
    Solaris: "mt -f $ fsf $"
    Windows: "mt -f $ fsf $"
    Linux: "mt -f $ fsf $"
    tape_pos_cmd = "mt -f $ fsf $"
    mount backup volume command in auto loader / juke box
    used if backup_dev_type = tape_box | pipe_box
    no default
    mount_cmd = "<mount_cmd> $ $ $ [$]"
    dismount backup volume command in auto loader / juke box
    used if backup_dev_type = tape_box | pipe_box
    no default
    dismount_cmd = "<dismount_cmd> $ $ [$]"
    split mirror disks command
    used if backup_type = offline_split | online_split | offline_mirror
    | online_mirror
    no default
    split_cmd = "<split_cmd> [$]"
    resynchronize mirror disks command
    used if backup_type = offline_split | online_split | offline_mirror
    | online_mirror
    no default
    resync_cmd = "<resync_cmd> [$]"
    additional options for SPLITINT interface program
    no default
    split_options = "<split_options>"
    resynchronize after backup flag [no | yes]
    default: no
    split_resync = no
    volume size in KB = K, MB = M or GB = G (backup device dependent)
    default: 1200M
    recommended values for tape devices without hardware compression:
    60 m 4 mm DAT DDS-1 tape: 1200M
    90 m 4 mm DAT DDS-1 tape: 1800M
    120 m 4 mm DAT DDS-2 tape: 3800M
    125 m 4 mm DAT DDS-3 tape: 11000M
    112 m 8 mm Video tape: 2000M
    112 m 8 mm high density: 4500M
    DLT 2000 10/20 GB: 10000M
    DLT 2000XT 15/30 GB: 15000M
    DLT 4000 20/40 GB: 20000M
    DLT 7000 35/70 GB: 35000M
    recommended values for tape devices with hardware compression:
    60 m 4 mm DAT DDS-1 tape: 1000M
    90 m 4 mm DAT DDS-1 tape: 1600M
    120 m 4 mm DAT DDS-2 tape: 3600M
    125 m 4 mm DAT DDS-3 tape: 10000M
    112 m 8 mm Video tape: 1800M
    112 m 8 mm high density: 4300M
    DLT 2000 10/20 GB: 9000M
    DLT 2000XT 15/30 GB: 14000M
    DLT 4000 20/40 GB: 18000M
    DLT 7000 35/70 GB: 30000M
    tape_size = 100G
    volume size in KB = K, MB = M or GB = G used by brarchive
    default: value of the tape_size parameter
    tape_size_arch = 100G
    level of parallel execution
    default: 0 - set to number of backup devices
    exec_parallel = 0
    address of backup device without rewind
    [<dev_address> | (<dev_address_list>)]
    no default
    operating system dependent, examples:
    HP-UX: /dev/rmt/0mn
    TRU64: /dev/nrmt0h
    AIX: /dev/rmt0.1
    Solaris: /dev/rmt/0mn
    Windows: /dev/nmt0 | /dev/nst0
    Linux: /dev/nst0
    tape_address = /dev/nmt0
    address of backup device without rewind used by brarchive
    default: value of the tape_address parameter
    operating system dependent
    tape_address_arch = /dev/nmt0
    address of backup device with rewind
    [<dev_address> | (<dev_address_list>)]
    no default
    operating system dependent, examples:
    HP-UX: /dev/rmt/0m
    TRU64: /dev/rmt0h
    AIX: /dev/rmt0
    Solaris: /dev/rmt/0m
    Windows: /dev/mt0 | /dev/st0
    Linux: /dev/st0
    tape_address_rew = /dev/mt0
    address of backup device with rewind used by brarchive
    default: value of the tape_address_rew parameter
    operating system dependent
    tape_address_rew_arch = /dev/mt0
    address of backup device with control for mount/dismount command
    [<dev_address> | (<dev_address_list>)]
    default: value of the tape_address_rew parameter
    operating system dependent
    tape_address_ctl = /dev/...
    address of backup device with control for mount/dismount command
    used by brarchive
    default: value of the tape_address_rew_arch parameter
    operating system dependent
    tape_address_ctl_arch = /dev/...
    volumes for brarchive
    [<volume_name> | (<volume_name_list>) | SCRATCH]
    no default
    volume_archive = (THDA01, THDA02, THDA03, THDA04, THDA05,
    THDA06, THDA07)
    volumes for brbackup
    [<volume_name> | (<volume_name_list>) | SCRATCH]
    no default
    volume_backup = (THDB01, THDB02, THDB03, THDB04, THDB05,
    THDB06, THDB07)
    expiration period for backup volumes in days
    default: 30
    expir_period = 30
    recommended usages of backup volumes
    default: 100
    tape_use_count = 100
    backup utility parameter file
    default: no parameter file
    util_par_file = initTHD.utl
    mount/dismount command parameter file
    default: no parameter file
    mount_par_file = initTHD.mnt
    Oracle instance string to the primary database
    [primary_db = <conn_name> | LOCAL]
    no default
    primary_db = <conn_name>
    description of parallel instances for Oracle RAC
    parallel_instances = <instance_desc> | (<instance_desc_list>)
    <instance_desc_list> -> <instance_desc>[,<instance_desc>...]
    <instance_desc> -> <Oracle_sid>:<Oracle_home>@<conn_name>
    <Oracle_sid> -> Oracle system id for parallel instance
    <Oracle_home> -> Oracle home for parallel instance
    <conn_name> -> Oracle instance string to parallel instance
    Please include the local instance in the parameter definition!
    default: no parallel instances
    example for initRAC001.sap:
    parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,
    RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)
    database owner of objects to be checked
    <owner> | (<owner_list>)
    default: all SAP owners
    check_owner = sapr3
    database objects to be excluded from checks
    all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: no exclusion, example:
    check_exclude = (SDBAH, SAPR3.SDBAD)
    database owner of SDBAH, SDBAD and XDB tables for cleanup
    <owner> | (<owner_list>)
    default: all SAP owners
    cleanup_owner = sapr3
    retention period in days for brarchive log files
    default: 30
    cleanup_brarchive_log = 30
    retention period in days for brbackup log files
    default: 30
    cleanup_brbackup_log = 30
    retention period in days for brconnect log files
    default: 30
    cleanup_brconnect_log = 30
    retention period in days for brrestore log files
    default: 30
    cleanup_brrestore_log = 30
    retention period in days for brrecover log files
    default: 30
    cleanup_brrecover_log = 30
    retention period in days for brspace log files
    default: 30
    cleanup_brspace_log = 30
    retention period in days for archive log files saved on disk
    default: 30
    cleanup_disk_archive = 30
    retention period in days for database files backed up on disk
    default: 30
    cleanup_disk_backup = 30
    retention period in days for brspace export dumps and scripts
    default: 30
    cleanup_exp_dump = 30
    retention period in days for Oracle trace and audit files
    default: 30
    cleanup_ora_trace = 30
    retention period in days for records in SDBAH and SDBAD tables
    default: 100
    cleanup_db_log = 100
    retention period in days for records in XDB tables
    default: 100
    cleanup_xdb_log = 100
    retention period in days for database check messages
    default: 100
    cleanup_check_msg = 100
    database owner of objects to adapt next extents
    <owner> | (<owner_list>)
    default: all SAP owners
    next_owner = sapr3
    database objects to adapt next extents
    all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: all objects of selected owners, example:
    next_table = (SDBAH, SAPR3.SDBAD)
    database objects to be excluded from adapting next extents
    all_part | [<owner>.]<table> | [<owner>.]<index> | [<owner>.]<prefix>*
    | <tablespace> | (<object_list>)
    default: no exclusion, example:
    next_exclude = (SDBAH, SAPR3.SDBAD)
    database objects to get special next extent size
    all_sel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]
    | [<owner>.]<index>:<size>[/<limit>]
    | [<owner>.]<prefix>*:<size>[/<limit>] | (<object_size_list>)
    default: according to table category, example:
    next_special = (SDBAH:100K, SAPR3.SDBAD:1M/200)
    maximum next extent size
    default: 2 GB - 5 * <database_block_size>
    next_max_size = 1G
    maximum number of next extents
    default: 0 - unlimited
    next_limit_count = 300
    database owner of objects to update statistics
    <owner> | (<owner_list>)
    default: all SAP owners
    stats_owner = sapr3
    database objects to update statistics
    all | all_ind | all_part | missing | info_cubes | dbstatc_tab
    | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>) | harmful
    | locked | system_stats | oradict_stats
    default: all objects of selected owners, example:
    stats_table = (SDBAH, SAPR3.SDBAD)
    database objects to be excluded from updating statistics
    all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: no exclusion, example:
    stats_exclude = (SDBAH, SAPR3.SDBAD)
    method for updating statistics for tables not in DBSTATC
    E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H
    | =I | =X | +H | +I
    default: according to internal rules
    stats_method = E
    sample size for updating statistics for tables not in DBSTATC
    P<percentage_of_rows> | R<thousands_of_rows>
    default: according to internal rules
    stats_sample_size = P10
    number of buckets for updating statistics with histograms
    default: 75
    stats_bucket_count = 75
    threshold for collecting statistics after checking
    default: 50%
    stats_change_threshold = 50
    number of parallel threads for updating statistics
    default: 1
    stats_parallel_degree = 1
    processing time limit in minutes for updating statistics
    default: 0 - no limit
    stats_limit_time = 0
    parameters for calling DBMS_STATS supplied package
    all:R|B:<degree> | all_part:R|B:<degree> | info_cubes:R|B:<degree>
    | [<owner>.]<table>:R|B:<degree> | [<owner>.]<prefix>*:R|B:<degree>
    | (<object_list>) | NO
    default: NULL - use ANALYZE statement
    stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R|B:<degree>,...)
    definition of info cube tables
    default | rsnspace_tab | [<owner>.]<table> | [<owner>.]<prefix>*
    | (<object_list>) | null
    default: from RSNSPACE control table
    stats_info_cubes = (/BIC/D, /BI0/D, ...)
    recovery type [complete | dbpit | tspit | reset | restore | apply
    | disaster]
    default: complete
    recov_type = complete
    directory for brrecover file copies
    default: $SAPDATA_HOME/sapbackup
    recov_copy_dir = C:\oracle\THD\sapbackup
    time period for searching for backups
    0 - all available backups, >0 - backups from n last days
    default: 30
    recov_interval = 30
    degree of parallelism for applying archive log files
    0 - use Oracle default parallelism, 1 - serial, >1 - parallel
    default: Oracle default
    recov_degree = 0
    number of lines for scrolling in list menus
    0 - no scrolling, >0 - scroll n lines
    default: 20
    scroll_lines = 20
    time period for displaying profiles and logs
    0 - all available logs, >0 - logs from n last days
    default: 30
    show_period = 30
    directory for brspace file copies
    default: $SAPDATA_HOME/sapreorg
    space_copy_dir = C:\oracle\THD\sapreorg
    directory for table export dump files
    default: $SAPDATA_HOME/sapreorg
    exp_dump_dir = C:\oracle\THD\sapreorg
    database tables for reorganization
    [<owner>.]<table> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<table_list>)
    no default
    reorg_table = (SDBAH, SAPR3.SDBAD)
    database indexes for rebuild
    [<owner>.]<index> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<index_list>)
    no default
    rebuild_index = (SDBAH0, SAPR3.SDBAD0)
    database tables for export
    [<owner>.]<table> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<table_list>)
    no default
    exp_table = (SDBAH, SAPR3.SDBAD)
    database tables for import
    [<owner>.]<table> | (<table_list>)
    no default
    imp_table = (SDBAH, SAPR3.SDBAD)
    I can log in to FTP from the server:
    C:\Documents and Settings\thdadm>ftp srv1
    Connected to srv1.
    220 Microsoft FTP Service
    User (srv1:(none)): thdadm
    331 Password required for thdadm.
    Password:
    230 User thdadm logged in.
    ftp> ls
    200 PORT command successful.
    150 Opening ASCII mode data connection for file list.
    sapbackup
    226 Transfer complete.
    ftp: 11 bytes received in 0,00Seconds 11000,00Kbytes/sec.
    ftp> mkdir sap
    257 "sap" directory created.
    Please help.
    Best regards,
    Olzhas Suleimenov


  • Set more than 3 values in a link

    Hello,
    I am using Oracle 10g APEX 3.2.
    I created an interactive report which shows 5 columns on page 1.
    Now I have made the 2nd column a link.
    On clicking that link it goes to page 3.
    When I click the link I want to set a few items on page 3 with page 1 values.
    But in the column link (report attributes) of the interactive report I can see that only 3 items can be set.
    What do I do if I need to set more than 3 values?
    For more info on what I am doing:
    Page 3 has a form whose values I need to fill from page 1. A user needs to click the link, i.e. col2 on page 1, and then it goes to page 3 where some values in the form should already be filled in. I want to send 6 values in total.
    Thanks
    Swapna

    Hi,
    Option 1>>>
    When you click the link on the interactive report, send only the primary key to page 3, which contains the form.
    On page 3, add an on-load, before-header process which populates the other items based on the primary key value (see the sketch below, after Option 2).
    Option 2>>>
    You can enter comma-separated item names/values in the link tab of the interactive report,
    e.g. in the Name field you can enter P1_ITEM1, P1_ITEM2, P1_ITEM3 and in the Value field you can enter the corresponding values, e.g. #field1#, #field2#, #field3#.
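    As a rough illustration of Option 1, the before-header PL/SQL process on page 3 could look like the sketch below. The table (emp), its columns, and the item names (P3_EMPNO and so on) are purely hypothetical placeholders; substitute your own:
    -- Hypothetical APEX before-header process on page 3 (Option 1 above).
    -- :P3_EMPNO is assumed to hold the primary key passed in the link from page 1;
    -- the remaining page 3 items are filled from a single SELECT.
    BEGIN
      SELECT ename, job, sal, comm, deptno, hiredate
        INTO :P3_ENAME, :P3_JOB, :P3_SAL, :P3_COMM, :P3_DEPTNO, :P3_HIREDATE
        FROM emp
       WHERE empno = :P3_EMPNO;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;  -- leave the form items empty if no matching row exists
    END;
    With this approach the link on page 1 only needs to pass one value (the primary key), so the 3-item limit in the column link no longer matters.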
    Regards,
    Shijesh
    Please reward the answer if it was helpful/correct

  • Hello, why can't we store more than 25,000 songs that have been uploaded to iCloud from my CDs?

    Hello, why can't we store more than 25,000 songs that have been uploaded to iCloud from my CDs? I'm a DJ, and the purpose of getting iCloud was not only backup but to have access to all my music, and this 25,000-song limit is not acceptable for me. I also upgraded my iCloud Drive to 200 GB in the hope that this would help. Did it? I don't know as of yet; I just did it. Any thoughts out there?

    Apple has not said why. Inform Apple of your displeasure here:
    http://www.apple.com/feedback/itunesapp.html

  • Is Oracle slower on Windows than on Linux?

    Hi,
    I'm supposed to choose the platform for Oracle .... Linux or Windows 2008 Server ...
    Once we installed Oracle on Windows 2003 Server and it was really slower, like 200-300% ..... we did a simple default install.
    Does anyone know if it is a fact that Oracle will run slower on the Windows platform than on any other? Any experience with that? If we are talking about 5-10% then this is OK and I'll stick with Windows ...
    Because I know how to deal with Windows much better than with Linux admin, installation and other stuff ...
    Thank you...
    Kris

    burleson wrote:
    Hi Charles,
    Installing a virus scanner on the server, especially if it is permitted to scan program and data files used by Oracle.
    Ha, that's a good one! That's the FIRST thing I check!
    Also, I've seen high demand screensavers (fractals) clobber Oracle performance . . .
    Oracle on Windows uses a thread model
    That's actually one of the few "positives" about running Oracle on Windows . . .
    rather than providing a performance comparison between Linux and Windows.
    Fine, try here:
    www.dba-oracle.com/oracle_tips_linux_oracle.htm
    "Roby Sherman performed an exhaustive study of the speed of Oracle on Linux and Microsoft Windows using identical hardware. Sherman currently works for Qwest Communications in the data technologies group of IT architecture and transversal services. He is a recognized expert in designing, delivering, tuning, and troubleshooting n-tier systems and technology architecture components of various size and complexity based on Oracle RDBMS technology.
    Sherman concludes that Linux was over 30% faster:
    "From perspective of performance, RedHat Linux 7.2 demonstrated an average performance advantage of 38.4 percent higher RDBMS throughput than a similarly configured Windows 2000 Server in a variety of operational scenarios." Sherman also notes: "Another point of contention was Window's lack of consistency between many database administrative functions (e.g., automated startup, shutdown, service creation, scripting) compared to what DBAs are already used to in many mainstream UNIX environments (e.g., Solaris and HP-UX)."
    Mr. Burleson's comments seem to be out of line.
    No, I'm right on the money.
    *I've seen enough companies get burned by Windows (unplanned outages, data corruption) to speak with confidence.*
    Bottom line, it's malfeasence to recommend any OS platform for a production database application that has an unsavory history.
    It does not take a genius to figure out that an OS with this kind of history should not be used with any data that you care about:
    - "blue screen of death"
    - legandary security vulnerabilities
    - memory leaks you could drive a truck through
    - Patching weekly
    - The world's most incompetant tech support
    I'm not alone in this opinion:
    thetendjee.wordpress.com/2007/01/22/oracle-10202-sucks-on-windows/
    www.google.com/search?&q=%22windows+sucks
    Mr. Burleson,
    Good point on the screen saver.
    Comparing Windows 2000, released in March 2000, with Red Hat 7.2, which was released in ... sorry, forgot the date, but I put a couple of servers running that release of Linux into service in 2001. I was kind of hoping that you would have an article which pits Windows 3.1 against the original Linux release. :-)
    You might be happy to know that things have changed significantly since Windows 3.1, and also significantly since Windows 2000. There were many improvements in Windows 2003 over Windows 2000 (I happened to read a couple large books on the subject of Windows 2003 Server a couple years ago). This page contains a couple links that you may want to browse:
    http://www.microsoft.com/windowsserver2003/evaluation/performance/default.mspx
    Windows sucks... I have heard that the Microsoft Windows operating system is on barcode scanners, phones, and even car navigation systems. I had no idea that vacuums also utilized Windows, thanks for the heads-up:
    http://www.patentstorm.us/patents/6289552/description.html
    http://advertising.microsoft.com/BestVacuum
    Interesting Google search of the day: define:malfeasence
    http://www.google.com/search?hl=en&q=define%3Amalfeasence&aq=f&oq=&aqi=
    "Did you mean: define:malfeasance
    No definitions were found for malfeasence"
    define:malfeasance
    http://www.google.com/search?hl=en&q=define%3Amalfeasance&spell=1
    "Definitions of malfeasance on the Web:
    •wrongful conduct by a public official
    wordnetweb.princeton.edu/perl/webwn
    •The expressions misfeasance and nonfeasance, and occasionally malfeasance, are used in English law with reference to the discharge of public obligations existing by common law, custom or statute.
    en.wikipedia.org/wiki/Malfeasance
    •wrongdoing; Misconduct or wrongdoing, especially by a public official that causes damage
    en.wiktionary.org/wiki/malfeasance"
    define:unsavory
    http://www.google.com/search?hl=en&q=define%3Aunsavory
    "Definitions of unsavory on the Web:
    •morally offensive; ‘an unsavory reputation’; ‘an unsavory scandal’
    •distasteful: not pleasing in odor or taste
    wordnetweb.princeton.edu/perl/webwn
    •Disreputable, not respectable, of questionable moral character
    en.wiktionary.org/wiki/unsavory"
    Regarding blue screen of death, those are rather rare with versions of Windows since the release of Windows 2000. The last blue screen that I saw on a server happened when an ECC memory module started experiencing multiple hardware bit errors which could not be corrected by the ECC memory logic. The server hardware forced the blue screen to prevent data corruption. The previous blue screen? A Windows NT 4.0 Server (circa 1996) when a new RAID controller was added to the server.
    Regarding legandary security vulnerabilities, well I don't think that it quite qualifies as legendary. However, given the wide usage of Windows (particularly by people just starting to learn to use computers), there will very definitely be more security issues to contend with - Windows often offers a larger attack surface than other operating systems. Yes, there have been many security problems over the years.
    Regarding memory leaks you could drive a truck through, I have to say that in a server environment I have not experienced that problem. On a desktop environment, I would say that it is typically the fault of poorly written applications which cause the memory leaks. Windows will often clean up after the poorly written applications when they are closed.
    Regarding patching weekly, yes there are typically frequent security and bug fixes released for the Windows platform, but I suggest that if someone is patching weekly on a server, there is probably a larger problem to be addressed.
    Regarding the world's most incompetant tech support, I am not sure that I follow your logic:
    define:incompetant
    http://www.google.com/search?hl=en&q=define%3Aincompetant
    "Did you mean: define:incompetent
    No definitions were found for incompetant.
    Suggestions:
    - Make sure all words are spelled correctly.
    - Search the Web for documents that contain 'incompetant'"
    As previously stated, consideration should be given to the operating system which is most familiar to the OP.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • SAP & Oracle Backup Integration

    Hi,
    We have Oracle 10g as the database backend, and the Legato backup software has been installed on the server. Could anyone please let me know: if I use DB13 for taking online and offline backups, how would it integrate with the Legato backup software, and are there any special licences required for integration with SAP? Hoping to get a reply soon...

    Hi
    For Oracle backups, the preferred backup method is RMAN. I suggest you read the docs for more information. Try focusing on incremental backups, as you mentioned space constraints.
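    If you do go the RMAN route, integration with a media manager such as Legato NetWorker is normally done through an SBT channel that loads the vendor's library. The sketch below is only an outline: the library path, server name and pool name are placeholder values in the style of the NetWorker Module for Oracle, so check the Legato documentation for the exact parameters of your version.
    # Hand RMAN backups to the media manager via an SBT channel (values are placeholders)
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/usr/lib/libnwora.so, ENV=(NSR_SERVER=backupsrv, NSR_DATA_VOLUME_POOL=OraclePool)';
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2;
    # Incremental strategy keeps the regular backup volumes small
    BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'weekly_l0';
    BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'daily_l1';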
    Rgds
    Adnan
