Abnormal growth of Data file

If I import a dmp file of about 20 MB, the import creates a data file of 300 MB or more (in one case 2 GB). Please tell me the appropriate storage parameters to keep the data file from growing abnormally.
Thank you.

Hi Angel,
In that case, two things could have happened:
1.- The transfer to tape, or from tape to disk, was not done in binary mode.
2.- The export utility did not finish the export successfully when it was run.
Joel Pérez
http://www.oracle.com/technology/experts
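Whatever the cause, one quick way to see whether such a 300 MB file is really full or mostly pre-allocated empty extents is to compare the allocated size with the space actually used by segments. This is only a minimal sketch; it assumes nothing beyond access to the standard DBA views:

    -- Compare allocated datafile space with space actually used by segments.
    SELECT df.tablespace_name,
           ROUND(SUM(df.bytes) / 1024 / 1024, 2) AS allocated_mb,
           ROUND(NVL(MAX(u.used_bytes), 0) / 1024 / 1024, 2) AS used_mb
    FROM dba_data_files df
    LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS used_bytes
               FROM dba_segments
               GROUP BY tablespace_name) u
      ON u.tablespace_name = df.tablespace_name
    GROUP BY df.tablespace_name
    ORDER BY df.tablespace_name;

If used_mb is far below allocated_mb, the growth is more likely coming from oversized INITIAL/NEXT storage clauses carried in the dump (for example an export taken with COMPRESS=Y) than from the data itself.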

Similar Messages

  • How to design SQL server data file and log file growth

    How should SQL Server data file and log file growth be designed (SQL Server 2012)?
    If my data file is 10 GB and my log file is 5 GB, what should the autogrowth size be, in MB (not in %)? On what basis should we determine the ideal file autogrowth size?

    It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
    The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an ongoing basis to see whether that's the appropriate amount.
    One thing you should do is enable instant file initialization by granting the service account the Perform Volume Maintenance Tasks right in group policy. This allows the data files to grow quickly when required; details here:
    https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
    It is also possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
    SELECT
        [DatabaseName],
        [FileName],
        [SPID],
        [Duration],
        [StartTime],
        [EndTime],
        CASE [EventClass]
            WHEN 92 THEN 'Data'
            WHEN 93 THEN 'Log'
        END AS FileType
    FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
    WHERE EventClass IN (92, 93)
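    For reference, a fixed-size autogrowth in MB can be set with ALTER DATABASE ... MODIFY FILE. This is only a minimal sketch: [MyDb] and the logical file names MyDb_Data / MyDb_Log are placeholders, so look up the real names in sys.database_files first.
        -- Placeholder database and logical file names; check sys.database_files.
        ALTER DATABASE [MyDb]
            MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 1024MB);
        ALTER DATABASE [MyDb]
            MODIFY FILE (NAME = N'MyDb_Log', FILEGROWTH = 512MB);
    The same values can also be set on the Files page of the database properties in Management Studio.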
    hope that helps

  • Data / Data files / Database GROWTH

    Dear experts,
    I have a practical question about reading / determining the exact fluctuations in the size of a database.
    I am posting this question in the Oracle section, as my database is Oracle, but if I am not mistaken, all of this should be valid for any database.
    So, the question itself: I have a system where all the datafiles / tablespaces are set to AUTOEXTEND, and the increment for each growth is 200 MB (meaning that, when a file grows automatically, the increment size is 200 MB). Now I would like to see whether any datafiles grew automatically, let's say today, and if so, by how many increments.
    Furthermore, I would like to ask: browsing ST04, under Space / Database / Overview on the History tab, for all the daily / weekly / monthly changes, how can I evaluate whether a change was only "internal" growth, where the database grew by consuming free space inside the DB itself, versus growth that also caused a data file to extend automatically because its AUTOEXTEND option is enabled?

    Dear Deepak,
    After quite a lot of googling, I found this:
    SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days,
           ts.tsname,
           MAX(ROUND((tsu.tablespace_size * dt.block_size) / (1024 * 1024), 2)) cur_size_MB,
           MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) usedsize_MB
    FROM DBA_HIST_TBSPC_SPACE_USAGE tsu,
         DBA_HIST_TABLESPACE_STAT ts,
         DBA_HIST_SNAPSHOT sp,
         DBA_TABLESPACES dt
    WHERE tsu.tablespace_id = ts.ts#
      AND tsu.snap_id = sp.snap_id
      AND ts.tsname = dt.tablespace_name
      AND ts.tsname NOT IN ('SYSAUX', 'SYSTEM')
    GROUP BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
    ORDER BY ts.tsname, days;
    It comes the closest to what I really need. What I need is the above query, but broken down by data file rather than by tablespace.
    Thanks a lot!!
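    Not the per-snapshot history asked for above, but as a hedged starting point the current size, AUTOEXTEND flag and increment of every data file can be read from DBA_DATA_FILES. The sketch assumes nothing beyond access to the standard DBA views; INCREMENT_BY is stored in blocks, hence the join to DBA_TABLESPACES for the block size:
        SELECT df.file_name,
               df.tablespace_name,
               ROUND(df.bytes / 1024 / 1024, 2) AS size_mb,
               df.autoextensible,
               -- increment_by is expressed in database blocks
               ROUND(df.increment_by * ts.block_size / 1024 / 1024, 2) AS increment_mb
        FROM dba_data_files df
        JOIN dba_tablespaces ts
          ON ts.tablespace_name = df.tablespace_name
        ORDER BY df.tablespace_name, df.file_name;
    Comparing this output day over day (for example by spooling it from a daily job) would show which individual files actually autoextended.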

  • Adding Data file to existing primary file group with 1 data file

    Currently our databases are configured to have only 1 data file and 1 log file. I am looking at adding a 2nd data file to the primary filegroup, which will be on a separate LUN.
    Will we benefit from adding the 2nd data file (same size as the 1st data file and same autogrowth rate), or should we create a new database with 2 data files (equal size and autogrowth rate) and import the data from the database with the single data file?
    Thanks.
    DJ

    Having another data file pointing to a different physical volume will give you better performance. Additionally, you should pre-size it (same as the first data file) with the same growth settings (preferably in MB instead of percentages).
    It is perfectly OK to add another data file to the PRIMARY filegroup as well, and SQL Server will automatically balance the data across the multiple files over time (through proportional fill).
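    As an illustration of that option (a sketch only: the database name, logical name, path and sizes below are placeholders), the second file can be added pre-sized with an MB-based growth setting:
        -- Placeholder names, path and sizes; match SIZE and FILEGROWTH to the first file.
        ALTER DATABASE [MyDb]
        ADD FILE (
            NAME = N'MyDb_Data2',
            FILENAME = N'G:\Data\MyDb_Data2.ndf',
            SIZE = 10240MB,
            FILEGROWTH = 1024MB
        ) TO FILEGROUP [PRIMARY];
    Because proportional fill favours the file with the most free space, pre-sizing the new file equal to the existing one keeps writes reasonably balanced.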
    HTH

  • SAP data files

    Hi,
    My SAP version is ECC 5.0 with Oracle 9.2.0.7 on Solaris 10.
    My current DB size is 300 GB, and I have 23 data files under oracle/<SID>/sapdata1...4.
    I have implemented the MM, FICO, PP, and QM modules. My monthly DB growth rate is 30-35 GB.
    My questions:
    01. Is this growth rate acceptable? If not, how can I find the cause?
    02. PSAP<SID> is the tablespace where my data is written. I have to keep monitoring the space in this tablespace and extend it manually once it is full. Why do I need to do this when I have enabled AUTOEXTEND ON, and why does it not extend automatically?
    03. Table reorganization / index rebuilding is not giving me any gain. Why?
    Please give me your feedback.
    With the current growth rate I can go on for another 1.5 years, but what should I do after that?
    Roshantha

    Hi,
    Including the brtools,
    >> We are using Oracle 10g on RHEL 5.4 64-bit. I want to move all my SAP data files along with TEMP. Are there any specific notes that I need to follow apart from 868750? There is no mention of the TEMP file in that note.
    You can move all the datafiles, including those of the TEMP tablespace, with the statement below:
    ALTER DATABASE RENAME FILE '<old_full_path>' TO '<new_full_path>';
    >> Apart from this, I want to know what changes need to be made in the control file for the new file system, so that the database can be started using the changed control file holding the information of the renamed and moved file system.
    You will not change anything in the control files manually. They will be updated after you execute the statement above.
    >> Are there any Oracle parameters or profile parameters that need to be changed, and how much downtime is required?
    No, you don't need to change any database profile parameter after you move the datafile(s). If you prepare a script and execute it, it will be done in a few seconds, because only the configuration is updated; the datafiles are not created from scratch.
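    For illustration only, the sequence for a single datafile looks roughly like this. The tablespace name and paths are hypothetical, and the RENAME FILE statement only updates the control file, so the file itself has to be copied or moved at the OS level first while the tablespace (or the whole database) is offline:
        -- Hypothetical tablespace name and paths; copy the file at the OS level
        -- while the tablespace is offline, then point the control file at it.
        ALTER TABLESPACE psapsr3 OFFLINE NORMAL;
        -- ...copy /oracle/SID/sapdata1/sr3_1/sr3.data1 to the new location with OS tools...
        ALTER DATABASE RENAME FILE
          '/oracle/SID/sapdata1/sr3_1/sr3.data1'
          TO '/oracle/SID/sapdata5/sr3_1/sr3.data1';
        ALTER TABLESPACE psapsr3 ONLINE;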
    Best regards,
    Orkun Gedik

  • Dense Restructure 1070020 Out of disk space. Can't create new data file

    During a dense restructure we receive: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    Essbase 6.5.3 32-bit
    Windows 2003 32-bit w/16 GB RAM
    The database is on the E: drive with 660 GB of space total; the database is ~220 GB.
    All cubes are set to unlimited.
    Tried restoring from backup; same problem.
    Over the years the database has never been recalculated, never exported and re-imported, never verified. Only new data is loaded and dense restructures are run.
    Towards the end of a dense restructure (about 89 .pan files through about 101 2 GB .pag files), we get the error: Error(1070020) Out of disk space. Cannot create a new [Data] file.
    There are still several hundred GB of free space available, and we can write to this free space outside of the Essbase application within Windows.
    The server's file system is consistent and defragmented, and we can demonstrate use of the additional space. The hard drive controller and system do not report any "hardware issues".
    Essbase.cfg file
    ; The following entry specifies the full path to JVM.DLL.
    JvmModuleLocation C:\Hyperion\Essbase\java\jre13\bin\hotspot\jvm.dll
    ;This statement loads the essldap.dll as a valid authentication module
    ;AuthenticationModule LDAP essldap.dll x
    DATAERRORLIMIT 30000
    ;These settings are here to deal with error 1040004
    NETRETRYCOUNT 2000
    NETDELAY 1600
    App log
    [Sat Oct 17 13:59:32 2009]Local/removedfrompost/removedfrompost/admin/Info(1007044)
    Restructuring Database [removedfrompost]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost/removedfrompost/admin/Error(1070020)
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008108)
    Essbase Internal Logic Error [7333]
    [Sat Oct 17 15:48:42 2009]Local/removedfrompost///Info(1008106)
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    log00002.xcp
    Assertion Failure - id=7333 condition='((!( dbp )->bFatalError))'
    - line 11260 in file datbuffm.c
    - arguments [0] [0] [0] [0]
    Additional log info from database start to restructure failure
    Starting Essbase Server - Application [removedfrompost]
    Loaded and initialized JVM module
    Reading Application Definition For [removedfrompost]
    Reading Database Definition For [removedfrompost]
    Reading Database Definition For [TempOO]
    Reading Database Definition For [WTD]
    Reading Database Mapping For [removedfrompost]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    Waiting for Login Requests
    Received Command [Load Database]
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Reading Outline For Database [removedfrompost]
    Declared Dimension Sizes = [289 125 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 119 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34391]
    Maximum Declared Blocks is [1960864521] with data block size of [72250]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17138]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [removedfrompost] can hold a maximum of [76] blocks.
    The Dyn.Calc.Cache for database [removedfrompost], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [removedfrompost]
    Reading Parameters For Database [removedfrompost]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [removedfrompost]...
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\removedfrompost\removedfrompost.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Reading Outline For Database [TempOO]
    Declared Dimension Sizes = [277 16 2 1023 139047 ]
    Actual Dimension Sizes = [277 16 1 1022 138887 ]
    The number of Dynamic Calc Non-Store Members = [68 3 0 0 0 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [4432]
    Maximum Declared Blocks is [142245081] with data block size of [8864]
    Maximum Actual Possible Blocks is [141942514] with data block size of [2717]
    Essbase needs to retrieve [1] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [TempOO] can hold a maximum of [591] blocks.
    The Dyn.Calc.Cache for database [TempOO], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [TempOO]
    Reading Parameters For Database [TempOO]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [TempOO]...
    Data cache size ==> [3145728] bytes, [144] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\TempOO\TempOO.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Load Database]
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Reading Outline For Database [WTD]
    Declared Dimension Sizes = [2 105 2 11649 158778 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 6 ]
    Actual Dimension Sizes = [1 99 1 1293 127722 1279 609 971 531 208 78 2017 11 9 9 1 1 1 1 6 1 2 1 1 2 1 1 1 2 77 1 1 1 1 1 1 1 1 1 1 1 1 1 260 3 2954 52 6 39 4 1581 5 ]
    The number of Dynamic Calc Non-Store Members = [0 29 0 257 57 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [99]
    Maximum Declared Blocks is [1849604922] with data block size of [420]
    Maximum Actual Possible Blocks is [165144546] with data block size of [70]
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [WTD] can hold a maximum of [26479] blocks.
    The Dyn.Calc.Cache for database [WTD], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Writing Parameters For Database [WTD]
    Reading Parameters For Database [WTD]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Reading Data File Free Space Information For Database [WTD]...
    Data cache size ==> [3145728] bytes, [5617] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Missing Database Config File [C:\HYPERION\ESSBASE\APP\removedfrompost\WTD\WTD.cfg], Query logging disabled
    Received Command [Get Database Volumes]
    Received Command [Set Database State]
    Writing Parameters For Database [removedfrompost]
    Writing Parameters For Database [removedfrompost]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [TempOO]
    Writing Parameters For Database [TempOO]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [Set Database State]
    Writing Parameters For Database [WTD]
    Writing Parameters For Database [WTD]
    Received Command [Get Database State]
    Received Command [Get Database Info]
    Received Command [SetApplicationState]
    Writing Application Definition For [removedfrompost]
    Writing Database Definition For [removedfrompost]
    Writing Database Definition For [TempOO]
    Writing Database Definition For [WTD]
    Writing Database Mapping For [removedfrompost]
    User [admin] set active on database [removedfrompost]
    Clear Active on User [admin] Instance [1]
    User [admin] set active on database [removedfrompost]
    Received Command [Restructure] from user [admin]
    Reading Parameters For Database [Drxxxxxx]
    Reading Outline For Database [Drxxxxxx]
    Reading Outline Transaction For Database [Drxxxxxx]
    Declared Dimension Sizes = [289 126 2 11649 168329 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 6 ]
    Actual Dimension Sizes = [289 120 1 1293 134423 1294 622 985 544 210 80 2016 11 9 9 8 8 1 1 6 1 3 1 2 2 1 2 1 2 77 2 65 1 1 1 1 1 1 1 1 1 1 1 260 4 3018 52 6 39 4 1577 5 ]
    The number of Dynamic Calc Non-Store Members = [80 37 0 257 67 ]
    The number of Dynamic Calc Store Members = [0 0 0 0 0 ]
    The logical block size is [34680]
    Maximum Declared Blocks is [1960864521] with data block size of [72828]
    Maximum Actual Possible Blocks is [173808939] with data block size of [17347]
    Formula for member [4 WK Avg Total Sls U] will be executed in [CELL] mode
    Formula for member [Loc Cnt] will be executed in [CELL] mode
    Formula for member [OH Str Cnt] will be executed in [CELL] mode
    Formula for member [Current Rtl] will be executed in [CELL] mode
    Essbase needs to retrieve [1017] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
    The Dyn.Calc.Cache for database [Drxxxxxx] can hold a maximum of [75] blocks.
    The Dyn.Calc.Cache for database [Drxxxxxx], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
    Reading Parameters For Database [Drxxxxxx]
    Unable to determine the amount of virtual memory available on the system
    Index cache size ==> [1048576] bytes, [128] index pages.
    Index page size ==> [8192] bytes.
    Using buffered I/O for the index and data files.
    Using waited I/O for the index and data files.
    Unable to determine the amount of virtual memory available on the system
    Data cache size ==> [3145728] bytes, [22] data pages
    Data file cache size ==> [0] bytes, [0] data file pages
    Performing transaction recovery for database [Drxxxxxx] following an abnormal termination of the server.
    Restructuring Database [removedfrompost]
    Out of disk space. Cannot create a new [Data] file. [adIndNewFile] aborted
    Essbase Internal Logic Error [7333]
    Exception error log [C:\HYPERION\ESSBASE\app\removedfrompost\log00002.xcp] is being created...
    Exception error log completed -- please contact technical support and provide them with this file
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

    To avoid all of this, as a best practice we do not allow dense restructures on cubes larger than 30 GB.
    As an alternative, we export the level 0 data, clear the database, and load the new data. After that we aggregate the cube to store the data at all consolidation levels.

  • Oracle Database abnormal growth

    Dear all,
    Can anybody suggest the best method to analyze and investigate abnormal growth of a database?
    Currently our database size is 3.8 TB, we have more than 1300 concurrent users, and we collect statistics daily from DB02, ST04, ST10, and program RSSTAT10.
    Can anybody suggest an answer with regard to the various options available?
    Thanks and regards
    Asif
    Message was edited by: Mohammed Asif

    Mohammed,
    There is a lot of work to do; there is no easy fix for that.
    As far as GLPCA is concerned, you need to work with the application teams on data archiving. As far as I know, this is the only way you can shrink this table.
    The TST* tables are spool/TemSe related tables. Go over OSS notes 160083 and 48400 and make sure that the spool and TemSe consistency check jobs are running periodically, based on your company's retention period limitations.
    CDCLS (the data table) is change-pointer related. You can get more meaningful information from CDHDR (the header table), such as how old the data out there is. You need change-pointer deletion/reorganization or archiving jobs scheduled. I do not have the OSS notes handy, but check OSS and schedule change-pointer deletion jobs based on your company's retention period requirements.
    After running the cleanup jobs, do not forget to run an analyze table statement at the DB level and see how much space you have gained. If it is more than 40%, I recommend a full table reorganization; otherwise unbalanced indexes might cost you performance, and you cannot give the space back to the tablespace as free space to be shared by other tables.
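    As a rough way to measure that gain (a sketch only: GLPCA is the table discussed above, but the SAPR3 owner is an assumption and may differ on your system), refresh the statistics and compare the allocated segment size with the statistics-based estimate:
        -- Owner 'SAPR3' is an assumption; adjust to the actual schema owner.
        EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'GLPCA');
        SELECT ROUND(s.bytes / 1024 / 1024) AS allocated_mb,
               ROUND(t.num_rows * t.avg_row_len / 1024 / 1024) AS approx_used_mb
        FROM dba_segments s
        JOIN dba_tables t
          ON t.owner = s.owner
         AND t.table_name = s.segment_name
        WHERE s.owner = 'SAPR3'
          AND s.segment_name = 'GLPCA';
    If approx_used_mb is well under 60% of allocated_mb (i.e. more than the 40% gain mentioned above), that supports going ahead with the full reorganization.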
    Good luck!
    Neval

  • Database data file growing very fast

    Hi
    I have a database that runs on SQL Server 2000.
    A few months back, the database was moved to a new server because the old server crashed.
    There were no issues on the old server, which had been in use for more than 10 years.
    I have noticed that the data file has been growing very fast since the database was moved to the new server.
    When I run "sp_spaceused", a lot of space is unused. Below is the result:
    database size = 50950.81 MB
    unallocated space = 14.44 MB
    reserved = 52048960 KB
    data = 9502168 KB
    index size = 85408 KB
    unused = 42461384 KB
    When I run "sp_spacedused" only for one big table, the result is:
    reserved = 19115904 KB
    data = 4241992 KB
    index size = 104 KB
    unused = 14873808 KB
    I have shrunk the database and the size didn't reduce.
    May I know how to reduce the size? Thanks.

    Hello Thu,
    Can you check whether you have active jobs in SQL Server Agent which may...
    rebuild indexes?
    run maintenance jobs of your application?
    I'm quite confident that index maintenance is causing the "growth".
    Shrinking the database is...
    useless and
    nonsense
    ...if you have index maintenance tasks. Shrinking the database means moving data pages from the very end of the database to the first free part of the database file(s). This causes index fragmentation.
    If the nightly index maintenance job then rebuilds the indexes, it uses NEW space in the database to allocate the data pages!
    Read the blog post from Paul Randal about it here:
    http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/
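    To check for such Agent jobs (a minimal sketch; the LIKE patterns are just examples, with DBCC DBREINDEX / INDEXDEFRAG being the SQL Server 2000-era commands and ALTER INDEX the later one), the msdb job tables can be queried directly:
        -- List Agent job steps whose command text looks like index maintenance.
        SELECT j.name AS job_name,
               s.step_name,
               s.command
        FROM msdb.dbo.sysjobs j
        JOIN msdb.dbo.sysjobsteps s
          ON s.job_id = j.job_id
        WHERE s.command LIKE '%DBREINDEX%'
           OR s.command LIKE '%INDEXDEFRAG%'
           OR s.command LIKE '%ALTER INDEX%';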
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

  • While importing a request, error message 'Check-sum error in data file'

    Hi friends,
    I have a problem.
    We are trying to import a request after putting the files in the cofiles and data file folders (4.6C system). While doing so, an error message is seen in the log: "Check-sum error in data file after XXXX bytes".
    Can someone help me with this?
    Thanks
    Regards
    Ankur

    Hi Ankur,
    Your file is surely corrupted or not present.
    Check for your transport request number in the cofiles and data directories under /usr/sap/trans.
    Best wishes,
    Kumar

  • Can not restore data files from backup set

    I am trying to restore Server A's backup data to Server B (both are Oracle 11g) using RMAN. The restore commands are below:
    rman target /;
    shutdown immediate;
    startup nomount;
    restore controlfile from '/usr/local/oracle/backup/20100418/ctl_xxx';
    alter database mount;
    catalog start with '/usr/local/oracle/backup/20100418/';
    restore database;
    recover database;
    alter database open resetlogs;
    The first time, it worked. But when I tried to restore another backup in the same way:
    rman target /;
    shutdown immediate;
    startup nomount;
    restore controlfile from '/usr/local/oracle/backup/20100425/ctl_xxx';
    alter database mount;
    catalog start with '/usr/local/oracle/backup/20100425/';
    restore database;
    recover database;
    alter database open resetlogs;
    The second time, I found that RMAN restored the old backup data, which means it restored the data files under '/usr/local/oracle/backup/20100418/' instead of '/usr/local/oracle/backup/20100425/'.
    So I ran 'list backup of database summary' to see the backup sets listed in the control file.
    List of Backups
    ===============
    Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    910 B 0 A DISK 18-APR-10 1 1 NO TAG20100418T020007
    945 B 0 A DISK 25-APR-10 1 1 NO TAG20100425T020007
    But when I run 'restore database preview summary' to see the backup set list to restore:
    List of Backups
    ===============
    Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    910 B 0 A DISK 18-APR-10 1 1 NO TAG20100418T020007
    there is no backup set 945 at all. That's why I could not restore the data files under '/usr/local/oracle/backup/20100425/' the second time.
    So, why are the two backup lists different? How can I restore the datafiles under '/usr/local/oracle/backup/20100425/'?
    My backup script is below:
    run{
    allocate channel c1 type disk;
    backup incremental level 0 as backupset format '$DIR/`date +%Y%m%d`/data_%d_c0_%T_%u' database;
    sql 'alter system archive log current';
    backup archivelog from time 'sysdate-14' format '$DIR/`date +%Y%m%d`/log_%d_%T_%u';
    backup current controlfile format '$DIR/`date +%Y%m%d`/ctl_%d_%T_%I_%u';
    release channel c1;
    }
    Thanks
    Edited by: user13055376 on 2010-4-29, 1:20 AM

    yeah, I am sure Tag: TAG20100425T020007 exists
    RMAN> list backupset 945;
    List of Backup Sets
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    945 Incr 0 6.40G DISK 00:05:46 25-APR-10
    BP Key: 945 Status: AVAILABLE Compressed: NO Tag: TAG20100425T020007
    Piece Name: /usr/local/oracle/backup/20100425/data_QIANGL_c0_20100425_thlbvjt7
    List of Datafiles in backup set 945
    File LV Type Ckp SCN Ckp Time Name
    1 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/system01.dbf
    2 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/sysaux01.dbf
    3 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/undotbs01.dbf
    4 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/users01.dbf
    5 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/dict01.dbf
    6 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/support01.dbf
    7 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/supportindex01.dbf
    8 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/log01.dbf
    9 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/logindex01.dbf
    10 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/lobindex01.dbf
    11 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/data01.dbf
    12 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/indexes01.dbf
    13 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/image001.dbf
    14 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/tongbuimage001.dbf
    15 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/imagebackup001.dbf
    My purpose is to use the newest backup set of Server A to update Server B, so that if Server A crashes, Server B will be usable. Is there any other way to do that?
    Retention policy: 'configure retention policy to redundancy 4'
    And Server A does a level 0 backup every 7 days and a level 1 backup every other day.
    Edited by: user13055376 on 2010-4-29, 3:45 AM
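    One hedged way to cross-check what the restored control file actually records (run from SQL*Plus on Server B with the database mounted) is to query V$BACKUP_DATAFILE and V$BACKUP_PIECE directly and compare the tags and completion times against what RMAN reports:
        -- Datafile backups known to the mounted control file, newest first.
        SELECT bp.tag,
               bp.handle,
               bd.file#,
               bd.completion_time
        FROM v$backup_datafile bd
        JOIN v$backup_piece bp
          ON bp.set_stamp = bd.set_stamp
         AND bp.set_count = bd.set_count
        ORDER BY bd.completion_time DESC, bd.file#;
    If backup set 945 shows up here but not in the RESTORE ... PREVIEW output, the preview is most likely being filtered by the point in time or incarnation it resolves to, rather than by missing records.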

  • RMAN duplicate target using set until telling me it can not find data files

    I have RMAN scripts that I use frequently to clone a database from one DB (production) to another (non-prod). These work just fine. I use SET UNTIL so that I can tell it at what point I want the new DB to be created from the backups of the source.
    I had a request to go back a few weeks, and the backup files (I do compressed backups to disk) were on tape. I had my backup person restore my backup directory for the source DB as it looked on a certain day (May 28). I have an 8-day retention policy, so the backup files that were restored went all the way back to 5/20 (which includes the point I want to use in my SET UNTIL clause).
    However, whenever I try to execute the RMAN clone, it tells me for each datafile:
    RMAN-06023: no backup or copy of datafile 1 found to restore
    Like I said, I am able to do this for a current backup that is on disk. I moved the backup files to the source DB server and it works fine. However, with these files restored from tape it errors.
    Here is the RMAN script:
    spool log to c:\temp\clone_CSPROD_CSPRSUM1.log;
    #connect to catalog <catalog info>
    # target is the source and auxiliary is destination
    #connect target <put in source info here>
    #connect auxiliary /
    run {
    allocate auxiliary channel d1 type disk format 'F:\backups\CSPROD\d1\CSPROD_DATA_%s';
    allocate auxiliary channel d2 type disk format 'F:\backups\CSPROD\d2\CSPROD_DATA_%s';
    allocate auxiliary channel d3 type disk format 'F:\backups\CSPROD\d3\CSPROD_DATA_%s';
    allocate auxiliary channel d4 type disk format 'F:\backups\CSPROD\d4\CSPROD_DATA_%s';
    allocate auxiliary channel a1 type disk format 'F:\backups\CSPROD\a1\CSPROD_arch_%s';
    ##Archivelog number get from sql archive log list command
    #set until sequence 831;
    set until time "to_date('2011-05-25 08:00:00', 'YYYY-MM-DD HH24:MI:SS')";
    duplicate target database to CSPRSUM1 nofilenamecheck
    logfile
    group 1('+DATA/CSPRSUM1/onlinelog/redo1a.log', '+FRA/CSPRSUM1/onlinelog/redo1b.log') size 50m,
    group 2('+DATA/CSPRSUM1/onlinelog/redo2a.log', '+FRA/CSPRSUM1/onlinelog/redo2b.log') size 50m,
    group 3('+DATA/CSPRSUM1/onlinelog/redo3a.log', '+FRA/CSPRSUM1/onlinelog/redo3b.log') size 50m;
    }
    Exit
    Here is the output:
    Spooling started in log file: c:\temp\clone_CSPROD_CSPRSUM1.log
    Recovery Manager11.1.0.7.0
    RMAN> #connect catalog <redacted info>>
    2> # target is the source and auxiliary is destination
    3> #connect target <redacted info>
    4> #connect auxiliary /
    5>
    6> run {
    7> allocate auxiliary channel d1 type disk format 'F:\backups\CSPROD\d1\CSPROD_DATA_%s';
    8> allocate auxiliary channel d2 type disk format 'F:\backups\CSPROD\d2\CSPROD_DATA_%s';
    9> allocate auxiliary channel d3 type disk format 'F:\backups\CSPROD\d3\CSPROD_DATA_%s';
    10> allocate auxiliary channel d4 type disk format 'F:\backups\CSPROD\d4\CSPROD_DATA_%s';
    11> allocate auxiliary channel a1 type disk format 'F:\backups\CSPROD\a1\CSPROD_arch_%s';
    12> ##Archivelog number get from sql archive log list command
    13> #set until sequence 831;
    14> set until time "to_date('2011-05-25 08:00:00', 'YYYY-MM-DD HH24:MI:SS')";
    15> duplicate target database to CSPRSUM1 nofilenamecheck
    16> logfile
    17> group 1('+DATA/CSPRSUM1/onlinelog/redo1a.log', '+FRA/CSPRSUM1/onlinelog/redo1b.log') size 50m,
    18> group 2('+DATA/CSPRSUM1/onlinelog/redo2a.log', '+FRA/CSPRSUM1/onlinelog/redo2b.log') size 50m,
    19> group 3('+DATA/CSPRSUM1/onlinelog/redo3a.log', '+FRA/CSPRSUM1/onlinelog/redo3b.log') size 50m;
    20> }
    starting full resync of recovery catalog
    full resync complete
    allocated channel: d1
    channel d1: SID=534 device type=DISK
    allocated channel: d2
    channel d2: SID=533 device type=DISK
    allocated channel: d3
    channel d3: SID=532 device type=DISK
    allocated channel: d4
    channel d4: SID=531 device type=DISK
    allocated channel: a1
    channel a1: SID=530 device type=DISK
    executing command: SET until clause
    Starting Duplicate Db at 15-JUN-11
    RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.
    contents of Memory Script:
    set until scn 260398799;
    set newname for datafile 1 to
    "+data";
    set newname for datafile 2 to
    "+data";
    set newname for datafile 3 to
    "+data";
    set newname for datafile 4 to
    "+data";
    set newname for datafile 5 to
    "+data";
    set newname for datafile 6 to
    "+data";
    set newname for datafile 7 to
    "+data";
    set newname for datafile 8 to
    "+data";
    set newname for datafile 9 to
    "+data";
    set newname for datafile 10 to
    "+data";
    set newname for datafile 11 to
    "+data";
    set newname for datafile 12 to
    "+data";
    set newname for datafile 13 to
    "+data";
    set newname for datafile 14 to
    "+data";
    set newname for datafile 15 to
    "+data";
    set newname for datafile 16 to
    "+data";
    set newname for datafile 17 to
    "+data";
    set newname for datafile 18 to
    "+data";
    set newname for datafile 19 to
    "+data";
    set newname for datafile 20 to
    "+data";
    set newname for datafile 21 to
    "+data";
    set newname for datafile 22 to
    "+data";
    set newname for datafile 23 to
    "+data";
    set newname for datafile 24 to
    "+data";
    set newname for datafile 25 to
    "+data";
    set newname for datafile 26 to
    "+data";
    set newname for datafile 27 to
    "+data";
    set newname for datafile 28 to
    "+data";
    set newname for datafile 29 to
    "+data";
    set newname for datafile 30 to
    "+data";
    set newname for datafile 31 to
    "+data";
    set newname for datafile 32 to
    "+data";
    set newname for datafile 33 to
    "+data";
    set newname for datafile 34 to
    "+data";
    set newname for datafile 35 to
    "+data";
    set newname for datafile 36 to
    "+data";
    set newname for datafile 37 to
    "+data";
    set newname for datafile 38 to
    "+data";
    set newname for datafile 39 to
    "+data";
    set newname for datafile 40 to
    "+data";
    set newname for datafile 41 to
    "+data";
    set newname for datafile 42 to
    "+data";
    set newname for datafile 43 to
    "+data";
    set newname for datafile 44 to
    "+data";
    set newname for datafile 45 to
    "+data";
    set newname for datafile 46 to
    "+data";
    set newname for datafile 47 to
    "+data";
    set newname for datafile 48 to
    "+data";
    set newname for datafile 49 to
    "+data";
    set newname for datafile 50 to
    "+data";
    set newname for datafile 51 to
    "+data";
    set newname for datafile 52 to
    "+data";
    set newname for datafile 53 to
    "+data";
    set newname for datafile 54 to
    "+data";
    set newname for datafile 55 to
    "+data";
    set newname for datafile 56 to
    "+data";
    set newname for datafile 57 to
    "+data";
    set newname for datafile 58 to
    "+data";
    set newname for datafile 59 to
    "+data";
    set newname for datafile 60 to
    "+data";
    set newname for datafile 61 to
    "+data";
    set newname for datafile 62 to
    "+data";
    set newname for datafile 63 to
    "+data";
    set newname for datafile 64 to
    "+data";
    set newname for datafile 65 to
    "+data";
    set newname for datafile 66 to
    "+data";
    set newname for datafile 67 to
    "+data";
    set newname for datafile 68 to
    "+data";
    set newname for datafile 69 to
    "+data";
    set newname for datafile 70 to
    "+data";
    set newname for datafile 71 to
    "+data";
    set newname for datafile 72 to
    "+data";
    set newname for datafile 73 to
    "+data";
    set newname for datafile 74 to
    "+data";
    set newname for datafile 75 to
    "+data";
    set newname for datafile 76 to
    "+data";
    set newname for datafile 77 to
    "+data";
    set newname for datafile 78 to
    "+data";
    set newname for datafile 79 to
    "+data";
    set newname for datafile 80 to
    "+data";
    set newname for datafile 81 to
    "+data";
    set newname for datafile 82 to
    "+data";
    set newname for datafile 83 to
    "+data";
    set newname for datafile 84 to
    "+data";
    set newname for datafile 85 to
    "+data";
    set newname for datafile 86 to
    "+data";
    set newname for datafile 87 to
    "+data";
    set newname for datafile 88 to
    "+data";
    set newname for datafile 89 to
    "+data";
    set newname for datafile 90 to
    "+data";
    set newname for datafile 91 to
    "+data";
    set newname for datafile 92 to
    "+data";
    set newname for datafile 93 to
    "+data";
    set newname for datafile 94 to
    "+data";
    set newname for datafile 95 to
    "+data";
    set newname for datafile 96 to
    "+data";
    set newname for datafile 97 to
    "+data";
    set newname for datafile 98 to
    "+data";
    set newname for datafile 99 to
    "+data";
    set newname for datafile 100 to
    "+data";
    set newname for datafile 101 to
    "+data";
    set newname for datafile 102 to
    "+data";
    set newname for datafile 103 to
    "+data";
    set newname for datafile 104 to
    "+data";
    set newname for datafile 105 to
    "+data";
    set newname for datafile 106 to
    "+data";
    set newname for datafile 107 to
    "+data";
    set newname for datafile 108 to
    "+data";
    set newname for datafile 109 to
    "+data";
    set newname for datafile 110 to
    "+data";
    set newname for datafile 111 to
    "+data";
    set newname for datafile 112 to
    "+data";
    set newname for datafile 113 to
    "+data";
    set newname for datafile 114 to
    "+data";
    set newname for datafile 115 to
    "+data";
    set newname for datafile 116 to
    "+data";
    set newname for datafile 117 to
    "+data";
    set newname for datafile 118 to
    "+data";
    set newname for datafile 119 to
    "+data";
    set newname for datafile 120 to
    "+data";
    set newname for datafile 121 to
    "+data";
    set newname for datafile 122 to
    "+data";
    restore
    clone database
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 15-JUN-11
    released channel: d1
    released channel: d2
    released channel: d3
    released channel: d4
    released channel: a1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 06/15/2011 11:21:42
    RMAN-01005: not all datafiles have backups that can be recovered to scn 260398799
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 122 found to restore
    RMAN-06023: no backup or copy of datafile 121 found to restore
    RMAN-06023: no backup or copy of datafile 120 found to restore
    RMAN-06023: no backup or copy of datafile 119 found to restore
    RMAN-06023: no backup or copy of datafile 118 found to restore
    RMAN-06023: no backup or copy of datafile 117 found to restore
    RMAN-06023: no backup or copy of datafile 116 found to restore
    RMAN-06023: no backup or copy of datafile 115 found to restore
    RMAN-06023: no backup or copy of datafile 114 found to restore
    RMAN-06023: no backup or copy of datafile 113 found to restore
    RMAN-06023: no backup or copy of datafile 112 found to restore
    RMAN-06023: no backup or copy of datafile 111 found to restore
    RMAN-06023: no backup or copy of datafile 110 found to restore
    RMAN-06023: no backup or copy of datafile 109 found to restore
    RMAN-06023: no backup or copy of datafile 108 found to restore
    RMAN-06023: no backup or copy of datafile 107 found to restore
    RMAN-06023: no backup or copy of datafile 106 found to restore
    RMAN-06023: no backup or copy of datafile 105 found to restore
    RMAN-06023: no backup or copy of datafile 104 found to restore
    RMAN-06023: no backup or copy of datafile 103 found to restore
    RMAN-06023: no backup or copy of datafile 102 found to restore
    RMAN-06023: no backup or copy of datafile 101 found to restore
    RMAN-06023: no backup or copy of datafile 100 found to restore
    RMAN-06023: no backup or copy of datafile 99 found to restore
    RMAN-06023: no backup or copy of datafile 98 found to restore
    RMAN-06023: no backup or copy of datafile 97 found to restore
    RMAN-06023: no backup or copy of datafile 96 found to restore
    RMAN-06023: no backup or copy of datafile 95 found to restore
    RMAN-06023: no backup or copy of datafile 94 found to restore
    RMAN-06023: no backup or copy of datafile 93 found to restore
    RMAN-06023: no backup or copy of datafile 92 found to restore
    RMAN-06023: no backup or copy of datafile 91 found to restore
    RMAN-06023: no backup or copy of datafile 90 found to restore
    RMAN-06023: no backup or copy of datafile 89 found to restore
    RMAN-06023: no backup or copy of datafile 88 found to restore
    RMAN-06023: no backup or copy of datafile 87 found to restore
    RMAN-06023: no backup or copy of datafile 86 found to restore
    RMAN-06023: no backup or copy of datafile 85 found to restore
    RMAN-06023: no backup or copy of datafile 84 found to restore
    RMAN-06023: no backup or copy of datafile 83 found to restore
    RMAN-06023: no backup or copy of datafile 82 found to restore
    RMAN-06023: no backup or copy of datafile 81 found to restore
    RMAN-06023: no backup or copy of datafile 80 found to restore
    RMAN-06023: no backup or copy of datafile 79 found to restore
    RMAN-06023: no backup or copy of datafile 78 found to restore
    RMAN-06023: no backup or copy of datafile 77 found to restore
    RMAN-06023: no backup or copy of datafile 76 found to restore
    RMAN-06023: no backup or copy of datafile 75 found to restore
    RMAN-06023: no backup or copy of datafile 74 found to restore
    RMAN-06023: no backup or copy of datafile 73 found to restore
    RMAN-06023: no backup or copy of datafile 72 found to restore
    RMAN-06023: no backup or copy of datafile 71 found to restore
    RMAN-06023: no backup or copy of datafile 70 found to restore
    RMAN-06023: no backup or copy of datafile 69 found to restore
    RMAN-06023: no backup or copy of datafile 68 found to restore
    RMAN-06023: no backup or copy of datafile 67 found to restore
    RMAN-06023: no backup or copy of datafile 66 found to restore
    RMAN-06023: no backup or copy of datafile 65 found to restore
    RMAN-06023: no backup or copy of datafile 64 found to restore
    RMAN-06023: no backup or copy of datafile 63 found to restore
    RMAN-06023: no backup or copy of datafile 62 found to restore
    RMAN-06023: no backup or copy of datafile 61 found to restore
    RMAN-06023: no backup or copy of datafile 60 found to restore
    RMAN-06023: no backup or copy of datafile 59 found to restore
    RMAN-06023: no backup or copy of datafile 58 found to restore
    RMAN-06023: no backup or copy of datafile 57 found to restore
    RMAN-06023: no backup or copy of datafile 56 found to restore
    RMAN-06023: no backup or copy of datafile 55 found to restore
    RMAN-06023: no backup or copy of datafile 54 found to restore
    RMAN-06023: no backup or copy of datafile 53 found to restore
    RMAN-06023: no backup or copy of datafile 52 found to restore
    RMAN-06023: no backup or copy of datafile 51 found to restore
    RMAN-06023: no backup or copy of datafile 50 found to restore
    RMAN-06023: no backup or copy of datafile 49 found to restore
    RMAN-06023: no backup or copy of datafile 48 found to restore
    RMAN-06023: no backup or copy of datafile 47 found to restore
    RMAN-06023: no backup or copy of datafile 46 found to restore
    RMAN-06023: no backup or copy of datafile 45 found to restore
    RMAN-06023: no backup or copy of datafile 44 found to restore
    RMAN-06023: no backup or copy of datafile 43 found to restore
    RMAN-06023: no backup or copy of datafile 42 found to restore
    RMAN-06023: no backup or copy of datafile 41 found to restore
    RMAN-06023: no backup or copy of datafile 40 found to restore
    RMAN-06023: no backup or copy of datafile 39 found to restore
    RMAN-06023: no backup or copy of datafile 38 found to restore
    RMAN-06023: no backup or copy of datafile 37 found to restore
    RMAN-06023: no backup or copy of datafile 36 found to restore
    RMAN-06023: no backup or copy of datafile 35 found to restore
    RMAN-06023: no backup or copy of datafile 34 found to restore
    RMAN-06023: no backup or copy of datafile 33 found to restore
    RMAN-06023: no backup or copy of datafile 32 found to restore
    RMAN-06023: no backup or copy of datafile 31 found to restore
    RMAN-06023: no backup or copy of datafile 30 found to restore
    RMAN-06023: no backup or copy of datafile 29 found to restore
    RMAN-06023: no backup or copy of datafile 28 found to restore
    RMAN-06023: no backup or copy of datafile 27 found to restore
    RMAN-06023: no backup or copy of datafile 26 found to restore
    RMAN-06023: no backup or copy of datafile 25 found to restore
    RMAN-06023: no backup or copy of datafile 24 found to restore
    RMAN-06023: no backup or copy of datafile 23 found to restore
    RMAN-06023: no backup or copy of datafile 22 found to restore
    RMAN-06023: no backup or copy of datafile 21 found to restore
    RMAN-06023: no backup or copy of datafile 20 found to restore
    RMAN-06023: no backup or copy of datafile 19 found to restore
    RMAN-06023: no backup or copy of datafile 18 found to restore
    RMAN-06023: no backup or copy of datafile 17 found to restore
    RMAN-06023: no backup or copy of datafile 16 found to restore
    RMAN-06023: no backup or copy of datafile 15 found to restore
    RMAN-06023: no backup or copy of datafile 14 found to restore
    RMAN-06023: no backup or copy of datafile 13 found to restore
    RMAN-06023: no backup or copy of datafile 12 found to restore
    RMAN-06023: no backup or copy of datafile 11 found to restore
    RMAN-06023: no backup or copy of datafile 10 found to restore
    RMAN-06023: no backup or copy of datafile 9 found to restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN> Exit
    Recovery Manager complete.
    Edited by: kerrygm on Jun 15, 2011 11:36 AM

    Maybe I'm making progress.
    I got into RMAN and ran a LIST BACKUP command. Looking at the output, I see that it does show the 5/20 backup. Here is part of the output:
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    965689 Full 19.86M DISK 00:00:01 16-JUN-11
    BP Key: 967942 Status: AVAILABLE Compressed: NO Tag: TAG20110616T231223
    Piece Name: F:\BACKUPS\CSPROD\C-368871413-20110616-01
    SPFILE Included: Modification time: 16-JUN-11
    SPFILE db_unique_name: CSPROD
    Control File Included: Ckp SCN: 369596068 Ckp time: 16-JUN-11
    BS Key Size Device Type Elapsed Time Completion Time
    966218 403.84M DISK 00:00:00 20-MAY-11
    BP Key: 967842 Status: AVAILABLE Compressed: YES Tag: TAG20110520T231012
    Piece Name: F:\BACKUPS\CSPROD\A1\CSPROD_ARCH_5844
    List of Archived Logs in backup set 966218
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 2112 237027300 20-MAY-11 237046947 20-MAY-11
    1 2113 *237046947* 20-MAY-11 237059284 20-MAY-11
    1 2114 237059284 20-MAY-11 237216514 20-MAY-11
    1 2115 237216514 20-MAY-11 237709545 20-MAY-11
    1 2116 237709545 20-MAY-11 237722825 20-MAY-11
    1 2117 237722825 20-MAY-11 237730431 20-MAY-11
    1 2118 237730431 20-MAY-11 237860199 20-MAY-11
    I then changed my RMAN script to use the *237046947* SCN number that the listing says is from 5/20. I then kicked off my RMAN script and got the following result.
    Spooling started in log file: c:\temp\clone_CSPROD_CSPRSUM1.log
    Recovery Manager11.1.0.7.0
    RMAN> #connect catalog 'rmancat/rmancat@rmancat';
    2> # target is the source and auxiliary is destination
    3> #connect target sys/<pwd>@csprod
    4> #connect auxiliary /
    5>
    6> run {
    7> allocate auxiliary channel d1 type disk format 'F:\backups\CSPROD\d1\CSPROD_DATA_%s';
    8> allocate auxiliary channel d2 type disk format 'F:\backups\CSPROD\d2\CSPROD_DATA_%s';
    9> allocate auxiliary channel d3 type disk format 'F:\backups\CSPROD\d3\CSPROD_DATA_%s';
    10> allocate auxiliary channel d4 type disk format 'F:\backups\CSPROD\d4\CSPROD_DATA_%s';
    11> allocate auxiliary channel a1 type disk format 'F:\backups\CSPROD\a1\CSPROD_arch_%s';
    12> ##Archivelog number get from sql archive log list command
    13> set until sequence 237046947;
    14> ##set until time "to_date('2011-05-20 08:00:00', 'YYYY-MM-DD HH24:MI:SS')";
    15> duplicate target database to CSPRSUM1 nofilenamecheck
    16> logfile
    17> group 1('+DATA/CSPRSUM1/onlinelog/redo1a.log', '+FRA/CSPRSUM1/onlinelog/redo1b.log') size 50m,
    18> group 2('+DATA/CSPRSUM1/onlinelog/redo2a.log', '+FRA/CSPRSUM1/onlinelog/redo2b.log') size 50m,
    19> group 3('+DATA/CSPRSUM1/onlinelog/redo3a.log', '+FRA/CSPRSUM1/onlinelog/redo3b.log') size 50m;
    20> }
    allocated channel: d1
    channel d1: SID=532 device type=DISK
    allocated channel: d2
    channel d2: SID=533 device type=DISK
    allocated channel: d3
    channel d3: SID=531 device type=DISK
    allocated channel: d4
    channel d4: SID=555 device type=DISK
    allocated channel: a1
    channel a1: SID=534 device type=DISK
    executing command: SET until clause
    Starting Duplicate Db at 17-JUN-11
    RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.
    contents of Memory Script:
    set until scn 373021170;
    set newname for datafile 1 to
    "+data";
    set newname for datafile 2 to
    Starting restore at 17-JUN-11
    channel d1: starting datafile backup set restore
    channel d1: specifying datafile(s) to restore from backup set
    channel d1: restoring datafile 00003 to +DATA
    channel d2: restoring datafile 00122 to +DATA
    channel d2: reading from backup piece F:\BACKUPS\CSPROD\D4\CSPROD_DATA_6149
    channel d3: starting datafile backup set restore
    channel d3: specifying datafile(s) to restore from backup set
    channel d3: restoring datafile 00002 to +DATA
    channel d3: restoring datafile 00020 to +DATA
    channel d4: restoring datafile 00120 to +DATA
    channel d4: reading from backup piece F:\BACKUPS\CSPROD\D2\CSPROD_DATA_6147
    channel d1: piece handle=F:\BACKUPS\CSPROD\D3\CSPROD_DATA_6148 tag=TAG20110616T230013
    channel d1: restored backup piece 1
    channel d1: restore complete, elapsed time: 00:01:15
    channel d4: piece handle=F:\BACKUPS\CSPROD\D2\CSPROD_DATA_6147 tag=TAG20110616T230013
    channel d4: restored backup piece 1
    channel d4: restore complete, elapsed time: 00:03:15
    channel d2: piece handle=F:\BACKUPS\CSPROD\D4\CSPROD_DATA_6149 tag=TAG20110616T230013
    channel d2: restored backup piece 1
    channel d2: restore complete, elapsed time: 00:03:35
    channel d3: piece handle=F:\BACKUPS\CSPROD\D1\CSPROD_DATA_6146 tag=TAG20110616T230013
    channel d3: restored backup piece 1
    channel d3: restore complete, elapsed time: 00:04:55
    Finished restore at 17-JUN-11
    sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CSPRSUM1" RESETLOGS ARCHIVELOG
    MAXLOGFILES 23
    MAXLOGMEMBERS 3
    MAXDATAFILES 1021
    MAXINSTANCES 8
    MAXLOGHISTORY 584
    LOGFILE
    GROUP 1 ( '+DATA/CSPRSUM1/onlinelog/redo1a.log', '+FRA/CSPRSUM1/onlinelog/redo1b.log' ) SIZE 50 M ,
    GROUP 2 ( '+DATA/CSPRSUM1/onlinelog/redo2a.log', '+FRA/CSPRSUM1/onlinelog/redo2b.log' ) SIZE 50 M ,
    GROUP 3 ( '+DATA/CSPRSUM1/onlinelog/redo3a.log', '+FRA/CSPRSUM1/onlinelog/redo3b.log' ) SIZE 50 M
    DATAFILE
    '+DATA/csprsum1/datafile/system.1535.754067379'
    CHARACTER SET WE8MSWIN1252
    contents of Memory Script:
    switch clone datafile all;
    executing Memory Script
    datafile 2 switched to datafile copy
    input datafile copy RECID=1 STAMP=754067676 file name=+DATA/csprsum1/datafile/sysaux.1536.754067379
    datafile 121 switched to datafile copy
    input datafile copy RECID=120 STAMP=754067677 file name=+DATA/csprsum1/datafile/waapp.1653.754067403
    datafile 122 switched to datafile copy
    input datafile copy RECID=121 STAMP=754067677 file name=+DATA/csprsum1/datafile/cu_custom.1549.754067381
    contents of Memory Script:
    set until scn 373021170;
    recover
    clone database
    delete archivelog
    executing Memory Script
    executing command: SET until clause
    Starting recover at 17-JUN-11
    starting media recovery
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '+DATA/csprsum1/datafile/system.1535.754067379'
    released channel: d1
    released channel: d2
    released channel: d3
    released channel: d4
    released channel: a1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 06/17/2011 15:14:54
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 3818 and starting SCN of 372868482 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3817 and starting SCN of 372685133 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3816 and starting SCN of 372593528 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3815 and starting SCN of 372374385 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3814 and starting SCN of 372325053 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3813 and starting SCN of 372316138 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3812 and starting SCN of 372249619 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3811 and starting SCN of 371775981 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3810 and starting SCN of 371643855 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3809 and starting SCN of 371614442 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3808 and starting SCN of 371432892 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3807 and starting SCN of 371121955 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3806 and starting SCN of 371047786 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3805 and starting SCN of 371029095 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3804 and starting SCN of 371018252 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3803 and starting SCN of 370947755 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3802 and starting SCN of 370857440 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3801 and starting SCN of 370814417 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3800 and starting SCN of 370797061 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3799 and starting SCN of 370756569 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3798 and starting SCN of 370746833 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3797 and starting SCN of 370746693 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3796 and starting SCN of 370746568 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3795 and starting SCN of 370746068 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3794 and starting SCN of 370745451 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3793 and starting SCN of 370733767 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3792 and starting SCN of 370674629 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3791 and starting SCN of 370501026 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3790 and starting SCN of 370498513 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3789 and starting SCN of 370497977 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3788 and starting SCN of 370497635 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3787 and starting SCN of 370497319 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3786 and starting SCN of 370496938 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3785 and starting SCN of 370495428 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3784 and starting SCN of 370491173 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3783 and starting SCN of 370390074 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3782 and starting SCN of 370385574 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3781 and starting SCN of 370385310 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3780 and starting SCN of 370385044 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3779 and starting SCN of 370368486 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3778 and starting SCN of 370333960 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3777 and starting SCN of 370330980 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3776 and starting SCN of 370327876 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3775 and starting SCN of 370174433 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3774 and starting SCN of 370148373 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3773 and starting SCN of 370147821 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3772 and starting SCN of 370147623 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3771 and starting SCN of 370141528 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3770 and starting SCN of 369711271 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3769 and starting SCN of 369621694 found to restore
    RMAN-06025: no backup of archived log for thread 1 with sequence 3768 and starting SCN of 369595614 found to restore
    RMAN> Exit
    Recovery Manager complete.
    What I am not sure about is whether the SCN I used in my set until sequence command really corresponds to 5/20: when I look at the backup pieces RMAN reads during the restore, they appear to come from backups taken in June
    (e.g. channel d1: reading from backup piece F:\BACKUPS\CSPROD\D3\CSPROD_DATA_6148 -- this is a 6/16 backup file).
    Also, as you can see, the media recovery did not succeed.
    Thanks in advance for your help.
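    One possible way to cross-check this, assuming access to the source (target) database: SET UNTIL SEQUENCE expects a redo log sequence number, while 237046947 looks like an SCN, so the until clause may not have restricted the duplicate to the 5/20 point at all (the generated memory script shows set until scn 373021170, an SCN well past the May values listed earlier). A minimal SQL sketch that maps an SCN to the archived-log sequence containing it, which could then be used with SET UNTIL SEQUENCE (or the SCN itself with SET UNTIL SCN):

        -- Hedged sketch: find the archived-log sequence covering SCN 237046947
        -- (the SCN quoted in the post above); add a thread# filter on a RAC system.
        SELECT thread#, sequence#, first_change#, next_change#, completion_time
        FROM   v$archived_log
        WHERE  237046947 BETWEEN first_change# AND next_change# - 1
        ORDER  BY thread#, sequence#;

    The RMAN-06025 messages also indicate that the archived-log backups needed to recover the restored datafiles up to the chosen until point are missing, which is why media recovery stopped.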

  • I have a VI and an attached .txt data file. Now I want to read the data from the .txt file and display it as an array on the front panel, but the result is not right. Any help?

    Attachments:
    try2.txt (2 KB)
    read_array.vi (21 KB)

    The problem is in the delimiters in your text file. By default, Read From Spreadsheet File.vi expects a tab delimited file. You can specify a delimiter (like a space), but Read From Spreadsheet File.vi has a problem with repeated delimiters: if you specify a single space as a delimiter and Read From Spreadsheet File.vi finds two spaces back-to-back, it stops reading that line. Your file (as I got it from your earlier post) is delimited by 4 spaces.
    Here are some of your choices to fix your problem.
    1. Change the source file to a tab delimited file. Your VI will then run as is.
    2. Change the source file to be delimited by a single space (rather than 4), then wire a string constant containing one space to the delimiter input of Read From Spreadsheet File.vi.
    3. Wire a string constant containing 4 spaces to the delimiter input of Read From Spreadsheet File.vi. Then your text file will run as is.
    Depending on where your text file comes from (see more comments below), I'd vote for choice 1: a tab delimited text file. It's the most common text output of spreadsheet programs.
    Comments for choices 1 and 2: Where does the text file come from? Is it automatically generated or manually generated? Will it be generated multiple times or just once? If it's manually generated or generated just once, you can use any text editor to change 4 spaces to a tab or to a single space.
    Note: if you want to change it to a tab delimited file, you can't type a tab directly into a box in the search & replace dialog of many programs like Notepad, but you can cut and paste one:
    1. Before you start your search & replace (just in the text window of the editor), press Tab. A tab character will be entered.
    2. Press Shift-LeftArrow (not Backspace) to highlight the tab character, then press Ctrl-X to cut it.
    3. Start your search & replace (Ctrl-H in Notepad on Windows 2000). Click into the Find What box and enter four spaces. Click into the Replace With box and press Ctrl-V to paste the tab character.
    And another thing: older versions of Notepad don't have search & replace, so use any editor or word processor that does.

  • OraRRP Error with "Unable to copy data file;Error code 2, check disk space"

    Hi,
    Some users get the message "Unable to copy data file;Error code 2, check disk space" when they run a report with orarrp, but most users do not get it.
    I checked the free space on both the server and the client side; it is more than sufficient.
    I also checked that the directory set in the REPORTXX_TMP variable exists.
    My users call reports via URL (rwservlet), and the error occurs for all reports.
    How can I solve this problem?
    Thanks in advance.
    Tawatchai R.

    Hi,
    I have the same problem now. One user temporarily has problems downloading .rrpa files via a URL (rwservlet) request. Error code: "Unable to copy data file;Error code 2, check disk space". Did you get a solution?
    Thanks in advance. Axel

  • Index File group on same drive as data files

    I've just found a file group used for indexes on the same drive as the data files.
    Am I correct in saying there is little benefit to this? Should the index filegroup be on its own spindle?
    Mr Shaw... One day I might know a thing or two about SQL Server!

    There will definitely be a performance gain, provided your queries use indexes that are stored on those index filegroups.
    It also helps with parallel processing: having the data and the indexes under separate disk heads lets them be read in parallel. For more information you can refer to the link below:
    https://technet.microsoft.com/en-us/library/ms190433%28v=sql.105%29.aspx
    --Prashanth
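    As a hedged illustration of the approach described above (the database name, path, table and column are hypothetical, chosen only for the sketch), placing a nonclustered index on a filegroup that lives on a separate drive might look like this:

        -- Hedged sketch: hypothetical SalesDB database, E:\ drive, and Orders table.
        ALTER DATABASE SalesDB ADD FILEGROUP IndexFG;

        ALTER DATABASE SalesDB
        ADD FILE ( NAME = SalesDB_Idx1,
                   FILENAME = 'E:\SQLIndexes\SalesDB_Idx1.ndf',
                   SIZE = 1024MB, FILEGROWTH = 512MB )
        TO FILEGROUP IndexFG;

        -- Create (or rebuild) the nonclustered index on the new filegroup.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        ON IndexFG;

    Whether this pays off depends on the physical layout: the gain comes from the data and the index sitting on separate spindles, so with both filegroups on the same drive (as in the original question) the benefit is largely organizational rather than performance-related.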

  • Location of iPhone sync data files or How to not have to re-install all my iPhone apps?

    Hi Folks,
    I recently did a clean install of Lion on my MacBook Pro and I've chosen not to use Migration Assistant because I don't want to move over a bunch of cruft under my 4+ year old home directory (Tiger > Leopard > SL).  I've been very selective about the things I've moved over and all had gone well until I went to sync my iPhone.  I've transferred purchases from the iPhone, but if I check Sync Apps, iTunes wants to wipe and re-install everything!  This wouldn't be a problem, except for the time it would take to re-organize all my apps on the iPhone again.   (Side note: Please Apple, give us an "auto-organize" button for apps!)
    Any idea how to move over my old sync data files to avoid having to nuke/reinstall my iPhone's apps?  Or is there another option to preserve how my apps are organized?
    Many thanks in advance,
    Rob

    Skyler Richard wrote:
    Setting it up as new will not put anything over; then, when you sync, it will sync the applications over unorganized.  My advice should resolve the issue; I work in technical support for the iPhone, iPad, and iPod touch at Apple...
    Right-click the iPhone and click Transfer Purchases, and then do the backup and restore from backup... That will transfer all your applications over to your computer and then organize them properly in their folders...
    OK, that does make sense so I won't bother setting it up as a new phone.
    Quick follow-up question though: do I need to click the Sync Apps check box before I do a restore?  The reason I ask is that I've done a transfer purchases/backup/restore cycle, but, contrary to what I expected, none of the tabs (Apps, Music, Movies, etc.) have the sync check box checked following the restore.  And when I go to check the Sync Apps check box, that's when it warns me "Are you sure you want to sync apps?  All existing apps and their data on the iPhone will be replaced with apps from this iTunes library."  That's when the apps become unorganized.
    Thanks so much for your continued help!
    Rob
