Backint/brrestore for refresh of test system?

We have a sandbox system that we refresh periodically from production. All systems are running Oracle 9.2. We use the backint interface and brbackup (called from DB13) to back up the production system daily, but we do not back up the sandbox.
To refresh the sandbox system, I copy down the latest online backup of the production system and perform database recovery. We've had some performance problems in the restore, and our backup system vendor (Legato) has advised me that I should use brrestore instead of Legato's recover utility to get the files back from tape.
Here's the question: Can I use brrestore to restore files from a backup of a different system? And if so, what is the procedure? With the Legato tool, I can easily pull down files from a different database and relocate them wherever I like. Can brrestore do this?
Thanks for your help.
Eva Blinder
Sonopress LLC, Weaverville, NC USA

Dear Eva,
I use Legato as the backup tool in my Windows environment.
As I understand it, you need to perform a restore keeping the same Oracle SID.
You have to launch brrestore on the target server, passing a .utl file (for example, restore.utl in the directory %ORACLE_HOME%\database\) similar to the .utl file (for example, backup.utl) that you used to back up the database on the source server.
In restore.utl you have to add the parameter:
client = servername.xxx.yyy.zzz
This is the client that performed the backup of the database. On the Legato server you have to modify the definition of the source server's client: in the Remote Administration tab, add the target server as a remote administrator of the source server.
After that:
1. Check that the rollback segments in init<TARGET_SID>.ora are the same as in init<SOURCE_SID>.ora (they probably are, since you have already done other restores).
2. Copy the latest .aff file and the .log file from the sapbackup directory on <SOURCE_server> to <TARGET_server>.
3. On <TARGET_server>, edit the .log file so that it keeps only the row referring to the .aff file you copied for the restore.
4. Optionally, edit the .aff file if you want to distribute the sapdata files across different drive letters on the target server.
5. On <TARGET_server>, run: brrestore -m all -d util_file -r %ORACLE_HOME%\database\restore.utl -c
It seems difficult; the first time we launched the restore, we spent a day just understanding the procedure.
Now we restore a 200 GB database in 2 hours.
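For reference, the restore.utl might look like the sketch below. Only the client parameter comes from this procedure; the server name is a placeholder, and every other parameter must be copied from the backup.utl actually used on the source server.

```
# %ORACLE_HOME%\database\restore.utl (sketch)
# start from a copy of the source server's backup.utl, then add:
client = prodserver.xxx.yyy.zzz
```

brrestore then picks this file up via the -d util_file and -r options shown in step 5.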
Kind regards
Mauro

Similar Messages

  • Modified version is different from active version for DataSource in test system

    The modified version is different from the active version in the test system for DataSource 0FI_AA_12 (an unequal sign appears next to the DataSource).
    Once I try to load data to the PSA, the system generates the following error:
    DataSource is not replicated
    I have replicated the DataSource many times and re-transported it, but nothing helps.

    Hi,
    After you replicate the DataSource, it becomes inactive, so you need to activate it again.
    Always replicate first and then activate.
    Hope that clarifies it.

  • After Refresh of Test System XI connection is not working

    We are using SRM 4.0 in ECS and recently did a refresh of our quality system. After the refresh we went through the normal post-refresh activities and assumed that everything was OK. We started creating test shopping carts and POs for XML vendors, and nothing is going out through XI. I have run transaction SXMB_MONI to check whether any error messages are occurring, but there are no messages in the system.
    We have restarted the server but can't seem to get anything to go out through XML.
    Can anyone provide any information or some things to check?

    Hi
    First check the setting in 'Define backend systems' to see whether the SLD system name is correctly entered for your EBP system. Then, using transaction SLDCHECK, you can check connectivity to XI.
    You can also check transaction SXMB_ADM to see whether the integration server connection is correctly defined.
    Regards,
    Sanjeev

  • Steps to refresh the test database from a cold backup

    Hello everybody !
    Can anybody give me the steps to refresh the test DB (T1, already there) from a cold backup of the production database (P1, on another machine)? All the datafiles are available in the production backup; the controlfiles and redo log files of the production database are not. Both databases are 9.2.0.8 and run on different machines with the same OS (AIX).
    Thanks in advance !

    Steps
    1.) Before shutting down P1 for the cold backup, execute:
    alter database backup controlfile to trace;
    2.) Copy the created trace file from udump to the test server.
    3.) Shut down P1 and take a cold backup containing all datafiles.
    4.) Copy the datafiles to the test server.
    5.) Edit the trace file copied in step 2.
    6.) As you are changing the name of the DB, use the option SET (in place of REUSE).
    7.) Also remove the unwanted portions.
    8.) Rename the trace file to <something>.sql.
    9.) Start the test DB in NOMOUNT stage.
    10.) Run the SQL script created above.
    11.) This will create the controlfiles and place the DB in MOUNT stage.
    12.) Issue: alter database open resetlogs;
    13.) Add tempfiles to the temp tablespace.
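    Steps 5-12 amount to running an edited version of the trace script, roughly like this (a sketch only; the database name T1 comes from the question, but the datafile paths and log sizes are hypothetical and must come from your own trace file):

```
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "T1" RESETLOGS NOARCHIVELOG
  LOGFILE
    GROUP 1 '/oracle/T1/origlogA/log_g1m1.dbf' SIZE 50M,
    GROUP 2 '/oracle/T1/origlogB/log_g2m1.dbf' SIZE 50M
  DATAFILE
    '/oracle/T1/sapdata1/system_1/system.data1'
    -- ... all remaining datafiles from the trace file ...
;
ALTER DATABASE OPEN RESETLOGS;
```

    The SET keyword tells Oracle the database name is being changed, which is why opening with RESETLOGS is mandatory afterwards.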
    regards
    Pravin

  • Table structure changed in testing system after system refresh.

    Hi Team,
    Recently we underwent a system refresh in the testing system, where the testing data was replaced with production data. But now we find that in one table, some fields which we had deleted are there again. The version history of the table is also gone. The table was under testing in the testing system, after which it was supposed to be transported to production. I believe a system refresh only means a refresh of data and has nothing to do with table structure. Please correct me if I am wrong, and let me know what could have caused those changes to the table if it was not the system refresh.
    Regards,
    Amit

    "I believe a system refresh only means a refresh of data and has nothing to do with table structure."
    Alas, you are wrong: after all, table definitions are data too, as are program sources...
    You have to re-import into your test system every transport request that has not yet been transported to production.
    Regards,
    Raymond

  • Oracle DB copy from PRD system 10g to TEST system 11g

    Hi All,
    I have recently upgraded Oracle in the test system from 10.2.0.4 to 11.2.0.2. My production system still runs the old Oracle version, 10.2.0.4.
    Now I need to refresh the test system.
    Is it possible at all to copy the database from the production system (with the old Oracle version, 10g) into the test system (with the new version, 11g)?
    Thanks in advance for your reply.
    Regards
    Latif

    When you upgraded from 10.2.0.X to 11.2.0.2 on the test system, you would definitely have taken a backup of the 10.2.0.X Oracle home. I strongly feel you should:
    1. Restore the old Oracle home folder on the QA system.
    2. Change the environment variables of the <sid>adm and ora<sid> users to point to the Oracle 10.2.0.X home directory.
    3. Change the listener file entries.
    4. Restore the production backup taken under 10.2.0.X.
    5. Recover the system as if you had not done any upgrade on QA, and release it.
    All other approaches may consume extensive effort. You are the best judge of how you want to build the test system - you can make it easy or hard for yourself. The business wants data; the rest of the decisions are yours, I think.
    Rgds

  • IMPORTING a transport to TEST system??

    Hi Experts,
    I successfully imported a transport, say 123456, into the TEST system.
    For some reason, I want to import the same transport 123456 into the same TEST system again.
    So:
    1 - Is it possible? I mean, does the system allow it?
    2 - If so, would the program be overwritten? I know there are NO changes!
    3 - Is there any danger in doing so?
    I know I can create a new TR in DEV and import it, but I don't want to!
    Thank you

    No - you have to create a new transport (or refresh the test system from a system that does not yet have the transport).
    Rob

  • For one Urgent Change, during approval (changing the status to 'To Be Tested'), the system does not recognize any changes using the CTS WBS BOM in the development system

    For one Urgent Change, while performing one of the approvals before changing the status to 'To Be Tested', we get the error below.
    The system does not recognize any changes using the CTS WBS BOM in the development system. The transaction is therefore incorrect, or the status was reset by the system.
    Could anyone please help us understand how this can be resolved?
    We also have this below error.
    System Response
    If the PPF action is a condition check, the condition is initially considered as not met, and leads to another warning, an error message, or status reset, depending on the configuration.
    If the PPF action is the execution of a task in the task list, and the exception is critical, there is another error message in the document.
    Procedure
    The condition cannot be met until the cause is removed. Analyze all messages in the transaction application log.
    Procedure for System Administration
    Analyze any other messages in the task list application log, and the entries for the object /TMWFLOW/CMSCV
    Additional Information:
    System cancelled RFC destination SM_UK4CLNT005_TRUSTED, call TR_READ_COMM:
    No authorization to log on as a trusted system (Trusted RC=0).
    /TMWFLOW/TU_GET_REQUEST_REMOTE:E:/TMWFLOW/TRACK_N:107
    For the above error, table /TMWFLOW/REP_DATA_FLOW was refreshed as well, but the error persists.

    If you are in Test System, you can use function module AA_AFABER_DELETE to totally delete the depreciation area (tcode SE37, specify chart of depreciation and depreciation area), After that recreate your depreciation area and run AFBN. But before you do that, have you created a retirement transaction type that limits the posting on your new depreciation area? If not create one.
    Hope this helps.
    Thanks!
    Jhero

  • TDMS for refresh DEV system

    Hi,
    Does anybody have experience with refreshing a DEV system using TDMS?
    Our DEV system consists of two clients: one used by programmers with no data in it, and a second client with some transactional data for performing first tests before something is transported to QAS.
    The repository of a DEV system will never be the same as on a production system (people are programming simultaneously), so the scenario I had in mind was to create the DEV system as a shell from the production system, create the two clients, and use a TIM transfer for the second client.
    But then all the program-specific data stored in DEV gets lost (e.g. program versioning).
    Does anyone know how to keep this data?
    Kind regards,
    Nicolas

    Hello
    There are some main concerns before a TDMS TDTIM transfer can be done:
    1) Any table whose data is to be transferred from the sender system to the receiver system should exist in both systems.
    2) The key fields should be exactly identical.
    3) There may be some extra fields in the receiver; this will not pose any issue. But if there is an extra field in the sender, its data cannot be transferred. To solve this, either mark the table as not to be used, or change the table in the receiver to add the missing field.
    As part of TDTIM, both systems are analyzed for a consistent repository, and any discrepancy is reported to the user, who then needs to decide whether to mark the inconsistent table for no transfer or to correct the inconsistency before proceeding.
    Best regards,
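    The repository check described above boils down to comparing the two systems' field lists for each table. A hedged sketch of that comparison with hypothetical field lists (the file names and the ZZNEW field are made up; `comm` needs lexically sorted input):

```shell
# hypothetical field lists, one field per line, lexically sorted
printf 'BELNR\nBUZEI\nMANDT\n' > sender_fields.txt
printf 'BELNR\nBUZEI\nMANDT\nZZNEW\n' > receiver_fields.txt

# fields only in the receiver are tolerated
comm -13 sender_fields.txt receiver_fields.txt    # prints: ZZNEW

# fields only in the sender would block the transfer -- this must print nothing
comm -23 sender_fields.txt receiver_fields.txt
```

    If the second command prints anything, the table has to be marked for no transfer or the receiver structure corrected first.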

  • Test system refreshed

    Hello,
    Our test system was refreshed for some reason. I'm not aware of the date to which it was refreshed.
    Now there are about 30-40 objects (programs, function modules, and tables as well) which
    don't have the latest developments.
    Is there any faster way to find the version mismatch between the test system and the development system?
    Thankful for any kind of help.
    Lucky.

    Hi Lucky,
    If no backup was taken,
    the only thing you can do is go to the development system and, for each object, go to Utilities --> Version Management and check which version is active.
    You then need to transport that active version to the testing system.
    Alternatively, let the missing objects be discovered in the testing system; then you will get a list of objects which need to be transported from development to testing.
    Regards
    Satish Boguda

  • Need detailed information (steps would be nicer) to upgrade from Exchange 2003 to Exchange 2010 - set up in a test system first, then try on production, since there is not much room for downtime

    I need detailed information, ideally step by step, on upgrading from Exchange 2003 to Exchange 2010. I want to set it up in a test system first and then try it on production, since there is not much room for downtime. Thanks, bekir

    Hi,
    Overview of the upgrade process from Exchange 2003 to Exchange 2010, including the following steps:
    Installing Exchange 2010 within your organization on new hardware.
    Configuring Exchange 2010 Client Access.
    Creating a set of legacy host names and associating those host names with your Exchange 2003 infrastructure.
    Obtaining a digital certificate with the names you'll be using during the coexistence period and installing it on your Exchange 2010 Client Access server.
    Associating the host name you currently use for your Exchange 2003 infrastructure with your newly installed Exchange 2010 infrastructure.
    Moving mailboxes from Exchange 2003 to Exchange 2010.
    Decommissioning your Exchange 2003 infrastructure.
    For more details, please refer to the following document:
    http://technet.microsoft.com/en-us/library/ff805040(v=exchg.141).aspx
    Best Regards.

  • Help for job BDLS - logical system conversion after system refresh

    Hi SAP Basis gurus,
    We have restored production data to the test system. After that, we are doing the logical system conversion - renaming of logical system names. For that we are running the BDLS job. The program used is IBDLS2LS.
    Now one job has been running for more than 200,000 seconds. The job is doing a sequential read on table ZARIXCO40. The following has been written to the job log:
    01.02.2008 10:20:06 Processing table ZARIXCO40
    02.02.2008 01:59:56 Processing table ZARIXCO40
    02.02.2008 22:26:44 Processing table ZARIXCO40
    02.02.2008 23:04:05 Processing table ZARIXCO40
    So per the above log, it looks like the job is stuck in a loop on table ZARIXCO40.
    The Oracle session shows the following SQL statement (select list truncated in the trace):
    SELECT
      /*+ FIRST_ROWS */
      ...
    FROM
      "ZARIXCO40"
    WHERE
      "MANDT" = :A0 AND "LOGSYSTEM" = :A1 AND ROWNUM <= :A2
    The execution plan (explain from v$sql_plan; Address: 00000003A2FD5728, Hash value: 2772869435, Child number: 0):
    SELECT STATEMENT ( Estimated Costs = 4.094.270 , Estimated #Rows = 0 )
            3 COUNT STOPKEY
                2 TABLE ACCESS BY INDEX ROWID ZARIXCO40
                  ( Estim. Costs = 4.094.270 , Estim. #Rows = 32.799.695 )
                    1 INDEX RANGE SCAN ZARIXCO40~0
                      ( Estim. Costs = 418.502 , Estim. #Rows = 65.599.390 )
                      Search Columns: 1
    The table was analyzed today. Details:
    Table   ZARIXCO40
    Last statistics date                  02.02.2008
    Analyze Method               Sample 16.623.719 Rows
    Number of rows                        65.599.390
    Number of blocks allocated             2.875.670
    Number of empty blocks                     7.912
    Average space                                934
    Chain count                                    0
    Average row length                           313
    Partitioned                                   NO
    UNIQUE     Index   ZARIXCO40~0
    Column Name                     #Distinct
    MANDT                                          1
    KOKRS                                         14
    BELNR                                  2.375.251
    BUZEI                                      1.000
    The index used for this table was also analyzed today:
    UNIQUE     Index   ZARIXCO40~0
    Column Name                     #Distinct
    MANDT                                          1
    KOKRS                                         14
    BELNR                                  2.375.251
    BUZEI                                      1.000
    Last statistics date                  02.02.2008
    Analyze Method               Sample 24.510.815 Rows
    Levels of B-Tree                               3
    Number of leaf blocks                    418.499
    Number of distinct keys               65.480.722
    Average leaf blocks per key                    1
    Average data blocks per key                    1
    Clustering factor                     40.524.190
    Can you please let me know what the issue could be and how to resolve it so that the job completes? Normally this job runs for about 100,000 seconds. I cannot afford to cancel this job and run it again.
    Any help is Highly appreciated,
    Thanks
    Best Regards,
    Basis CK

    Hi Markus,
    The statistics of the table were already updated today. Even after that, it is not going faster. It does a sequential read for more than 5 hours, and then one update is issued. And in the job log, all other tables are processed only once; only this table shows up as processed 4 times, which is very abnormal.
    01.02.2008 07:43:59 Processing table ZARIXCO11
    01.02.2008 07:45:36 Processing table ZARIXCO16
    01.02.2008 10:17:20 Processing table ZARIXCO21
    01.02.2008 10:20:06 Processing table ZARIXCO26
    01.02.2008 10:20:06 Processing table ZARIXCO29
    01.02.2008 10:20:06 Processing table ZARIXCO33
    01.02.2008 10:20:06 Processing table ZARIXCO37
    01.02.2008 10:20:06 Processing table ZARIXCO40
    02.02.2008 01:59:56 Processing table ZARIXCO40
    02.02.2008 22:26:44 Processing table ZARIXCO40
    02.02.2008 23:04:05 Processing table ZARIXCO40
    I guess this will keep going like this, and I am not sure when it will complete. Is there anything else you can suggest to help resolve the issue?
    Thanks
    Best Regards,
    Basis CK
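    One thing worth noting from the execution plan shown earlier: the range scan enters the unique index via MANDT alone (Search Columns: 1), and MANDT has a single distinct value, so every BDLS pass effectively walks all 65 million index entries. A commonly used workaround for very large tables in this situation (not suggested in this thread; the table and column names come from the plan above, the index name is hypothetical) is a temporary index matching the BDLS predicate:

```
-- temporary helper index for the BDLS selection; drop it after the conversion
CREATE INDEX "ZARIXCO40~Z1" ON ZARIXCO40 (MANDT, LOGSYSTEM) NOLOGGING;
```

    With both predicate columns indexed, the range scan touches only the entries that still carry the old logical system name.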

  • Restrict table maintenance for a Z Table in SM30 in Test systems

    Hello,
    I have two Z tables, ZTAB1 and ZTAB2, which have table maintenance generated. Both tables should be non-modifiable in the test systems but should allow entries in the DEV system.
    This means any entry that needs to be present in the PROD or TEST systems for these tables should be moved through transports from DEV only.
    The Tables are defined as follows:
    The delivery class of both tables is 'C'. In the table maintenance generator, the authorization group for ZTAB1 is ZT1 and for ZTAB2 is ZT2. The authorization object is S_TABU_DIS for both tables.
    ZTAB1 has 2 screens for maintenance and ZTAB2 has 1 screen for maintenance.
    The recording routine is "Standard recording routine" for both tables.
    The Problem is as follows:
    In the TEST system, in SM30 for table ZTAB2, when I click Maintain I get the error message "Client XXX status is 'not modifiable'", which is correct. That means we cannot modify the entries for this table in the TEST system.
    However, for table ZTAB1 I am able to change the table entries through SM30.
    I debugged SM30 for both tables and found that the SM30 program checks for an entry in table OBJH with the table name as OBJECTNAME; if OBJCATEGORY in OBJH equals 'CUST', it throws the error message "Client XXX status is 'not modifiable'".
    For the problematic table ZTAB1, the OBJH entry has 'APPL', which is why we don't get the error message and the table entries remain editable in SM30.
    (Transaction SOBJ can be used to change OBJH entries. I haven't used it before, so I am not sure whether I can use it to correct the problem.)
    Can anybody tell me how OBJH gets populated when we create a Z table?
    Could anyone suggest how to prevent entries from being made in table ZTAB1 via SM30 in the test systems?
    Thanks,

    In table OBJH, APPL stands for delivery class A (application table)
    and CUST stands for delivery class C (customizing table).
    It seems the delivery class of ZTAB1 was changed from A (application) to C (customizing) at some point. Try deleting the table maintenance and regenerating it.
    Regards,
    Naimesh Patel
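    The OBJH check described in the question can be verified without debugging. A sketch using plain SQL (the schema owner SAPSR3 is an assumption; SE16 on table OBJH shows the same information):

```
-- expected for a customizing table: OBJCATEGORY = 'CUST'
SELECT OBJECTNAME, OBJCATEGORY
  FROM SAPSR3.OBJH
 WHERE OBJECTNAME IN ('ZTAB1', 'ZTAB2');
```

    If ZTAB1 still shows 'APPL' after regenerating the maintenance dialog, the classification was not refreshed.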

  • FAQ: DPA - Remote Access for test systems

    Welcome to DPA - Remote Access for test systems forum!
    This forum is actively monitored and moderated by SAP Integration and Certification Center (ICC).
    Frequently Asked Questions:
    ======================================
    Q1: Where can I learn more about Developer Package Service (DPA)?
    Q2: How can I apply for the regular DPA service?
    Q3. How many SAP systems can I access under the regular DPA service?
    Q4. How can I have access to a non-shared SAP system?
    ======================================
    Q1: Where can I learn more about Developer Package Service (DPA)?
    A: Please take a look at ICC's web page on SDN:
    https://www.sdn.sap.com/irj/sdn?rid=/webcontent/uuid/15330f73-0501-0010-d59e-8a32e220b2ed [original link is broken]
    Q2: How can I apply for the regular DPA service?
    A: Please go to http://www.sap.com/partners/apply and fill out the application form.
    Q3. How many SAP systems can I access under the regular DPA service?
    A: Up to 3 SAP systems, which are shared among all DPA users.
    Q4. How can I have access to a non-shared SAP system?
    A: Please go to http://www.sap.com/partners/apply and apply for the exclusive-use DPA service.

    Something you should be aware of is the frequency of IP address changes at your father's location. Providers of residential broadband services lease an IP address for a certain duration, which you have no control over and which is purely arbitrary. You may be familiar with these changes.
    The point is that sometimes these addresses change regularly (every 4 hours to every few days), and sometimes they stay the same for a longer period, such as a year or more.
    Because of this, you may find you can remote-assist your father one day but not the next. The situation is easily rectified with a simple phone call to your father. He can tell you what IP address he's using by launching his browser and clicking this link:
    http://myipaddress.com
    He gives you his new IP address, and you should be able to make a successful connection again.
    Be aware that IP addresses handed out by ISPs are routable. IP addresses handed out by firewalls/routers/gateway devices such as Apple's AirPort Express Base Station are not routable: assuming you have not changed anything in the devices, they will always be in one of these three private ranges: 192.168.x.x, 10.x.x.x, and 172.16.x.x through 172.31.x.x. You don't use any of these private addresses to make the connection over the public external (internet) network, but you do use them when on the same private internal network.
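    RFC 1918 defines the private ranges as 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A small sketch that classifies an address accordingly (POSIX shell; the function name is ours):

```shell
# classify an IPv4 address as RFC 1918 private or routable
is_private() {
  case "$1" in
    10.*|192.168.*)                          echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  echo private ;;
    *)                                       echo routable ;;
  esac
}

is_private 192.168.1.10   # private
is_private 8.8.8.8        # routable
```

    Addresses classified as private here are the ones you can only reach from inside the same internal network.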

  • Current Voltage Luminance Testing System for OLEDs

    This is going to be a message board for an OLED/LED current-voltage-luminance (IVL) testing system that I am in the process of developing. The system will be capable of electrically characterizing 36 pixels (9 devices, 4 pixels each) by performing a reverse-bias to forward-bias voltage sweep. The system will also measure the light output of a pixel by monitoring the current (A) output of a photodiode. To calibrate the light output, the pixels are measured at a fixed current density specified by the user before the IVL test, using a luminance meter. The system consists of two Keithley 2400 source meters and a Keithley 7001 switch system. One Keithley is used to source voltage (V) and measure current (A), and the other is used to monitor the photodiode current. The switch is used to switch between the 36 pixels.
    Eventually I will post the entire project, but for now I would like to get some comments on performing a single IVL sweep on a single pixel. I was unable to get the triggering feature of the Keithleys to work in LabVIEW, so I had to use sequence structures. I just took the LabVIEW Core 3 and DAQ classes; if possible I would like to implement some of the parallel processing which I learned. I will attach the VIs in the next post.

    Here are the important vis
    Attachments:
    singleJV.vi ‏54 KB
    InitKeithLV.vi ‏18 KB
    InitKeithJV.vi ‏17 KB
