Incremental Data Backup

Dear Sir/Madam,
I have one server running Windows NT with Oracle 8, holding a database with many users. I take a backup of one user, X, which is very important to me, by exporting it and importing it into another server running Windows 2000 with Oracle 8i.
Every time, I export the data from the Windows NT database, drop user X on the Windows 2000 server, and import the latest dump file.
The problem is the time this consumes, since it imports the data of all the tables.
What I need is for the system to automatically export only the updated tables and
import them on the second server with the help of some incremental command, but I don't want to take a full DBA backup.
Please guide us on how to do this.
Regards,
Mathpati.

Post your question on the Database - General forum.
There you'll get an answer like:
1- Create and save a script in the file system that runs a command-line export of the user you want. You may use the very same script you're already using;
2- Schedule this script in the OS scheduler (cron on Unix-based systems, or whatever scheduler you want);
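For example, a minimal sketch (SID, password, and paths are placeholders; on the Windows NT server you would put the same exp call in a .bat file and schedule it with the AT command instead of cron):

    #!/bin/sh
    # export_x_user.sh -- nightly export of user X
    ORACLE_SID=ORCL; export ORACLE_SID
    DMP=/backup/x_user_`date +%Y%m%d`.dmp
    exp system/manager owner=X file=$DMP log=$DMP.log

    # crontab entry -- run every night at 01:00:
    # 0 1 * * * /home/oracle/export_x_user.sh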
Regards,
Marcos

Similar Messages

  • RELIABLE data backup software recommendation for MacBook Pro WANTED!

    I own Retrospect 6.x and I must say that on the Intel platform it is less than stellar. On my 15" PowerBook I would get about 90 MB/minute, and on my new 17" MacBook Pro I am getting about 30 MB/minute, which is taking FOREVER. Can someone PLEASE recommend reliable data backup software, in universal binary, that will achieve backup speeds similar to what I used to get out of Retrospect on my PowerBook? Thank you kindly!
    Scott

    Welcome to the forum,
    If you don't have a .Mac account, you may want to consider one. It has a respectable Backup feature that is well integrated with your Mac, and I strongly recommend it. I can't compare its speed to Retrospect's, but it does perform incremental backups along with a host of other options. Hope you find this useful.
    References:
    http://www.mac.com/1/solutions/backup.html
    http://www.mac.com/WebObjects/Welcome.woa?aff=consumer&cty=US&lang=en
    Regards,
    2.16 MBP (FW 1.0.1) Week-12 build   Mac OS X (10.4.6)   G4 Tower (OS 9/10), Dell 620 WorkStation (XP Pro), Gateway P4 (XP Home)

  • Does SAP support incremental/delta backups? If so, which note describes it?

    I'm just about to test incremental/delta backups on UDB 9.5 FP1. Now to my questions:
    Does SAP support incremental/delta backups?
    Is there a note? I couldn't get a hit on OSS.
    Are there any gotchas, such as needing to move read-only tables to a new tablespace?
    Any info on this is greatly appreciated.
    Anke

    Hi Anke,
    You can specify whether you want to perform a full, incremental, or incremental delta backup, and the frequency, via DB13 or the planning calendar in DBACOCKPIT.
    SAP Note 1269697 (DB6: Backup Performance)
    should be of use to you. Please also see our administration guide, available at our main page
    https://www.sdn.sap.com/irj/sdn/db6 under Administration. Specifically, check the guide
    "Data Recovery and High Availability Guide and Reference" mentioned in Chapter 8.3.5, Advanced Backup Techniques.
    There is no specific SAP note relating to incremental/delta backups, but they can be scheduled via DB13, and so they are supported.
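    For what it's worth, a rough sketch of the DB2 commands that DB13 drives under the hood (PRD is a placeholder SID, and TSM is assumed as the media manager; TRACKMOD must be on, and a new full backup is required after enabling it, before the first incremental):
        db2 update db cfg for PRD using TRACKMOD ON
        db2 backup db PRD online use tsm
        db2 backup db PRD online incremental use tsm
        db2 backup db PRD online incremental delta use tsm
    (INCREMENTAL is cumulative since the last full backup; INCREMENTAL DELTA captures only changes since the most recent backup of any kind.)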
    Regards,
    Paul

  • Incrementally Updated Backups -- do you Backup to TAPE ?

    Incrementally Updated Backups on disk provide
    a. only a single image of the database
    b. as of only a particular point in time (the last incremental update)
    c. no protection against disk/storage failure as the backups are on the same server as the database
    Your Incrementally Updated Backups should go to Tape or to another Disk Area on another server or Virtual Tape Library frequently.
    You may find the need to "restore and recover the database as of 3 days ago OR as of the last critical processing date" which cannot be done if you incrementally update your backup every day.
    A backup on disk doesn't protect you from disk failure.
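    As a minimal sketch of that sweep with RMAN (assuming an SBT channel is configured for your media manager):
        RMAN> RUN {
                ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
                BACKUP RECOVERY AREA;
              }
    BACKUP RECOVERY AREA copies the flash recovery area contents (image copies, backup sets, archived logs) to the tape channel.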
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

    Look in the FAQ for SuperDuper at: <http://www.shirt-pocket.com/forums/showthread.php?t=3942&highlight=time+capsule>
    where you'll find a discussion of what you can do.

  • Data Backup 2.1 and Mac Memory Management?

    I'm trialing a backup program called Data Backup 2.1. It keeps versions of my files, which I need, as I've often had corruption and not noticed it until a few days after the fact. I've been using Retrospect, but I read a review that praised Data Backup. What I've noticed is that although it is very fast, like SuperDuper, it affects my free memory dramatically. When it finishes, instead of having, say, 250 MB of active memory in use, I'll have 700 MB. Inactive memory will be low, whereas normally it's high. Free memory during its backup can drop to 20 MB (I have 1.5 GB). Once you start to use the computer, free memory recovers to around 500-700 MB. The one thing of concern I have noticed is that while it's running I get pageouts, which I otherwise never get; my reading on Mac memory management is that you want to avoid pageouts, and if you get them you need more memory (for what I'm doing, 1.5 GB should be plenty). I've asked the Data Backup people what's going on; they don't think it's something to be concerned about, but they said it is probably something to do with the way they do caching.
    I'm just wondering: do you think this is something to be concerned about? I'd like to switch from Retrospect because, although I know it, I'm not sure how committed they are to the Mac market any longer, and it is far slower, though it does manage memory well. However, I don't want to buy Data Backup if it is handling RAM inappropriately.
    Kerry

    Synchronize! Pro X will maintain versioned archives and perform full, incremental, and bootable backups, both to local and to network devices. I have found that SPX is just about as full-featured as Retrospect, with certain limitations: it cannot back up across multiple media (CDs, DVDs, tape), has no extensive browser windows like Retrospect, and offers no backups without scanning (as SuperDuper does for its "fast" updating backup).
    SPX supports schedules, multiple-item backups (you can select individual files and/or folders), extensive backup/synchronize customizations, can run as "root", can auto-mount devices (including network drives), and it's a universal binary.
    It's also nearly as expensive as Retrospect, but in my opinion it's worth it.
    If you want a less costly backup solution without all the features of SPX, but with all the features of SuperDuper (and in my opinion better than SD), then try Deja Vu. Also a universal binary, it supports incremental archives; full, incremental, and bootable backups to local or network drives; and scheduling, and it runs as a preference pane.
    Finally, for the truly cheap there are PsyncX and RsyncX, both freeware. They are GUI wrappers around the basic backup and synchronizing tools that are part of Unix (ditto, rsync, and psync).
    Download the mentioned software from www.versiontracker.com or www.macupdate.com.

  • Data Backup 3 - Dead

    Just an FYI for those using Data Backup 3: currently it does not work in Leopard (it fails to start for me). The Prosoft FAQ indicates a new version will be available soon.

    Well, it's been over a week now since Leopard's release, and I've decided to abandon Prosoft's Data Backup in favor of Leopard's Time Machine forever.
    While most companies got their Leopard updates out within 1-2 days, Prosoft, on which my entire backup system was based, has still not released theirs. Meanwhile, their product is completely dead in Leopard. It's too bad; if not for the delay, I probably would never have stopped using their product.
    BTW, for anyone who's wondering, the primary reason I'm sticking with Time Machine? I can do an incremental backup after days of not doing a backup in 1-2 minutes. Data Backup is stuck scanning the entire drive and can take up to an hour.

  • Want to perform an incremental data replication

    I have five vdisks in the DC SAN (EVA 8400) that contain a large amount of data. This data has to be replicated to a newly deployed DR SAN. That is why we took a tape backup from the DC SAN and want to restore it on the DR side before starting replication. After that, we want to start a replication that transfers only the incremental data changed since the tape restore. But in HP CA (Continuous Access) I have not found any option to start incremental data replication; it starts from the zero block with a newly auto-created vdisk. So please advise me on the possibility of incremental data replication.

    Actually, I have got it to work...
    I used the methods ACTIVATE_TAB_PAGE & TRANSFER_DATA_TO_SUBSCREEN in the BADI LE_SHP_TAB_CUST_ITEM.
    It's working fine now.
    Thanks for the response.

  • Incrementally updated backup and EMC NMDA

    Hello Everyone,
    I'm kind of a newbie at setting up the NetWorker module for Oracle to back up a database to tape. We use Oracle's suggested backup strategy: back up the DB to disk first using incrementally updated backups with recovery set to 3 days (RECOVER COPY OF DATABASE WITH TAG ... UNTIL TIME 'SYSDATE-3'), which lets us recover the DB to any point in time from the backup files on disk instead of going to tape. After the backup to disk, we back up the recovery area to tape nightly. However, we do want to maintain a backup retention policy of 1 month. A couple of questions:
    1. If I set the RP to a recovery window of 31 days in RMAN, then backups never become obsolete at all. This forces me to set an RP in NetWorker, and they don't recommend setting an RP in both RMAN and NetWorker. How is this generally done so that backups are obsoleted from tape as well as from RMAN (catalog in the controlfile) with the above strategy? Perhaps in this case I should set the RP in NetWorker, set the RP in RMAN to NONE, and rely on CROSSCHECK and DELETE EXPIRED commands to keep the RMAN catalog in sync.
    2. I'm wondering whether the nightly backup of the recovery area to tape will take an incremental from the previous day and NOT a full backup. The reason I ask is that I do not want tape to take a full backup of the FRA every day, because the full backup datafiles change only once every 3 days, based on the UNTIL TIME setting. Is there an option to set in NetWorker to do incremental only, or is that the default?
    Thanks in Advance!
    11gR2, 4 Node RAC, Linux, NMDA 1.1, Networker 7.6 sp1
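    For reference, a sketch of the catalog maintenance floated in question 1 (retention enforced by NetWorker, RMAN's own policy disabled; an illustration only, not a recommendation from this thread):
        RMAN> CONFIGURE RETENTION POLICY TO NONE;
        RMAN> CROSSCHECK BACKUP;
        RMAN> DELETE NOPROMPT EXPIRED BACKUP;
    CROSSCHECK marks as EXPIRED any backup pieces the media manager no longer holds, and DELETE EXPIRED then removes them from the RMAN catalog.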

    Loic wrote:
    "You do a full backup of the FRA on tape?"
    No, I do a full backup of the database on tape, using RMAN together with Veritas NetBackup.
    "I mean if you use incrementally updated backups it'll not work on tape... because the level 0 backup that gets updated with the next day's backup will be on tape and will not be updated."
    The incrementally updated backup is in the FRA only, on disk (both the image copy and the subsequent backup sets that are used for recovery of the image copy). It never gets written to tape or updated on tape.
    "Why don't you use a normal incremental backup then? That would have no problem with the tape backup, even if the level 0 or level 1/2 becomes reclaimable?"
    I think I do :-)
    "Maybe: to keep that, you can set the redundancy to 2 instead of 1 copy. That way, even with one copy on disk and one on tape, it will keep the 2 copies.
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;"
    I'll think about that.

  • INCREMENTAL MERGE BACKUP & RECOVERY

    INCREMENTAL MERGE BACKUP & RECOVERY
    =====================================
    1) Overview
    This is a backup method that uses RMAN to back up an image copy of the database and then applies the incremental backups allowed by the retention policy to that copy, shortening recovery time.
    In other words, it guarantees an image copy as of the last point in time covered by the retention policy. Because it keeps an image copy, it uses more disk space than a strategy that uses incremental backups alone.
    Depending on your backup & recovery policy, you can of course set the retention policy to a recovery window or to redundancy 2 or higher, but to maximize the benefit of Incremental Merge Backup & Recovery, setting redundancy 1 (the default) is recommended.
    2) Advantages
    - Because the most recent incremental backups have already been applied, recovery completes in minimal time. Again, to maximize this advantage, set the retention policy to redundancy 1.
    - You can also switch the datafile destinations to the backed-up image copies and skip the restore step entirely, but for ease of management after recovery this is not recommended except when unavoidable.
    3) Syntax
    - Backup
    RMAN> BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY
    WITH TAG WEEKLY DATABASE;
    If no copy created with the tag WEEKLY exists yet, this command takes the backup as an image copy identified by the tag WEEKLY.
    * "Copy" here means the image copy created by the FOR RECOVER OF COPY clause.
    RMAN> RECOVER COPY OF DATABASE WITH TAG WEEKLY;
    This command applies the incremental backups taken so far to the database copy carrying the tag WEEKLY.
    RMAN> DELETE OBSOLETE;
    This command deletes the incremental backups that have already been applied to the current image copy, keeping those not yet applied.
    - Recovery
    RMAN> SWITCH DATABASE TO COPY;
    When a failure occurs, this command repoints all datafiles to the image backup copies, saving the restore time.
    4) Syntax changes by retention policy
    * As a rule, to maximize the recovery-time advantage of INCREMENTAL MERGE BACKUP & RECOVERY, setting the retention policy to redundancy 1 (the default) is recommended.
    However, if you change your backup & recovery plan and set the retention policy to a recovery window or to a redundancy above 1, you must adjust both the step that deletes obsolete backup sets and the step that applies them.
    - Recovery window of 2
    If you set a recovery window of 2 days, the backup syntax stays the same, but the RECOVER COPY OF DATABASE command must change as follows:
    RMAN> BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY
    WITH TAG WEEKLY DATABASE;
    RMAN> RECOVER COPY OF DATABASE WITH TAG WEEKLY UNTIL TIME 'SYSDATE-2';
    This is because, to match the retention policy, the image copy must be kept at its state of two days ago, and the backup sets covering those two days must be kept as well. If you omit UNTIL TIME, the incremental backups never become obsolete.
    - Redundancy 2 or higher
    Setting redundancy 2 or higher has the effect of retaining two or more tags, so, as you would expect, backups again never become obsolete.
    DAY 1)
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG DAILY DATABASE;
    - This creates one database copy (tag DAILY).
    DAY 2)
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG DAILY DATABASE;
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG WEEKLY DATABASE;
    - This creates one database copy (tag WEEKLY) and one backup set (tag DAILY).
    DAY 3)
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG DAILY DATABASE;
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG WEEKLY DATABASE;
    RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG MONTHLY DATABASE;
    - This creates one database copy (tag MONTHLY) and two backup sets (tags DAILY and WEEKLY).
    DAY 4)
    RMAN> DELETE OBSOLETE;
    - This deletes everything related to the DAILY tag (only the backups for the two most recent tags are kept).
    RMAN> DELETE OBSOLETE REDUNDANCY 1;
    - This deletes everything related to the DAILY and WEEKLY tags (the same effect as setting redundancy 1).
    Reference:
    Article-ID: Note 351455.1
    Title: Oracle Suggested Strategy & Backup Retention
    Article-ID: Note 303861.1
    Title: Incrementally Updated Backup In 10G
    Edited by: hunlee

    I would always include an archivelog backup:
    ... WITH TAG 'fullbackup' DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
    And at the end:
    DELETE NOPROMPT OBSOLETE;
    That deletes the incremental backups already applied to the base image copies.
    Your configuration allows only limited point-in-time recovery: as soon as an incremental backup is applied to the image copies, you cannot go back in time. To change this you can add an UNTIL TIME clause, for example:
    RECOVER
    COPY OF DATABASE
    WITH TAG 'fullbackup' UNTIL TIME 'SYSDATE-11';
    This creates a recovery window of 10 days; only image copies older than 10 days will be changed.
    Werner
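    For reference, Werner's pieces combined into one nightly script (the tag and the 11-day window are the values from his example):
        RMAN> BACKUP INCREMENTAL LEVEL 1
              FOR RECOVER OF COPY
              WITH TAG 'fullbackup' DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
        RMAN> RECOVER COPY OF DATABASE
              WITH TAG 'fullbackup' UNTIL TIME 'SYSDATE-11';
        RMAN> DELETE NOPROMPT OBSOLETE;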

  • How to extract incremental data from SQL server to oracle tables in ODI

    Hi All,
    In my ODI setup, the source is SQL Server and the target is Oracle.
    I need to create an interface mapping where I extract incremental data from SQL Server to Oracle.
    There is a datetime (with timestamp) field in SQL Server, and I need to pull the incremental data based on that datetime.
    Example: tablename.DateTime > (select '1-jan-11' from dual) ... I am using this query but it's not working; the error is "Invalid object name 'dual'".
    We are not going to use an incremental IKM or LKM.
    Please provide any suggestions ASAP.
    Thanks,
    Lony

    You can do that via a variable.
    (As an aside, the "Invalid object name 'dual'" error occurs because the filter is executed on the SQL Server source, and DUAL exists only in Oracle; on SQL Server, a SELECT of a literal needs no FROM clause at all.)
    In the interface mapping, create a filter on Tablename.DateTime
    with a condition like this:
    Tablename.DateTime BETWEEN #VAR
    In the variable, use this query on the Refreshing tab against the Oracle schema:
    SELECT max(start_time)||' AND '||max(END_TIME)+1 from audit_table where ETL_JOB_CODE = '20'
    In the package, call the above variable in refresh mode and then the interface.
    This way the variable supplies the BETWEEN bounds to the interface, so that SQL Server fetches only the data in that date range.
    Note: you might need to tweak the date format so that SQL Server can understand it.
    Hope this helps.
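    As a concrete sketch of the two filter styles (#VAR as in the reply above; the literal form runs entirely on SQL Server):
        (incremental window, refreshed into #VAR from the Oracle audit table)
        TABLENAME.DateTime BETWEEN #VAR
        (or a simple literal cutoff, valid on SQL Server, where no DUAL is needed)
        TABLENAME.DateTime > '2011-01-01'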

  • Is there an auto-increment data type in Oracle?

    Is there an auto-increment data type in Oracle?
    How do you do it if there is no auto-increment data type in Oracle?

    jackie (guest) wrote:
    : Is there an auto-increment data type in Oracle?
    : How do you do it if there is no auto-increment data type in Oracle?
    Hi,
    I think you need unique IDs; for this purpose you use sequences
    in Oracle. Example:
    create table xy (
    id number,
    name varchar2(100)
    );
    alter table xy
    add constraint xy_pk primary key(id);
    create sequence xy_seq start with 1 maxvalue 99999999999;
    (there are many other options for create sequence)
    create or replace trigger xy_ins_trg
    before insert on xy
    for each row
    begin
    select xy_seq.nextval
    into :new.id
    from dual;
    end;
    /
    This produces a unique value for the column id on each insert.
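    For example, a quick check (assuming the objects above):
        insert into xy (name) values ('first');
        insert into xy (name) values ('second');
        select id, name from xy;
        -- id is filled by the trigger: 1 for 'first', 2 for 'second'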
    Hope this will help.
    peter

  • Increment date parameters on each run

    Hi,
    While scheduling concurrent requests, there is an option called
    "Increment date parameters on each run".
    What exactly does this option mean?

    Check the following thread:
    schedule purge program
    Also, have a look at:
    Note: 151849.1 - Scheduled Periodic Concurrent Program Runs One Time Only
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=151849.1

  • Ideas for Providing User Level Data Backup and Restore

    I'm looking for ideas for implementing user-level application data backup and restore in an APEX app.
    What would be great is to provide the user with an export file and a way to import it. A bit overkill, but hopefully never needed.
    Another option that is perfectly doable is a report that simply provides a means to create an export of the data. Since I already have an import interface, such an export could be fed back through it.
    Any thoughts?
    Hopefully I'm missing something already there for an end user to use.

    jlincoln wrote:
    "Do you mean 'export' and 'import' colloquially, or in the specific sense of the exp/imp/datapump utilities?"
    I mean as in the imp/exp Oracle utilities. Generally speaking, it would be neat to be able to export and import an application via APEX. In this hosted environment I don't have that access, but would this be a bad idea if you don't care about the existing data in the schema in which the data resides?
    I can envisage a mechanism using exp/imp, but since it requires dbms_scheduler external jobs and access to the file system, it's highly unlikely to be possible in a hosted environment. (Unless you're doing the hosting?)
    "Backup": Necessary for peace of mind and flexibility. I am working on a VB/Access user who does this today, to get to the point where they can be comfortable with the backups occurring regularly and being handled by the hosting site's DBA group.
    "Restore": Like I said, I am working on that user. This is a very small data set; a restore would simply remove the existing data and replace it with the new data.
    My opinion is that time would be better spent working on the user rather than on a redundant backup and restore feature. Involve them in a disaster recovery exercise with whoever is hosting the environment to prove that their data is safe. Normally the inclusion of data in regular, effective database backups is sold as a major feature of APEX solutions.
    "What about security/privacy when this data ends up in uncontrolled environments?"
    I don't understand the point of this question. The data should not end up in uncontrolled environments, just like the data in the database or its backups.
    Again, having data in a central, shared location protected by multiple levels of application, database, and OS security is usually seen as a plus for APEX over VB/Access. Exporting the data in toto to a PC/laptop that can be stolen or lost, and where it can be copied to USB drives/phones/email, loses this protection.
    "User Level": Because the end user must have access to the backup and restore mechanisms of the application.
    "Application Data": The application data. Less than 10 MB, very small. It can be exported in a flat file downloaded by the end user. This file can then be uploaded and imported via an existing application interface, for example.
    "I'm struggling to parse this for meaning."
    When I say I have an existing interface, I am referring to a program residing in the APEX application that takes data from a flat table structure (i.e. an interface table), validates the data, derives data, and loads it into the target table structure.
    Other than the report export capability linked to above, there's nothing built in to APEX that comes close to your requirement. If the data is simple enough that it can be handled in such a report, and you have a process that can read and recreate this export, then you have your backup/restore capability. If the data can't be handled in a simple report, then you'll need a more complex PL/SQL process to generate the file.

  • Essbase data backup - 9.3.1

    Hi All,
    I have a question around Essbase data backups.
    We are on 9.3.1, and the Essbase data files (.ind and .pag) are in the default location (Hyperion\Essbase\App etc.).
    One of our apps is 70 GB and gets backed up nightly.
    Now, is it necessary to take an export of the data for the backup to be complete? My main concern is that exporting 70 GB of data will take hours.
    If for some reason we need to restore this app to another server, will copying all files under the Essbase\App\Appname folder also restore the data?
    Thanks for your help.
    Seb

    Provided the Essbase application and database are stopped when the backup occurs, backing up the .ind and .pag files, as well as the database outline (.otl) and the .tct, .db, .dbb, and .esm files, should allow you to recover from a backup. I have had successful recoveries from standard file backups of these files without issue, and in pretty good time.
    I have not tried to restore to a different server. Usually when migrating, I copy the outline over, export the data from one database, and import it to the other.
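    For example, a rough sketch of the file-level copy (paths are placeholders; stop the application first):
        cd $ARBORPATH/app
        tar cf /backup/Appname_`date +%Y%m%d`.tar Appname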
    Imran
    Edited by: ImranS on Apr 7, 2009 9:51 AM

  • Incremental Data loading in ASO 7.1

    HI,
    As per the 7.1 Essbase DBAG:
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can have incremental loading for ASO in 7.1 for an outline that doesn't change structurally. Now, what does it mean for an outline to change structurally? If we add a level 0 member to any dimension, does that count as a structural change to the outline?
    It also says that adding Accounts/Time members doesn't clear out the data; only adding/deleting/moving standard dimension members will clear out the data. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member
    Edited by: user3934567 on Jan 14, 2009 10:47 PM

    Adding a level 0 member is generally, if not always, considered a structural change to the outline. I'm not sure whether I've tried adding a member to Accounts to see if the data is retained. This may be because, by definition, the Accounts dimension in an ASO cube is a dynamic (versus stored) hierarchy. And perhaps, since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it, although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data, which seems consistent with your bullet points. I have also been able to move around and change attribute dimension members (which I would guess is generally considered a non-structural change) and change aliases without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS, and I load to a test server first. If you're in doubt about losing the data, try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.
