Backup a user/tablespace with trigger

Hi,
I have a database from which I have to back up the data user by user (or, if there is no other way, tablespace by tablespace), each user at a different time. Now I need a way to back up a user (or maybe the tablespace) together with all his triggers and so on. But I don't want to back up SYS every time.
Is there any possibility to do this?

If you want to store all data of a specific user, you still have to use export (or Data Pump in 10g). Although it is possible to back up single tablespaces, that wouldn't help you: trigger definitions are stored in the data dictionary, which means in the SYSTEM tablespace.
Export is not a substitute for a regular backup; any data that changes after the export was taken is lost.
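For example, a per-schema export could look like this (a minimal sketch; the schema name, file names, and directory object are placeholders):
exp system/password owner=scott file=scott.dmp log=scott.log
or, with Data Pump in 10g:
expdp system/password schemas=scott directory=dpump_dir dumpfile=scott.dmp logfile=scott.log
Either form includes the user's triggers, since they are exported as part of the schema.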

Similar Messages

  • User tablespace did not copy during cold backup

    Can anybody explain why this happened and how to fix?
    I was doing my regular weekly cold backup and all the other tablespaces got copied except the user tablespace. I cannot find any error in the alert log or the Windows event log. The file size is 25G. Any idea?
    Thanks

    The original poster has stated that this is a cold backup, so presumably no BEGIN/END BACKUP is involved here. Further, how would those commands do anything to cause a backup script to SKIP the files for a tablespace?
    As for the OP's problem, I will guess that this is occurring on Windows. Because the default flags that Windows programs use to open files often specify exclusivity, many backup scripts and packages will not be able to open (or copy) a file that is held open by another process. Notably, if the backup is hitting such a problem it should be receiving (and hopefully logging) an error.
    I can't guess what other program might have had the files open, but if Oracle did not shut down completely, or if some threads failed to exit during shutdown, you would end up with some files with open file handles. That would prevent commands like COPY from working on those files.
    Solutions (if this is Windows and my theory holds water) would include:
    - Adding exception handling to the cold backup script to verify that Oracle has shut down completely and that no Oracle threads remain.
    - Switching to ocopy.exe (Oracle's non-exclusive copy program for Windows) so that the backup would not fail on files with open file handles; a sample invocation follows this list. This is a little reckless, though, since the database could be open and the cold backup would proceed heedless of that fact.
    - Using Process Explorer to track down which program has the files open after shutdown.
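    For reference, ocopy takes a source and a destination much like COPY does (the paths here are placeholders, not from the original post):
    ocopy D:\oracle\oradata\orcl\users01.dbf E:\coldbackup\users01.dbf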
    Hope this helps,
    Jeremiah Wilton
    ORA-600 Consulting
    http://www.ora-600.net

  • Help Importing objects from 1 user to another user and problem with trigger

    Hello community, I am having a little difficulty exporting objects from one user to another, specifically the exporting of the trigger.
    Here is the situation: for SOX purposes, whenever an update is sent to the client, the DBA has to execute the script as himself (a privileged user) and is not allowed to log into the schema to make changes. Therefore we prepend the object definitions with &user_schema.. and they define user_schema in SQL*Plus and execute the update script.
    Here is a small example, which requires two users (user1 and user2) with the following grants (connect, create table, create trigger, create view, create sequence). Please forgive the naming of the objects; I am just trying to keep it as simple as possible.
    I start out by logging in as system user via sqlplus and execute the following.
    ------------------Begin sqlplus----------------------
    define user_schema=user1;
    create table &user_schema..abc01 (
      col1 number,
      col2 varchar2(20),
      col3 number,
      constraint pk_abc01_col1 primary key (col1)
    );
    create table &user_schema..xyz01 (
      col1 number,
      col2 varchar2(20),
      col3 number,
      constraint pk_xyz01_col1 primary key (col1)
    );
    create or replace view &user_schema..view1 as
    select x.col1, x.col2, x.col3, a.col1 as acol1, a.col2 as acol2, a.col3 as acol3
    from xyz01 x
    inner join abc01 a on a.col1 = x.col1;
    create sequence &user_schema..seq_xyz01 start with 1 increment by 1;
    create or replace trigger &user_schema..trig01
    before insert on &user_schema..xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    /
    --------------------End sqlplus----------------------
    I would then proceed to export using the exp utility via the command line
    exp system/systempassword file=user1.dmp owner=user1
    Then import user1 objects into user2
    imp system/systempassword file=user1.dmp fromuser=user1 touser=user2
    Now the problem:
    When I take a look at the SQL for user2's trigger (trig01), I see the following (viewed via SQL Developer):
    create or replace TRIGGER "USER2".trig01
    before insert on user1.xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    It's referring to the user1.xyz01 table; however, I want it to point to the user2.xyz01 table. Can someone please help me out or offer another way to go about this? I need the ability to import the objects into a different user without the import failing and without having to recompile the objects.
    I've also tried executing this while connected as the system user via SQL*Plus:
    define user_schema=user1
    create or replace trigger &user_schema..trig01
    before insert on xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    /
    but that fails, stating that the table or view does not exist. Please help.
    Edited by: user3868150 on Nov 6, 2009 6:05 PM

    When performing an update on their system, the same script will be run with different values; that's not the problem.
    The client currently has just that one schema in their environment; however, they want to set up another instance of the application in the same database and have it go off on its own track, separate from the original application.
    Now when we do an exp of the schema and imp it into another user, it gets imported, but incorrectly. As stated before, the trigger will be acting on the table in the original schema when it should be acting on the table in the newly imported schema.
    I suppose there is no way around this when a trigger is created the way I outlined above. I guess after the data gets imported into a different user, the trigger has to be recompiled to point to the correct table.
    If you have an alternate way to go about this, I am open to suggestions. However, as I mentioned in the original post, for SOX purposes the DBA is not allowed to log in and execute update scripts as the schema user. The scripts should only be executed as that privileged user (DBA).
    Also, if I hard-code the user when the trigger is created:
    create or replace trigger user1.trig01
    before insert on user1.xyz01 for each row
    begin
    if (nvl(:new.col1, -1) = -1) then
    select seq_xyz01.nextval into :new.col1 from dual;
    end if;
    end;
    /
    it still doesn't get imported into user2 the way I want:
    create or replace TRIGGER "USER2".trig01
    before insert on user1.xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    Now when I do an insert to test:
    insert into user1.xyz01 (col2, col3) values ('abc', 123); -- Works fine, no problems here
    insert into user2.xyz01 (col2, col3) values ('abc', 123);
    This results in an error, ORA-01400: cannot insert NULL into ("USER2"."XYZ01"."COL1"), because the trigger doesn't exist on the user2.xyz01 table.
    Just try creating the schema the way I outlined above in your environment to see what I'm talking about. It seems that the only way to get the trigger imported the way I want is to actually log in as that user and create the trigger.
    sqlplus user1/user1
    {code}
    create or replace trigger trig01
    before insert on xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    /
    {code}
    Edited by: user3868150 on Nov 6, 2009 6:10 PM
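    One approach that might reconcile the SOX constraint with clean imports (a sketch, not from the original thread; it assumes the privileged user may issue ALTER SESSION and has CREATE ANY TRIGGER): set CURRENT_SCHEMA before creating the trigger, so both the trigger and its unqualified table reference resolve in the target schema. Because the stored trigger text then contains no schema prefix, imp should remap it cleanly, just as it does when the trigger is created by the schema owner:
    {code}
    -- connected as the privileged user, not as user1
    alter session set current_schema = &user_schema;
    create or replace trigger trig01
    before insert on xyz01 for each row
    begin
      if (nvl(:new.col1, -1) = -1) then
        select seq_xyz01.nextval into :new.col1 from dual;
      end if;
    end;
    /
    {code}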

  • Looking for a better way to backup end users

    Hello there.
    Not sure if this is the best forum for this post, but here goes.
    Last week my company had a break-in and a couple of laptops got lifted. Not a big deal, as I had backups of the machines on tape. It did take a while to recover the files, though. During that time I got to thinking about a better way to back up, and I am looking for any ideas on whether this would work or not.
    First, a quick overview: about 60 end users, the majority with laptops.
    What I am thinking of doing is this: take an Xserve RAID configured in RAID 5, for redundancy and speed, and create a shared folder on it for each of my end users. This way, if they need to retrieve a file they can log in and will not have to call me. Then, and this is where I need help, use some tool to copy all of their files into their folder at specified intervals, at least once a day. I am currently using Retrospect to back up everyone directly to tape, and I could use that to do this, but the one thing I do not like about Retrospect is that it can only do one user at a time. With 60 users I have to start it pretty early in the day. I know copying files would be a lot faster, so I am not ruling it out. I was also thinking about using ChronoSync to accomplish this and then using Retrospect to back up the RAID to tape for long-term storage. I am ruling out home directories on the Xserve for my users because they all have laptops and are usually out of the office at least one day a week.
    If anyone could offer some ideas of whether or not this is going to work or not or if you have other suggestions, any help would be greatly appreciated.
    Thanks in advance.
    Daniel Krajc

    Hi,
    you could use a combination of Mobile Accounts (for the settings of the iBook) and rsync (which is installed by default on 10.3 and 10.4); check man rsync for more info.
    regards.
    Dimi.
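    For instance, a nightly sync of one user's files to their shared folder could look like this (a sketch; the paths are placeholders):
    rsync -av --delete /Users/daniel/Documents/ /Volumes/xserveraid/daniel/
    The -a flag preserves permissions and timestamps, -v prints what is copied, and --delete mirrors removals so the destination stays an exact copy.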

  • Error when dropping a user/tablespace

    Hi
    I am trying to drop a user and its tablespaces, but I get the following error:
    SQL> drop user SM92 cascade;
    drop user SM92 cascade
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00942: table or view does not exist
    Currently this user has no tables or views; the statement select * from all_tables where owner='SM92' returns no rows.
    Anyone know why I can't drop this user? What should I check?
    Thanks
    Li

    Hi, the user probably has dependencies with other schemas; please review Note 361576.1 on the Metalink site.
    Good luck.
    Regards.
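    As a quick first check (a generic diagnostic, not from the note), list every object type the user still owns, since ALL_TABLES shows only tables:
    select object_type, object_name from dba_objects where owner = 'SM92';
    Recursive SQL during DROP USER may be tripping over leftover objects (for example queues or types) that this query will reveal.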

  • Time Machine does not backup home/user directory (on separate drive)

    I recently installed an SSD into my Mini. Due to size restrictions, my home/user directory has to be kept on another drive. I retained the stock 1TB drive that came with the Mini for this.
    OK, I installed the SSD and restored a Time Machine backup (sans user data). I used a different admin user and configured my user to use the 1TB drive for its home directory (/Volumes/1TB/home/<user>). Restart, log in as my user, all is good. All data, settings, etc. are there. Everything looks normal.
    Time Machine REFUSES to back up this directory. It will back up the 1TB drive and anything else I create on it, but not the home directory. I tried every permission trick I could think of or found online. I even tested it further by formatting the 1TB drive fresh, adding a new user, and configuring the user to use the 1TB for their home directory, and it still won't back it up (this was a test of the permissions the OS sets, to make sure I didn't change my data perms somewhere along the way). Time Machine would not back up the new user's home directory on the 1TB drive.
    Any thoughts? I can't be the first person to have their home directory on a non-OS drive.
    If I were to create a folder/file in /Volumes/1TB/<test file> ... Time Machine gets it perfect. It just will NOT touch /Volumes/1TB/home/<anything here>
    Thanks!

    Open the Time Machine preference pane and unlock the settings, if necessary. Click the Options button. If there is one particular folder with items that are not being backed up reliably, add it to the list of excluded items. If there are many such folders, add your home folder to the list, or add a whole volume (i.e., what Apple calls a "disk.") Save the changes.
    Start a backup, or wait for one to happen automatically. When it's done, open the preference pane again and remove the exclusion(s) you made earlier. Back up again and see whether there's a change.

  • Time Machine not backing up, as hard drive is full. Can Time Machine clear old backups and replace them with new?

    Time Machine not backing up.
    As the hard drive is full, can Time Machine clear old backups and overwrite them with new backups? For example, ones 5-6 days old or more. Backups were performed daily.
    Any help much appreciated
    Many thanks,

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.

  • How to backup only users' relevant files from File History on Windows 8.1?

    Hi,
    I would like to find out how to back up a certain user's files with File History on Windows 8.1.
    I'm planning to store the backup files on our server, over a shared network, so that client can retrieve them whenever they need.
    I managed to do it by excluding all the folders from C:\, but it seems quite silly to me to go and select files one by one to exclude them.
    Please advise if there is a better way.

    Hi,
    This IT Professional forum is for general questions, feedback, or anything else related to Office 2010. Since your question is more related to the Windows client, I'd recommend you post a new question in the following forum for further assistance:
    https://social.technet.microsoft.com/Forums/windows/en-US/home?category=w7itpro%2Cw8itpro%2Cwindowsvistaitpro%2Cwindowsxpitpro%2Cwindowsintune
    The reason why we recommend posting appropriately is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Steve Fan
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Reference partition with Trigger

    This post is an extended version of my previous post, Dynamic Reference partition by Trigger, from Feb 24.
    I need to create a dynamic reference partition with a trigger.
    Requirements:
    1. It should be a list partition.
    2. It should be a reference partition (there are 2 tables with a parent-child relation).
    3. It should be dynamic (trigger-based): whenever a new value enters the list, a new partition should be created by the trigger. As per my understanding, interval partitioning is dynamic, but it works only for range partitioning.
    Put another way, I could manually create both tables with list partitions initially, but when a new value is added to the list the table is supposed to add a new partition. The list values are unknown at creation time, and since they may grow to more than five in the future (and five values would need five partitions), a partition with a DEFAULT clause is also not possible here.
    I have almost completed this task but am facing some issues. Please find the example I have done below. (Please note that it is just an example and not my actual requirement.)
    ==============================================================================================================
    --TABLE 1
    create table customers (
      cust_id number primary key,
      cust_name varchar2(200),
      rating varchar2(1) not null
    )
    partition by list (rating) (
      partition pA values ('A'),
      partition pB values ('B')
    );
    --TABLE 2
    create table sales (
      sales_id number primary key,
      cust_id number not null,
      sales_amt number,
      constraint fk_sales_01 foreign key (cust_id) references customers
    )
    partition by reference (fk_sales_01);
    --TRIGGER
    CREATE OR REPLACE TRIGGER CUST_INSERT
    BEFORE INSERT ON CUSTOMERS FOR EACH ROW
    DECLARE
      V_PARTITION_COUNT NUMBER;
      V_PART_NAME VARCHAR2 (100) := :NEW.RATING;
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      SELECT COUNT (PARTITION_NAME)
        INTO V_PARTITION_COUNT
        FROM USER_TAB_PARTITIONS
       WHERE TABLE_NAME = 'CUSTOMERS' AND PARTITION_NAME = V_PART_NAME;
      IF V_PARTITION_COUNT = 0 THEN
        EXECUTE IMMEDIATE 'ALTER TABLE CUSTOMERS ADD PARTITION '||V_PART_NAME||' VALUES ('''||V_PART_NAME||''')';
      END IF;
    END;
    /
    --INSERT TO CUSTOMER
    insert into customers values (111, 'OOO', 'C'); -- A and B are already in the customers list from the create script, so I am inserting a new row with 'C'
    ==============================================================================================================
    When I insert the new value 'C' into the customers table, it is supposed to create a new partition along with the new row.
    But I am getting the error "ORA-14400: inserted partition key does not map to any partition", as if the partition does not exist.
    However, if I execute the same insert statement again, the row is inserted.
    That means that even though it is a BEFORE INSERT trigger, the insert is attempted before the partition creation.
    Since it is a BEFORE INSERT trigger, my expectation is:
    a) create the partition first
    b) insert the record second
    But the actual behavior is:
    a) try to insert first, and fail
    b) create the partition second
    That is why the record gets inserted when I try a second time.
    Can anyone help me with this? Thanks in advance.
    Shijo

    You can't do this with a trigger. And you really, really don't want to.
    As you've discovered, Oracle performs a number of checks to determine whether an insert is valid before any triggers are executed. It is going to discover that there is no appropriate partition before your trigger runs. Even if you somehow worked around that problem, you'd likely end up in a rather problematic position from a locking standpoint, since you'd be doing DDL on the object in an autonomous transaction while doing an insert, probably from a stored procedure that would itself be invalidated by that DDL; that is quite likely to cause an error anyway. And that's before we get into the effects of the DDL on other sessions running at the same time.
    Why do you believe that you need to add partitions dynamically? If you are doing ETL into a data warehouse, it makes sense to add any partitions you need at the beginning of the load, before you start inserting the data, not in a trigger. If this is some sort of OLTP application, it doesn't make sense to do DDL while the application is running. I could see potentially having a default partition and then a nightly job that does a split partition to add whatever list partitions you need (see the sketch below), but that strikes me as a corner case at best. If you really don't know what the possible values are and users are constantly creating new ones, list partitioning seems like a poor choice to me; wouldn't a hash partition make more sense?
    Justin
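    For what it's worth, the default-partition approach could look like this against the example tables above (a sketch only; the partition names are made up, and with a reference-partitioned child the split should cascade to the child automatically):
    alter table customers add partition p_default values (default);
    -- nightly job: give a newly observed value its own partition
    alter table customers split partition p_default values ('C')
    into (partition pC, partition p_default);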

  • How to create a new user aaa with same rights as existing user bbb ?

    Assume user bbb already exists in an Oracle 10g database.
    How can I create a new user aaa with the same rights/permissions as the old user bbb?
    Does this procedure/command also work if the old user is the user "system" (= DB admin)?

    There is a possibility to generate an EXPDP dump file which contains only the DDL statements related to the account and its privileges: the EXCLUDE/INCLUDE parameters can help.
    For example, the following EXPDP statement seems to work with the SYSTEM account:
    expdp / schemas=system content=metadata_only exclude=table,sequence,package,function,procedure,synonym,type,view dumpfile=DPD:system.dmp logfile=DPD:system.log
    Export: Release 10.2.0.2.0 - Production on Thursday, 14 February, 2008 9:41:36
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "OPS$XXX"."SYS_EXPORT_SCHEMA_01":  /******** schemas=system content=metadata_only exclude=table,sequence,package,function,procedure,synonym,type,view dumpfile=DPD:system.dmp logfile=DPD:system.log
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
    Master table "OPS$XXX"."SYS_EXPORT_SCHEMA_01" successfully loaded/unload
    ed
    Dump file set for OPS$XXX.SYS_EXPORT_SCHEMA_01 is:
      C:\TEMP\SYSTEM.DMP
    Job "OPS$XXX"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:41:41
    impdp / sqlfile=dpd:system.sql dumpfile=DPD:system.dmp logfile=DPD:system.log
    Import: Release 10.2.0.2.0 - Production on Thursday, 14 February, 2008 9:42:46
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "OPS$XXX"."SYS_SQL_FILE_FULL_05" successfully loaded/unloaded
    Starting "OPS$XXX"."SYS_SQL_FILE_FULL_05":  /******** sqlfile=dpd:system.sql dumpfile=DPD:system.dmp logfile=DPD:system.log
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
    Job "OPS$XXX"."SYS_SQL_FILE_FULL_05" successfully completed at 09:42:50and system.sql is:
    -- CONNECT OPS$XXX
    -- new object type path is: SCHEMA_EXPORT/USER
    -- CONNECT SYSTEM
    ALTER USER "SYSTEM" IDENTIFIED BY VALUES '970BAA5B81930A40'
          TEMPORARY TABLESPACE "TEMP";
    -- new object type path is: SCHEMA_EXPORT/SYSTEM_GRANT
    GRANT GLOBAL QUERY REWRITE TO "SYSTEM";
    GRANT CREATE MATERIALIZED VIEW TO "SYSTEM";
    GRANT SELECT ANY TABLE TO "SYSTEM";
    GRANT CREATE TABLE TO "SYSTEM";
    GRANT UNLIMITED TABLESPACE TO "SYSTEM" WITH ADMIN OPTION;
    -- new object type path is: SCHEMA_EXPORT/ROLE_GRANT
    GRANT "DBA" TO "SYSTEM" WITH ADMIN OPTION;
    GRANT "AQ_ADMINISTRATOR_ROLE" TO "SYSTEM" WITH ADMIN OPTION;
    GRANT "MGMT_USER" TO "SYSTEM";
    -- new object type path is: SCHEMA_EXPORT/DEFAULT_ROLE
    ALTER USER "SYSTEM" DEFAULT ROLE ALL;
    -- new object type path is: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    BEGIN
    sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'BAS002.REGRESS.RDBMS.DEV.US.ORACLE.COM', inst_scn=>'1456160');
    COMMIT;
    END;
    -- new object type path is: SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
    BEGIN
    SYS.DBMS_AQ_IMP_INTERNAL.CLEANUP_SCHEMA_IMPORT;
    COMMIT;
    END;
    /
    These export and import steps don't take into account privileges granted on schema objects belonging to another user, likely due to the EXCLUDE parameters.
    Message was edited by:
    Pierre Forstmann
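    An alternative sketch (not from the reply above): generate the GRANT statements directly from the dictionary views. This covers system privileges, roles, and object grants but not quotas, and aaa/BBB stand in for your own user names:
    select 'grant '||privilege||' to aaa;' from dba_sys_privs where grantee = 'BBB';
    select 'grant '||granted_role||' to aaa;' from dba_role_privs where grantee = 'BBB';
    select 'grant '||privilege||' on '||owner||'.'||table_name||' to aaa;' from dba_tab_privs where grantee = 'BBB';
    Spool the output and run it after CREATE USER aaa.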

  • How to backup read only tablespace in rman

    Hi,
    I am using Oracle 10gR2 on Solaris 10.
    I am taking RMAN full backups using a catalog.
    My target database has 5 datafiles, and one of those datafiles is in read-only mode.
    When I issue the backup command in RMAN, it backs up only 4 datafiles; it does not take the read-only datafile.
    I want to take a backup of all 5 datafiles.
    Kindly help me.
    Many Thanks

    My target database have 5 datafiles, in that one datafile is in read only mode.
    I presume that you mean that one tablespace, consisting of one datafile, is read-only.
    What does SHOW ALL in RMAN present?
    If you have BACKUP OPTIMIZATION set to ON, the read-only tablespace will not be backed up after the first backup.
    You could still explicitly back up the tablespace with BACKUP TABLESPACE tablespacename; even though a BACKUP DATABASE would exclude it.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Edited by: Hemant K Chitale on May 3, 2010 12:02 AM
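    In RMAN, the options Hemant describes would look roughly like this (a sketch; tablespacename is a placeholder):
    RMAN> SHOW BACKUP OPTIMIZATION;
    RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;
    RMAN> BACKUP TABLESPACE tablespacename;
    With optimization left ON, a BACKUP DATABASE FORCE also overrides the optimization for a single run.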

  • Procedure to fully backup a users email & manually deprovision a OCS user

    Procedure to BACKUP/RESTORE an email account (note: you need to create a directory on the filesystem to store the backup)
    Backup a user's email account
    Source the midtier env, e.g. ORACLE_HOME, ORACLE_SID, PATH
    oesbkp task=backup type=all user=<email_address> admindn=cn=orcladmin password=<password> ldaphost=<hostname> ldapport=3090 backupdir=<path to backup dir>
    Restore a user's email account
    Source the midtier env, e.g. ORACLE_HOME, ORACLE_SID, PATH
    oesbkp task=restore type=all user=<email_address> admindn=cn=orcladmin password=<password> ldaphost=<hostname> ldapport=3090 backupdir=<path to backup dir>
    Procedure to Manually Deprovision A user in OCS
    In this example we will delete the user:
    email [email protected]
    userid 100009
    Step 1 : Delete the user in the GAL
    Source the 10g cal env e.g. ORACLE_HOME , ORACLE_SID , PATH
    Check for user in GAL
    uniuser -ls -n 1 | grep <user_name>
    Delete the user from the GAL
    uniuser -del "S=<last_name>/G=<first_name>*" -n 1
    e.g.
    uniuser -ls -n 1 | grep bruce.wayne
    Enter a password:
    + [email protected]/UID=100009/AUTOREFRESH=1/
    uniuser -del "S=wayne/G=bruce*" -n 1
    Enter a password:
    Delete "S=wayne/G=bruce/UID=100009/ID=5566/NODE-ID=1" and its agenda [y/n]: y
    uniuser: "S=wayne/G=bruce/UID=100009/ID=5566/NODE-ID=1" has been deleted
    Step 2: Delete the user in OID
    Source the midtier env, e.g. ORACLE_HOME, ORACLE_SID, PATH
    Check for the user in OID (note: ensure you have the correct port; in this example we use 3060)
    ldapsearch -h <hostname> -p 3060 -D "cn=orcladmin" -w <password> -s sub \
    -b "cn=Users,dc=...................." -v "cn=<userid>"
    Delete the user in OID
    Create a file called "user.ldif" of the format
    echo "cn=<userid>, cn=Users, dc=....................">user.ldif
    Execute the ldapdelete utility
    ldapdelete -h <hostname> -p 3060 -D "cn=orcladmin" -w <password> -v -f user.ldif
    deleting entry cn=<userid>, cn=Users, dc=............................................
    delete completed
    Step 3: Delete user from the mail store
    Source the midtier env e.g. ORACLE_HOME , ORACLE_SID , PATH
    Check for the user in the database:
    echo "select username from es_user where USERNAME like '%<username>%';" > user.sql
    sqlplus es_mail/<password> @user.sql
    Create "mailstore_user.txt" of the format
    echo "mail=<email_address>">mailstore_user.txt
    Clean the mail store
    oesucr mailstore_user.txt -d -v
    oesucr mailstore_user.txt -clean_user_mailstore_data

    Hi Guys,
    Interesting question. It has me wondering how I can do something similar, though not so much for the email (because we are not using Oracle Mail) but for the security setup of a user in OCS. E.g. a user is granted access to many folders or objects, and we want an easy way to deprovision everything. (If backing up is possible before the deprovisioning, even better; just in case a wrong delete is performed, it is recoverable.)
    The other thing I'm interested in is whether a branch in OCS can be backed up and recovered easily (together with all its meta-data and attributes)?
    Regards
    Steve

  • TIP 01: Default User Tablespace in 10g by Joel Pérez

    Hi OTN Readers!
    Every day I connect to the Internet, and one of the first things I do is open the OTN main page to look for any new article or news about Oracle technology. Then I open the main page of the OTN Forums and check which answers I can write to help people working with Oracle technology, and I have decided to begin writing some threads to help DBAs and developers learn the new features of 10g. I hope you can take advantage of them; they will be published here in this forum. For any comment you can write to me directly at: [email protected]
    Please do not reply to this thread; if you have any question related to it, I recommend you open a new post. Thanks!
    The tip of this thread is: DEFAULT USER TABLESPACE
    Joel Pérez
    http://otn.oracle.com/experts

    Step 9: In step 5 we changed the default tablespace of the database, but there is an important detail we have to be aware of: when you change the default tablespace of the database, the users that were using it are moved to store their new objects in the new default tablespace, but the objects that were already stored in the original tablespace remain there. Let's take a look:
    Here we are connected to the database as the SYSTEM user:
    SQL> show user
    USER is "SYSTEM"
    Let's connect as the user new_user in order to create a table.
    SQL> conn new_user/new_user@base1
    Connected.
    Creating a table in the schema NEW_USER:
    SQL> create table t1_new_user(c1 number);
    Table created.
    As you can see, the table is stored in the default tablespace assigned to the user new_user:
    SQL> select TABLE_NAME, TABLESPACE_NAME from dba_tables
      2  where owner='NEW_USER';
    TABLE_NAME                     TABLESPACE_NAME
    T1_NEW_USER                    USERS1
    Now we are going to change the default tablespace at the database level:
    SQL> alter database default tablespace TEST;
    Database altered.
    Now let's see where the object is stored:
    SQL> select TABLE_NAME, TABLESPACE_NAME from dba_tables
      2  where owner='NEW_USER';
    TABLE_NAME                     TABLESPACE_NAME
    T1_NEW_USER                    USERS1
    The object remains in the original tablespace assigned to the user new_user.
    Here we can see that the change was successful but the table still remains in the original tablespace. You will have to move the objects manually to the new default tablespace, for example:
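    (A sketch; note that moving a table invalidates its indexes, which must then be rebuilt, and index_name is a placeholder.)
    SQL> alter table new_user.t1_new_user move tablespace TEST;
    SQL> alter index new_user.index_name rebuild tablespace TEST;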
    Checking the user's default tablespace confirms the change:
    SQL> select USERNAME, DEFAULT_TABLESPACE from dba_users
      2  where username='NEW_USER';
    USERNAME                       DEFAULT_TABLESPACE
    NEW_USER                       TEST
    Joel Pérez
    http://otn.oracle.com/experts

  • Problem Exporting 'USER' Tablespace Metadata

    I am having a problem using Data Pump Export to export the 'USERS' tablespace metadata.
    When I make the USERS tablespace read only and run the Data Export job, the job aborts with "ORA-01647: tablespace 'USERS' is read only, cannot allocate space in it."
    But then when I make the USERS tablespace read and write, the job aborts with "ORA-29335: tablespace 'USERS' is not read only."
    I've been caught in this catch-22 for the 2nd day now, and I could really use some help. I'm on a Windows 32-bit platform running Oracle 10g, and here is a copy of the Data Pump Export command that I am running at the operating system prompt, which works fine for other tablespaces' metadata, like 'EXAMPLE':
    $ expdp myusername/mypassword TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=Y DIRECTORY=dtpump DUMPFILE=expdp_users.dmp LOGFILE=expdp_users.log
    Thanks!

    Hi,
    The problem is that Data Pump creates a table called the master table. It creates this in the schema running the job. If that schema's default tablespace is one you are trying to transport, then this won't work. You will have to use a different schema that does not use the set of tablespaces that you are transporting, or alter the user that is running the Data Pump job to have a different default tablespace.
    So, basically, Data Pump requires write access to the tablespace that the user uses, while transportable requires the tablespace to be read-only.
    If it were me, I would do this:
    sqlplus username/password
    alter user username default tablespace system;
    exit
    run your expdp command
    sqlplus username/password
    alter user username default tablespace orig_tablespace;
    exit
    Dean

  • Can't backup my user folder?

    I have to erase and reinstall Mac OS 10.4 on my iBook G3 notebook computer. When I try to back up my user folder, it comes up with an error that it can't copy a folder/file because the name has too many characters or it can't recognize the name, and it stops backing up my user folder.
    What is the maximum number of characters a folder or file name can have, and which characters is it best not to use when naming a folder/file under Mac OS X?
    I would like to avoid getting the above error when making a backup of my user folder.
    Please reply with the answer.
    Thank you
    Russ
    iBook G3   Mac OS X (10.4)  

    What is the maximum number of characters a folder or file name can have, and which characters is it best not to use when naming a folder/file under Mac OS X?
    These depend on the format of the disk you're copying the files to; you can find this out by selecting it in the Finder and choosing Get Info from the File menu. Two disks which have the same format have the same restrictions on filenames, and you can change the disk's format by erasing it in the Disk Utility, which is in the /Applications/Utilities/ folder. All of the Mac OS Extended variants are identical with regards to allowable filenames; Mac OS Standard, MS-DOS, and some other formats are more restrictive.
