Cloning a DB without the SYSAUX TBS - is it possible?

Hi to all,
The line below is from the Oracle® Database Backup and Recovery Advanced User's Guide, which I was reading to learn how to clone a database.
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1006474
"You can exclude any tablespace except the SYSTEM tablespace or tablespaces" .
Does this mean we can skip the SYSAUX tablespace?
Please clarify.

Vijayaraghavan Krishnan wrote:
Hi to all,
The line below is from the Oracle® Database Backup and Recovery Advanced User's Guide, which I was reading to learn how to clone a database.
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1006474
"You can exclude any tablespace except the SYSTEM tablespace or tablespaces" .
Does this mean we can skip the SYSAUX tablespace?
Please clarify.
SKIP TABLESPACE tbs_name: Excludes the specified tablespace from the duplicate database. Note that you cannot exclude the SYSTEM tablespace, the SYSAUX tablespace, undo tablespaces, or tablespaces with rollback segments.
http://www.filibeto.org/sun/lib/nonsun/oracle/11.1.0.6.0/B28359_01/backup.111/b28273/rcmsynta020.htm
You can use the SKIP TABLESPACE parameter to exclude specified tablespaces from the duplicate database. Note that you cannot exclude the SYSTEM and SYSAUX tablespaces, undo tablespaces, and tablespaces with rollback segments. You can use the TABLESPACE parameter to specify which tablespaces should be included in the duplicate database. Unlike SKIP TABLESPACE, which specifies which tablespaces should be excluded from the duplicate database, the TABLESPACE option specifies which tablespaces should be included and then skips the remaining tablespaces.
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmdupdb.htm#BRADV89954
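For illustration, a DUPLICATE command that skips a non-mandatory tablespace would look roughly like the sketch below (the auxiliary database name dupdb and the tablespace USERS are placeholders; per the documentation quoted above, SYSAUX cannot appear in the SKIP list):
run {
  allocate auxiliary channel aux1 device type disk;
  duplicate target database to dupdb
    skip tablespace users
    nofilenamecheck;
}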
Edited by: Surachart Opun (HunterX) on Nov 9, 2009 2:37 PM

Similar Messages

  • Why is the SYSAUX tablespace increasing

    Hi Experts,
    I have a bi-directional Oracle 10gR2 Streams setup on Windows 2003.
    I got an error message in the alert log:
    error 12801 in STREAMS process
    ORA-12801: error signaled in parallel query server P000
    ORA-01653: unable to extend table SYS.STREAMS$_APPLY_SPILL_MESSAGES by 8192 in tablespace SYSAUX
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-12801: error signaled in parallel query server P000
    ORA-01653: unable to extend table SYS.STREAMS$_APPLY_SPILL_MESSAGES by 8192 in tablespace SYSAUX
    The SYSAUX datafile was 32718.00 MB when it filled up; I added another datafile of 12767 MB, but it will fill up soon as well.
    I want to know why it fills up so fast. Do we need to shut down the instance for this issue,
    or just add an additional datafile? What does this mean for our Streams setup?
    The other DB server, A (which takes in more data than this server), does not have a large SYSAUX tablespace; the A DB server has been shut down twice in the past few months.
    Thanks for help
    Jim

    Spilled messages (of that kind) are messages that are dequeued and stored on disk by the apply process. It happens when messages stay in the queue for too long or consume too much memory. That has several drawbacks: (1) spilled messages are logged in the target database, so you no longer benefit from the fact that messages are normally kept in memory, and (2) you consume space in the partitioned table you've shown. On the other hand, you don't have an unlimited amount of memory, so there must be a mechanism to release some.
    An easy way to get messages spilled is to modify a large amount of data on the source in one transaction, or to leave transactions uncommitted. You can get data about the source transactions that generated those messages from DBA_APPLY_SPILL_TXN or directly from SYS.STREAMS$_APPLY_SPILL_MSGS_PART. You can use the procedure Oracle provides in the doc to display the content of the messages.
    Try to identify why you get those messages and shorten transactions on the source. Make sure you run the recommended patches on top of 10.2.0.4 too; there are several known issues in that area.
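    As a rough starting point (not part of the original reply; column names per the documented DBA_APPLY_SPILL_TXN and V$SYSAUX_OCCUPANTS views), the spilled transactions and the current SYSAUX consumers can be listed with:
    SQL> select apply_name, message_count from dba_apply_spill_txn;
    SQL> select occupant_name, space_usage_kbytes from v$sysaux_occupants order by space_usage_kbytes desc;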

  • How to reduce the size of a sysaux tablespace

    Hi All,
    I am using an Oracle 10g database. My SYSAUX tablespace size is around 6 GB.
    Statspack was configured in this database, and because of this the tablespace grew. Now I have removed Statspack (the PERFSTAT user) and the space used is down to around 600 MB. I want to reduce the size of the tablespace. Can anybody help me?
    Kiran

    Is your Statspack data stored in the SYSAUX tablespace?
    The SYSAUX tablespace is created as an auxiliary tablespace to the SYSTEM tablespace when the database is created.
    Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.
    Viewing the components using the SYSAUX tablespace:
    SQL> column occupant_name format a15
    SQL> column occupant_desc format a30
    SQL> column schema_name format a10
    SQL> select schema_name,occupant_name,occupant_desc,space_usage_kbytes
    from v$sysaux_occupants;
    You can move components to a tablespace other than SYSAUX; first check which components have a move procedure available:
    SQL> column move_procedure format a25
    SQL> column move_procedure_desc format a40
    SQL> select move_procedure,move_procedure_desc from v$sysaux_occupants;
    Two major components typically consume most of the space:
    1. AWR (Automatic Workload Repository)
    2. OEM (Oracle Enterprise Manager repository)
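    If AWR turns out to be the main consumer, one option is to shorten the AWR retention and then shrink the datafile once space has been released. A sketch only; the retention value, datafile path and target size below are placeholders:
    SQL> exec dbms_workload_repository.modify_snapshot_settings(retention => 10080)
    SQL> alter database datafile '/u01/oradata/orcl/sysaux01.dbf' resize 2g;
    The retention is given in minutes (10080 = 7 days), and the RESIZE only succeeds down to the datafile's high-water mark.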

  • Oracle 11.2.0.3.0 Database Upgrade changes DATA_PUMP_DIR

    Along with our existing RMAN backups we do exports of our DB using an OS user and Oracle Wallet.
    On the DBs we have upgraded, the Data Pump directory changed.
    Select * from dba_directories; (there are other commands to get this info as well).
    I captured screens from the DBUA upgrades, but did not see an option to change this information.
    Is there a way to feed this information to the install moving forward, i.e. ./dbua -silent?
    Also, has anyone tracked the percentage of storage increase from 10.2/11.1 to 11.2?
    All the ones we did grew, so be aware.
    Thanks in advance!

    Are you stating that the upgrade process is changing or resetting data in DBA_DIRECTORIES? It should not. If you have documented this experience, please report it to Support; it may be a known bug or a new one.
    The increase in database size should be restricted to the SYSTEM and SYSAUX tablespaces (and possibly TEMP and UNDO); user tablespaces should see no increase.
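    If the directory does come back pointing at the wrong place after an upgrade, one possible fix (the path below is only a placeholder) is to verify and re-point it:
    SQL> select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';
    SQL> create or replace directory data_pump_dir as '/u01/app/oracle/admin/orcl/dpdump';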
    HTH
    Srini

  • Moving an Oracle database to a new server with upgraded Oracle software

    Hi Guys,
    Here is the scenario: we have an existing Oracle 9.2 demo database on a Solaris 5.9 server. I was asked to transfer this to another server with the same OS but with Oracle 10g software installed. The original plan was to mount the existing Oracle home from the original server onto the new server, copy all the database files from there, and bring up the database.
    Here are the steps:
    - Issue an "alter database backup controlfile to trace" and get the script from the trace file (see the sketch below)
    - Copy all database files to the new server
    - Edit parameter files
    - Do a startup mount, recreate the control files, recover the database, and start up the database on the new server.
    I suggested that we do a transportable tablespace, but due to limited resources and the fact that the 9.2 database does not have this feature, we can't perform this process. I also can't do an exp/imp of the database since we have very limited disk resources. I was wondering if the steps I enumerated above are correct, or whether they would result in an error.
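    For reference, the trace produced by the "backup controlfile to trace" step boils down to a CREATE CONTROLFILE script along these lines (the database name, file paths and sizes here are purely illustrative and would be edited to match the new server):
    CREATE CONTROLFILE REUSE DATABASE "DEMO" NORESETLOGS ARCHIVELOG
        MAXLOGFILES 16
        MAXDATAFILES 100
      LOGFILE
        GROUP 1 '/u02/oradata/demo/redo01.log' SIZE 50M,
        GROUP 2 '/u02/oradata/demo/redo02.log' SIZE 50M
      DATAFILE
        '/u02/oradata/demo/system01.dbf',
        '/u02/oradata/demo/undotbs01.dbf',
        '/u02/oradata/demo/users01.dbf'
      CHARACTER SET WE8ISO8859P1;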

    Hi,
    Why do you want to recreate the controlfile? Are the mount point locations changing from the old server to the new server?
    Keep in mind that if you recreate the controlfile, a new incarnation starts.
    You have the downtime to upgrade. :)
    - shutdown
    - take a cold backup and move it to the new server
    - create the same directory structure as in the PFILE, including the control, redo and data file locations
    - install the software
    - startup upgrade
    - upgrade the database (you need to add a SYSAUX tablespace; see the sketch below)
    - shutdown immediate
    - startup
    - @?/rdbms/admin/utlrp.sql
    - change the hostname in the listener.ora/tnsnames.ora files
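    For the "add a SYSAUX tablespace" step, the tablespace has to be created while the database is open in UPGRADE mode, before running the upgrade scripts. Roughly like this, with a placeholder datafile path and size:
    SQL> create tablespace sysaux datafile '/u02/oradata/demo/sysaux01.dbf' size 500m
         extent management local segment space management auto online;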
    Thanks

  • How to call the Move Procedure

    Hi all,
    I am using Oracle 10gR2 on Solaris 10.
    My SYSAUX tablespace is full. I can't increase its size because it is on ASM and the disk group has no space. The largest chunk of space in SYSAUX is occupied by the Enterprise Manager repository. I know that the move procedure is emd_maintenance.move_em_tblspc.
    But how do I call it? Can someone guide me through this or paste a link which explains how to call this procedure?
    Regards.....

    If I describe sysman.emd_maintenance:
    PROCEDURE MOVE_EM_TBLSPC
    Argument Name                  Type                    In/Out Default?
    DEST_TBS_IN                    VARCHAR2                IN
    You may want to check this link also.
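    Based on that signature, the call itself is a single-argument execute, for example (the target tablespace name below is a placeholder, and the dbconsole is normally stopped before moving the repository):
    SQL> exec sysman.emd_maintenance.move_em_tblspc('NEW_EM_TBS')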
    Amardeep Sidhu
    http://amardeepsidhu.com/blog
    http://oracleadmins.wordpress.com
    Message was edited by:
    Amardeep Sidhu

  • I want to relocate my iTunes library on a NAS drive and stream to Apple TV2 and DLNA TVs around the house - is this possible and if so which is the best NAS drive to buy?

    I want to relocate my iTunes library on a NAS drive and stream to Apple TV2 and DLNA TVs around the house. Is this possible and if so which is the best NAS drive to buy?

    ged2001 wrote:
    I want to relocate my iTunes library on a NAS drive and stream to Apple TV2 and DLNA TVs around the house. Is this possible and if so which is the best NAS drive to buy?
    I don't have any experience with DLNA TVs, but I recently moved my iTunes library to a NAS. Streaming content to AirPort Express remote speakers and the Apple TV2 works great. However, I have all my gear hardwired to my Time Capsule (except the AirPort Express), so I'm not sure how well streaming e.g. HD movies to the Apple TV over WiFi works.
    I have a Synology DiskStation 411j and am very happy with it.

  • OCS 10g Application De-provisioning problem & possible related SYSAUX issue

    Hi,
    Has anyone experienced this before, and do you know how to fix it? When we de-provision applications (RTC, Calendar, Content) it takes forever. Actually it's stuck in the following status:
    Calendar (De-provisioning In Progress)
    Content (De-provisioning In Progress)
    RTC (Pending de-provisioning)
    What could be causing this?
    Could it be related to a problem we have with our SYSAUX tablespace? It is marked with status "RECOVER" when running the following SQL:
    SQL> select file#,name,status,enabled from v$datafile where file#=3;
    Its ENABLED column shows "READ WRITE".
    Is it difficult to fix this kind of issue? How?
    Thanks in advance.

    You are welcome. I'm glad you got it back up.
    (1) You say you did the symbolic link. I will assume this is set correctly; it's very important that it is.
    (2) I don't know what you mean by "Been feeding the [email protected] for several weeks now, 700 emails each day at least." After the initial training period, SpamAssassin doesn't learn from mail it has already processed correctly. At this point, you only need to teach SpamAssassin when it is wrong. [email protected] should only be getting spam that is being passed as clean. Likewise, [email protected] should only be getting legitimate mail that is being flagged as junk. You are redirecting mail to both [email protected] and [email protected] ... right? SpamAssassin needs both.
    (3) Next, as I said before, you need to implement those "Frontline spam defense for Mac OS X Server." Once you have that done and issue "postfix reload" you can look at your SMTP log in Server Admin and watch as Postfix blocks one piece of junk mail after another. It's kind of cool.
    (4) Add some SARE rules:
    Visit http://www.rulesemporium.com/rules.htm and download the following rules:
    70sareadult.cf
    70saregenlsubj0.cf
    70sareheader0.cf
    70sarehtml0.cf
    70sareobfu0.cf
    70sareoem.cf
    70sarespoof.cf
    70sarestocks.cf
    70sareunsub.cf
    72sare_redirectpost
    Visit http://www.rulesemporium.com/other-rules.htm and download the following rules:
    backhair.cf
    bogus-virus-warnings.cf
    chickenpox.cf
    weeds.cf
    Copy these rules to /etc/mail/spamassassin/
    Then stop and restart mail services.
    There are other things you can do, and you'll find differing opinions about such things. In general, I think implementing the "Frontline spam defense for Mac OS X Server" and adding the SARE rules will help a lot. Good luck!

  • Aperture previews on internal HD while RAW on external HD - possible?

    Hi. I need a solution that will allow me to store my RAW and old JPG original image library on external drives and keep high quality JPGs in the Aperture library on the internal drive. For me this would act as a secondary backup to burning RAW files to DVD for archival and disaster recovery, and also allow faster access to certain functions of Aperture that don't require messing with RAW masters, not to mention freeing up a lot of space that is very much needed. Especially now with 24 and 36 MP RAW files from my D7100 and D800, RAW files are often in the 60-70 MB range while an HQ JPG of the same image might be about 10x smaller. Is this possible with Aperture? Is anyone doing something similar to deal with similar issues with a different solution that doesn't include Aperture (I've been using Aperture since the beginning so I don't have a lot of experience with different solutions, but I'm willing to switch), such as Lightroom or Capture NX2? Obviously cost and hardware are limitations here. Ideally, I would have a RAID setup and offsite auto-backup, but this would require TBs of storage on top of the TBs I already have, which usually costs upwards of $100 per month (e.g. Bitcasa). Not an option for me right now. Thanks for any suggestions!
    P.S. Just to be clear, I'm aware of the relatively high quality thumbnails (up to 1024 pixels) Aperture generates, which are available in the program even if the managed library is unavailable (unplugged). This isn't good enough for my purpose. I'm talking about large JPGs, at least 2 to 3 times larger than a 1024 thumbnail, that could potentially be of use in a professional context in the event the originals were lost or unavailable.

    Aperture is structured to provide exactly what you want.  You have misunderstood a few details, however.
    Read my short guide to the parts of Aperture.
    Your Library holds Images (each Image is a record in a database).  You control whether the Image's Original (the file you imported) is stored inside the Library package or outside.  Images that are stored outside the Library package can be on-line or off-line (Images stored inside the Library package are on-line any time the Library is open in Aperture.)  Aperture lets you do much with Images whose Originals are off-line, all of which involves metadata:  you can rate them, put them into different containers in Aperture, assign keywords, etc.  What you cannot do when an Image's Original is off-line is:  print the Image, change Adjustments, or export the Image.
    Aperture is designed to deal with the problem of Libraries taking up too much space on a single drive by relocating older or rarely-used Images' Originals to a second (almost always external) drive.
    Cleverly, though, Aperture also includes another file based on your Image that can be useful.  This is the Preview.  It is a JPG file.  It is stored in your Library package, and thus is always on-line when your Library is open.  You create a copy of your Preview file by dragging an Image from Aperture.  Those copies can be emailed, used in other programs, etc.  (Added:  In fact, it is this Preview file that Aperture makes available to other programs via OS X's Media Browser.)
    You control:
    - whether Previews are created
    - whether Previews are deleted
    - the size and quality settings of the Preview file.
    You can control these things for each Image in your Library, but in practice this is usually set for the entire Library and not changed for any subset of Images.
    Put your Library on your system drive.  Relocate [some, many, almost all] Originals to a dedicated external drive.  (Note that this is not a one-way trip: Aperture makes it easy to relocate or consolidate any Originals at any time.  And don't worry about the Finder folder structure: it seems important to you, but to the computer it makes no difference at all.)  Set your Previews to the highest resolution and quality that you might need.  (I set mine to equal the resolution of my largest display, with "Quality" set to 10.)
    You now have pretty much what you asked for:  a trimmed-down Library, RAW Originals off of your system drive, and the ability to create fairly large, high-quality JPGs of your Images at any time.
    The next step -- should your Library outgrow this set-up -- is to put your Library on a fast external drive.  External drive throughput is excellent today.  Start-up time for a Library on an external drive is slower, but once loaded a Library on an external drive should appear to the user no different than a Library on an internal drive.
    The above makes up your _working copy_ of your Aperture system ("system" = Library + Images' Referenced Originals).  For backup, make two copies of your _working copy_.  Store one off site.  Never have all three in the same physical location.
    The Preview is, as you've noticed, a good "extra" back-up of your work.  Just be aware that it is a JPG, and is likely lower resolution than a new file created by exporting an Image.
    Lastly, you might re-consider using optical media for archival purposes.  I looked into this three or four years ago and concluded that keeping two copies of my digital archive on hard drives was less expensive, easier to maintain, more reliable, and gave me far more latitude for storage and retrieval.
    HTH,
    --Kirby.
    Message was edited by: Kirby Krieger

  • Consistent hot backup possible

    Is a consistent hot backup possible?
    I would like to perform hot backups while the database is in basically a read-only state. I am currently using the Oracle-recommended backups via OEM, for example:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup recovery area;
    }
    Would executing the SQL command "alter database begin backup;" before running the above RMAN script accomplish this task? Then of course, when it completed, I would execute "alter database end backup;".
    My basic concern is whether this type of RMAN hot backup is usable in a disaster situation, i.e. recreated on another server from a tape backup.
    I am open to any other ideas.
    Thanks for your help in advance.
    Ed - Wasilla, Alaska
    Edited by: evankrevelen on Sep 11, 2008 10:18 PM

    Thanks everyone who replied to this thread.
    Just to clarify my complete backup strategy: there are two RMAN scripts, run on a daily and a weekly basis. The daily one does pick up the archivelogs. I had shown the weekly one when first opening this thread. Here is the daily:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    release channel oem_disk_backup;
    allocate channel oem_sbt_backup1 type 'SBT_TAPE' format '%U';
    backup archivelog all not backed up;
    backup backupset all not backed up since time 'SYSDATE-1';
    }
    My question now is what RMAN does with the increments. It appears to be updating the original level 0 copies of the datafiles with changed blocks only. Is the new copy of the datafile now effectively a level 0 copy?
    Here is a transcript from one of the daily backups.
    Starting recover at 11-SEP-08
    channel oem_disk_backup: starting incremental datafile backupset restore
    channel oem_disk_backup: specifying datafile copies to recover
    recovering datafile copy fno=00001 name=+DEVRVYG1/landesk/datafile/system.2576.616107783
    recovering datafile copy fno=00002 name=+DEVRVYG1/landesk/datafile/undotbs1.2574.616107865
    recovering datafile copy fno=00003 name=+DEVRVYG1/landesk/datafile/sysaux.2575.616107829
    recovering datafile copy fno=00004 name=+DEVRVYG1/landesk/datafile/users.2572.616107871
    recovering datafile copy fno=00005 name=+DEVRVYG1/landesk/datafile/landesk.2914.616107643
    channel oem_disk_backup: reading from backup piece +DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189
    channel oem_disk_backup: restored backup piece 1
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_10/nnndn1_tag20080910t220150_0.12330.665100189 tag=TAG20080910T220150
    channel oem_disk_backup: restore complete, elapsed time: 00:05:16
    Finished recover at 11-SEP-08
    Starting backup at 11-SEP-08
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    input datafile fno=00005 name=+DEVG1/landesk/datafile/landesk.374.614072207
    input datafile fno=00003 name=+DEVG1/landesk/datafile/sysaux.384.614002027
    input datafile fno=00001 name=+DEVG1/landesk/datafile/system.383.614002025
    input datafile fno=00002 name=+DEVG1/landesk/datafile/undotbs1.385.614002027
    input datafile fno=00004 name=+DEVG1/landesk/datafile/users.386.614002027
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/nnndn1_tag20080911t220708_0.12999.665186835 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:02:26
    channel oem_disk_backup: starting incremental level 1 datafile backupset
    channel oem_disk_backup: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel oem_disk_backup: starting piece 1 at 11-SEP-08
    channel oem_disk_backup: finished piece 1 at 11-SEP-08
    piece handle=+DEVRVYG1/landesk/backupset/2008_09_11/ncsnn1_tag20080911t220708_0.2301.665186983 tag=TAG20080911T220708 comment=NONE
    channel oem_disk_backup: backup set complete, elapsed time: 00:00:21
    Finished backup at 11-SEP-08
    It appears to be updating the previous copy with the changed blocks, thus rolling the datafile copy forward to a new level 0 copy.
    Then, to restore from the backup, RMAN would first use this new copy of the datafiles and then apply any archivelogs to bring the database up to the required point in time.
    Are these assumptions true?
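    That is essentially how incrementally updated backups are meant to work. As a rough sketch (not from the thread), a restore that uses those image copies directly, with the database mounted and the copies intact on disk, would look something like:
    run {
      switch database to copy;
      recover database;
    }
    alter database open;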
    Thanks for your help,
    ED

  • Problems recovering SYSAUX

    Dear All
    I have a 10g database running on Red Hat EL 4.2. This morning I tried to run a flashback query on one of the tables, only to be told there was a problem with one of the datafiles. Further investigation showed this to be the datafile containing the SYSAUX tablespace. The status of the file is set to RECOVER. I have tried to recover it with no success. In RMAN, if I issue the command 'recover tablespace sysaux' I get the response:
    Starting recover at 19-JUN-06
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=152 devtype=DISK
    starting media recovery
    archive log thread 1 sequence 424 is already on disk as file /u08/oradata/EDM/archivelog/1_424_580063949.dbf
    archive log thread 1 sequence 425 is already on disk as file /u07/oradata/EDM/archivelog/1_425_580063949.dbf
    archive log thread 1 sequence 426 is already on disk as file /u07/oradata/EDM/archivelog/1_426_580063949.dbf
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 06/19/2006 14:43:16
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of log thread 1 seq 357 lowscn 7103125 found to restore
    RMAN-06025: no backup of log thread 1 seq 356 lowscn 7064717 found to restore
    RMAN-06025: no backup of log thread 1 seq 355 lowscn 7064686 found to restore
    RMAN-06025: no backup of log thread 1 seq 354 lowscn 7029252 found to restore
    RMAN-06025: no backup of log thread 1 seq 353 lowscn 7029220 found to restore
    Similarly issuing the command 'alter database recover datafile 3' in sqlplus gives:
    alter database recover datafile 3
    ERROR at line 1:
    ORA-00279: change 6846564 generated at 04/23/2006 18:00:42 needed for thread 1
    ORA-00289: suggestion : /u08/oradata/EDM/archivelog/1_345_580063949.dbf
    ORA-00280: change 6846564 for thread 1 is in sequence #345
    My questions are these:
    1) Based on the fact that the change number in the second response is dated 23/04/06, is this how long the problem has been ongoing?
    2) The problem doesn't seem to have had any impact on the daily use of the system. How is this possible? I thought the SYSAUX datafile was crucial to the operation of the DB.
    3) While RMAN runs every night, we no longer have the archive log files going as far back as April 06. How can I force the recovery so that the file can be brought back online? Will doing this cause any further problems?
    Many thanks
    Paul

    Thanks for the reply.
    We have a table named mlm_papertrail which, after about 2000 rows were added to it, now causes the error shown whenever someone tries to insert a new row. I have created a new table and pointed the application to it, and the problem no longer occurs.
    However, the SYSAUX tablespace is marked RECOVER, as is the datafile that forms it. I can only assume that adding the rows to the table caused SYSAUX to expand until it hit a part of the disk that is damaged or corrupt.
    I've tried recovering both the tablespace and the datafile, and I get the request for the archivelog back in 04/06. Again, I can only assume the SYSAUX datafile was damaged at that point, but until now it hasn't caused any problems.
    I don't want to roll the whole database back to 04/06 since in the main it works fine. But obviously the SYSAUX tablespace needs recovering.
    My question is how do I do that with as little pain as possible :-)
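    To see exactly how far back the recovery would have to go before deciding on an approach, the standard dictionary views (not quoted in the replies above) give the starting SCN and time:
    SQL> select file#, online_status, error, change#, time from v$recover_file;
    SQL> select file#, checkpoint_change#, checkpoint_time from v$datafile_header where file# = 3;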
    Anyway, thanks for the advice. I'll try what you suggest.
    regards
    paul

  • Sysaux datafile recovery

    Dear all,
    10gR2 on Windows 2003.
    The SYSAUX datafile is corrupted in our environment and I don't have the necessary archive logs to recover the datafile. I am planning to create a new database and migrate the data there. I tried:
    a) A full DB export and a schema-level export; both fail with the error below:
    EXP-00056: ORACLE error 376 encountered
    ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'S:\ORACLE\PRODUCT\10.2.0\ORADATA\BICCO\SYSAUX01.DBF'
    EXP-00000: Export terminated unsuccessfully
    b) All the data is in a single tablespace called USERS. When I try to export that tablespace (as a transportable tablespace) I get the error below:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    Note: table data (rows) will not be exported
    About to export transportable tablespace metadata...
    EXP-00008: ORACLE error 1001 encountered
    ORA-01001: invalid cursor
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 899
    ORA-06512: at "SYS.DBMS_SQL", line 19
    ORA-06512: at "SYS.DBMS_TTS", line 838
    ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'S:\ORACLE\PRODUCT\10.2.0\ORADATA\BICCO\SYSAUX01.DBF'
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1387
    ORA-06512: at line 1
    EXP-00000: Export terminated unsuccessfully
    C:\Documents and Settings\ducadmin>exp transport_tablespace=y tablespaces=USERS file=USERS2_Feb11_4.27.dmp log=USERS2_Feb11_4.27.log statistics=none
    Export: Release 10.1.0.4.2 - Production on Thu Feb 11 16:28:06 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: sys as sysdba
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    Note: table data (rows) will not be exported
    About to export transportable tablespace metadata...
    EXP-00008: ORACLE error 1001 encountered
    ORA-01001: invalid cursor
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 899
    ORA-06512: at "SYS.DBMS_SQL", line 19
    ORA-06512: at "SYS.DBMS_TTS", line 838
    ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'S:\ORACLE\PRODUCT\10.2.0\ORADATA\BICCO\SYSAUX01.DBF'
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1387
    ORA-06512: at line 1
    EXP-00000: Export terminated unsuccessfully
    How do I proceed in this situation? Copying the data manually to the other database is hectic as the data is huge.
    Please advise.
    Kai

    Dear Mark,
    I am not able to export at schema level, as it is failing with the error below:
    S:\Backup\g_BACKUP\feb13>exp system/manager@BICCO file=Feb13_c012006.dmp owner=c012006 statistics=none rows=Y constraints=Y
    Export: Release 10.1.0.4.2 - Production on Sat Feb 13 11:57:42 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user C012006
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user C012006
    About to export C012006's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    EXP-00056: ORACLE error 376 encountered
    ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'S:\ORACLE\PRODUCT\10.2.0\ORADATA\BICCO\SYSAUX01.DBF'
    EXP-00000: Export terminated unsuccessfully
    Kai

  • Sysaux datafile was corrupted

    Hi, I'm new to these forums...
    The SYSAUX datafile of one of my learning instances was corrupted.
    How do I recover it?
    Is point-in-time recovery possible?

    Hi all,
    What if there is no backup of SYSAUX?
    There is a corrupted block in the SYSAUX datafile.
    Because of this, I cannot export, drop, or do anything.
    So, I've tried to use:
    run {
       set newname for datafile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\DEVDB\SYSAUX01.DBF'
       to 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\DEVDB\DATA\SYSAUX01.DBF';
       restore (tablespace sysaux);
       switch datafile 3;
       recover tablespace sysaux;
    }
    But with this I got the error:
    RMAN-06054: media recovery requesting unknown log: thread 1 seq 2874 lowscn 47185106
    So I´ve tried:
    BLOCKRECOVER DATAFILE 3 BLOCK 50139;
    But with this I got the error:
    RMAN-03002: failure of blockrecover command at 04/29/2008 16:04:30
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    Is there any other way to restore and recover this?
    I would like at least to be able to export, but I get an error there too. :-(
    thanks!!!!
    Note: I have backups of all the database files, but when I took the backups I had taken the tablespace offline, because this datafile was corrupted.

  • Restore & Recover SYSAUX datafile while DB up

    Hi,
    Due to data block corruption in SYSAUX, I need to take the datafile offline, then restore and recover it and bring it back online:
    sql 'alter database datafile 2 offline';
    restore datafile 2;
    recover datafile 2;
    sql 'alter database datafile 2 online';
    Is it possible to do this operation while the database is up, or do we need to take it down?
    BANNER
    Oracle Database 11g Release 11.1.0.6.0 - 64bit Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE 11.1.0.6.0 Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    Thanks
    Edited by: Nadvi on Jun 10, 2010 10:02 PM
    Edited by: Nadvi on Jun 10, 2010 10:28 PM

    Nadvi wrote:
    Hi,
    Due to data block corruption in SYSAUX, I need to take the datafile offline, then restore and recover it and bring it back online:
    sql 'alter database datafile 2 offline';
    restore datafile 2;
    recover datafile 2;
    sql 'alter database datafile 2 online';
    Is it possible to do this operation while the database is up, or do we need to take it down?
    BANNER
    Oracle Database 11g Release 11.1.0.6.0 - 64bit Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE 11.1.0.6.0 Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    Thanks
    Dear Nadvi,
    Why don't you perform Block Media Recovery on the corrupted data blocks using RMAN?
    In the following example I show how you can recover a corrupted data block in the SYSAUX tablespace. However, as you are using Oracle 11g, you will use the following syntax:
    RECOVER DATAFILE datafile_no BLOCK block_no
    Here's the example:
    SQL> create user test identified by test;
    User created.
    SQL> grant dba to test;
    Grant succeeded.
    SQL> conn test/test
    Connected.
    SQL> create table tbl_test (name varchar2(10)) tablespace sysaux;
    Table created.
    SQL> insert into tbl_test values('test');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> SELECT header_block FROM dba_segments WHERE segment_name='TBL_TEST';
    HEADER_BLOCK
            2939
    SQL> select tablespace_name from dba_segments where segment_name='TBL_TEST';
    TABLESPACE_NAME
    SYSAUX
    RMAN> backup database plus archivelog delete input;
    [oracle@localhost ~]$ dd of=/u01/oracle/product/10.2.0/db_1/oradata/db1/sysaux01.dbf bs=8192 conv=notrunc seek=2939 <<EOF
    corruption
    EOF
    0+1 records in
    0+1 records out
    11 bytes (11 B) copied, 0.0182719 seconds, 0.6 kB/s
    SQL> alter system flush buffer_cache;
    System altered.
    SQL> conn test/test
    Connected.
    SQL> select * from tbl_test;
    select * from tbl_test
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 3, block # 2939)
    ORA-01110: data file 3:
    '/u01/oracle/product/10.2.0/db_1/oradata/db1/sysaux01.dbf'
    RMAN> blockrecover datafile 3 block 2939;
    starting media recovery
    media recovery complete, elapsed time: 00:00:07
    Finished blockrecover at 10-JUN-10
    RMAN> exit
    SQL> conn test/test
    Connected.
    SQL> select * from tbl_test;
    NAME
    test
    For more information on Block Media Recovery, you can watch my video tutorial:
    http://kamranagayev.wordpress.com/2010/03/18/rman-video-tutorial-series-performing-block-media-recovery-with-rman/

  • Possible scenarios with XI, SAP AutoId, RFID combination.

    Hi All,
    Could you please provide answers to the questions below:
    1. What are all the possible scenarios we can implement with SAP AutoId using RFID data?
    2. I would like to implement scenarios both with and without SAP NetWeaver XI for the above case. Please provide some scenario examples.
    Regards
    Sara

    Hi Sara,
    The possible scenarios with SAP Auto ID are many. You can find detailed documentation on the processes supported by SAP Auto ID at the following link:
    http://help.sap.com/saphelp_autoid70/helpdata/en/5f/064f36300247b686bf0233454dbeb1/frameset.htm
    Without using XI, you will not be able to integrate with SAP ERP.
    So the only process that's possible without XI in SAP Auto ID is the Slap and Ship process. The same process is also explained in the above link.
    I hope this helps. Please revert if you have any other doubts.
