Confused by datafile status in v$datafile and v$recover_file

Hi All,
I'm facing a confusing problem in a prod database. I accidentally created a datafile, so I tried to drop it using "alter database datafile '...' offline drop".
But now it shows RECOVER status in v$datafile and OFFLINE status in v$recover_file.
Please advise how to resolve this issue.
Regards,
Yasser.(Oracle DBA)

You add datafiles to a tablespace, and to remove a datafile from the database you generally have to drop the tablespace the file belongs to.
Exactly what actions did you take, in what order, to create this datafile and then try to eliminate it?
What is the Oracle version?
What are the results of the following queries?
select ts#, file#, rfile#, status, enabled
from   v$datafile
where  name = 'x';
select "ONLINE", online_status, error
from   v$recover_file
where  file# = n;
HTH -- Mark D Powell --
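For reference, OFFLINE DROP does not actually remove the file from the database; it only marks it offline in the control file. On 10g and later, if the newly created file contains no segments, a cleaner path is ALTER TABLESPACE ... DROP DATAFILE. A minimal sketch, assuming a hypothetical file number and path and that the file is empty:

```sql
-- Hypothetical example: the accidentally added file belongs to tablespace USERS.
-- First confirm nothing is stored in it (get file_id from v$datafile or dba_data_files):
SELECT COUNT(*) AS extents_in_file
FROM   dba_extents
WHERE  file_id = 7;   -- 7 is an assumed file#

-- If the count is 0, the file can be removed properly (10g and later):
ALTER TABLESPACE users DROP DATAFILE '/u01/oradata/prod/users02.dbf';
```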

Similar Messages

  • Recover database with datafile and logfile

Hi Experts,
We have one MaxDB database. For some reasons we have no backup and have lost the program binaries. Is it possible to recover the database using only the datafiles and logfiles? And how?
Thanks a lot.
    Rongfeng

    Hello Rongfeng,
1. Please see the document "HowTo - Creating a clone of a SAP MaxDB database" at
http://wiki.sdn.sap.com/wiki/display/MaxDB/SAPMaxDBHowTo
and review the section "Creating a clone manually via reusing volumes and parameters."
2. You wrote that you "lost all dba passwords."
Please review SAP note 25591. This note also has a brief description of the database user types.
Since you are an SAP customer, I recommend you create an SAP message for the component "BC-DB-SDB" to clarify more details about the problem and find a solution.
    Thank you and best regards, Natalia Khlopina

  • Confusion with OCFS2 File system for OCR and Voting disk RHEL 5, Oracle11g,

    Dear all,
    I am in the process of installing Oracle 11g 3 Node RAC database
The environment on which I have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 Gb of Storage drive.
    SAN
As of now I am in the process of installing Oracle Clusterware on the 3 nodes.
I have performed these tasks for the cluster install:
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
9. Configure and enable SSH on all Cluster Nodes (User Equivalence)
    10. Install Oracle Cluster File System (OCFS2)
11. Verify the Installation of Oracle Cluster File System (OCFS2)
    12. Configure the OCFS2 (/etc/ocfs2/cluster.conf)
    13. Configure the O2CB Cluster Stack for OCFS2
BUT, after this I am a little bit confused about how to proceed further. The next steps are to format the disks and mount OCFS2, create the software directories, and so forth.
I asked my system admin to provide me two partitions so that I could format them with the OCFS2 file system.
He wrote back to me saying:
    *"Is what you want before I do it??*
    */dev/emcpowera1 is 3GB and formatted OCFS2.*
    */dev/emcpowera2 is 3GB and formatted OCFS2.*
    *Are those big enough for you? If not, I can re-size and re-format them*
    *before I mount them on the servers.*
    *the SAN is shared storage. /dev/emcpowera is one of three LUNs on*
    *the shared storage, and it's 214GB. Right now there are only two*
    *partitions on it- the ones I listed below. I can repartition the LUN any*
    *way you want it.*
    *Where do you want these mounted at:*
    */dev/emcpowera1*
    */dev/emcpowera2*
    *I was thinking if this mounting techique would work like so:*
    *emcpowera1: /u01/shared_config/OCR_config*
    *emcpowera2: /u01/shared_config/voting_disk*
    *Let me know how you'd like them mounted."*
Please recommend what I should convey to him so that I can ask him to do exactly that.
My second question is: as we are using ASM, which I am going to configure after the Clusterware installation, should I install Openfiler?
Please refer to the environment information I provided above and make recommendations.
As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think that install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK

Thanks for your reply, Mufalani.
You have managed to solve half of my query, but I am still stuck on what kind of mount point I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he mentioned?
    Let me put forth few more questions here.
1. Is 280 MB OK for the OCR and voting disks respectively?
2. Should I ask the system admin to create 4 voting disk mount points and two for the OCR?
    3. As mentioned by the system admin.
    */u01/shared_config/OCR_config*
    */u01/shared_config/voting_disk*
    Is this ok for creating the ocr and voting disks?
4. Can I use the OCFS2 file system for formatting the disks instead of using them as raw devices?
5. You mentioned that Openfiler is not needed for configuring ASM. Could you provide links that will guide me in creating the partition disks, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple, but was unable to identify one suitable for my environment.
    Regards
    MK

  • Confusion with purchased TV downloaded to ATV and PC

    Hi -
    I have an ATV G1 through which I've purchased a few TV shows in HD and downloaded them directly to my ATV.
    Now, iTunes on my PC keeps trying to download the same shows (not in HD!)
    to my PC.
    I don't want the shows automatically downloaded to my PC (I can transfer them from my ATV if I want them).
    So, I have TWO questions:
    1) How can I stop iTunes from automatically downloading these shows when I purchase through my ATV?
    2) Why are they downloading in SD, not HD?
    (INFO: ATV G1, iTunes 10.1, Win XP Pro)
    THANKS!

    Winston,
    Thanks for your input. In response to your questions:
    "Not sure why the movie won't sync back to the tv at this stage, could well be an authorisation issue. Have you actually altered your sync options to include this movie."
    Yes, I have used the custom sync option and checked the movie as one I want synced. Still no luck.
    "You don't have to do any further setting up of your tv or itunes to enable streaming if you are already syncing. Anything that is not synced will automatically be streamed to the tv,."
    But if it isn't showing up in ATV, how does that happen?
    "All content for syncing and streaming must be checked along side its name in the library on itunes. Additionally any content you want to sync must be selected in the sync options."
    Yes, I believe I have done that by selecting the movie under the custom sync option. Before that, I used the automatic sync option. Doesn't matter which way I sync. Same result.
    "If you try to set up streaming as you are doing, you may well end up turning syncing off."
    Yes, I figured that out the hard way by erasing everything I synced. I had to do another sync but still the movie I purchased didn't sync the second time I tried.

  • Datafile and size

    I have potentially 400 GB of data to store. There are two options I can think of:
    1) use multiple small datafiles;
    create tablespace TEST
    datafile '/oradata/test.dbf' size 2048M
    extent management local
    2) use single bigfile tablespace.
PRO for 1) is easy backup and recovery; the CON is too many files. For 400 GB I would need about 200 datafiles at 2 GB each.
PRO for 2) is that there is only one datafile to manage; the CON is that the file will be very big, which makes backup and recovery difficult.
    Would you please offer some suggestions or opinion?
    Thanks
    Scott

I understand that with the default 8K blocksize, I can have a datafile as big as 32 GB. For the small tablespace with multiple datafiles option, should I keep the size < 2 GB or < 32 GB?

A little correction is required: those larger limits apply to bigfile tablespaces. The maximum size of a single datafile or tempfile in a bigfile tablespace is 128 terabytes (TB) with 32K blocks and 32 TB with 8K blocks; a smallfile tablespace datafile tops out at about 4 million blocks (32 GB with 8K blocks).
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/limits002.htm#sthref2833
As Mark above and Tom (in the same link) say: "but over time, things change. Today in 2006, I would not have any problem with a file larger than 2gig - the tools have all caught up to the fact that files can and do get larger."
So just start with a smallfile tablespace and add datafiles as and when needed. One more thing: there is no relationship between big/small file tablespaces and performance, because a tablespace is a logical term in the Oracle database. I/O happens against datafiles and the OS, not against tablespaces.
If someone is looking at I/O tuning, then approaching the problem scientifically makes more sense than framing it as a tablespace-versus-performance issue. I have not found any link where Tom Kyte says otherwise, and even the docs are wrong on this issue.
    Regards
    Girish Sharma
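    To avoid pre-creating 200 files up front, a middle ground is to start small and let the files grow. A sketch, assuming a smallfile tablespace with 8K blocks (names and paths below are examples, not from the question):

    ```sql
    -- Start with one modest datafile and let it autoextend toward the
    -- smallfile limit (~32 GB at 8K blocks); add files as the data grows.
    CREATE TABLESPACE test
      DATAFILE '/oradata/test01.dbf' SIZE 2048M
      AUTOEXTEND ON NEXT 512M MAXSIZE 32767M
      EXTENT MANAGEMENT LOCAL;

    ALTER TABLESPACE test
      ADD DATAFILE '/oradata/test02.dbf' SIZE 2048M
      AUTOEXTEND ON NEXT 512M MAXSIZE 32767M;
    ```

    With this approach 400 GB needs on the order of a dozen near-32 GB files rather than 200 files at 2 GB each.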

  • Leave with approval status "Work in progress"

    Hi,
I am trying to delete or complete a leave request with approval status "Work in progress" and there is no way to do it. Can anybody help?
This issue happened when I requested annual leave and the session expired. I logged in again and found the request stuck in the absence summary, and I can't do anything with it, not even delete it.
    snapshot of the issue:
    http://www.2shared.com/photo/B2F8Aysg/work_in_progress_leave.html
    Thanks in advance

    Hi,
Yes, I know it should not be disabled at site level; I did that on a test environment.
I tried to set it at user level but it is not possible.
After I set the profile value to 'Yes', the delete and update columns appeared in the absence summary with a warning message, and I could delete the leave with approval status "Work in progress":
http://www.2shared.com/photo/QqRfjReY/Warning_mass-2.html
But when I try to create a new leave request, it gives this error message:
http://www.2shared.com/photo/LL1sU9xB/Warning_mass-1.html
Can you guide me on how to find the earlier personalization done on the absence page?
Thanks a lot for your help.

  • Confused with HD-SD

    Hi,
All of a sudden I am confused by my searches on this forum and would appreciate clarification.
I am presently working on an HD project (from HD clips and HD stills).
If I want to get the best out of it, should I use only Blu-ray burning, or would it be OK to use an HD setting and burn with DVDSP on a "regular" DL disc (the compressed project will be over 7 GB)?
Should I understand that if I use an HD setting with DL discs, only an HD-compatible DVD player will work?
Will a Blu-ray player play the HD DL disc fine, or will a Blu-ray disc be required in a Blu-ray player?
As you can see, I am just really confused...
    Thanks in advance
    Ivan

    No doubt this stuff can be confusing. Hopefully I'll be able to clear a few things up.
You shot and edited an HD project. Now you would like to create a DVD, correct?
    You have 2 options for DVD
    1. Standard Definition DVD authored in DVDSP.
    2. Blu-Ray DVD. You'll have to purchase a burner.
    If you are on the newest FCS then you can author and burn a Blu-Ray DVD inside of Compressor. If you are on the previous version then you need another application to author and burn Blu-Ray like Toast or Adobe Encore.
    You made mention of HD-DVD. That format is dead and won't play on anything other than your Mac or a couple specific HD-DVD players that haven't been in production for years. If you look in the DVDSP preferences you can choose which kind of DVD to burn SD-DVD or HD-DVD. Choose SD whenever you are using DVDSP.
    Blu-Ray players play Blu-Ray DVD's and SD DVD's. It will not play HD-DVD.
    How long is your project if the compressed (m2v) files are coming in at over 7GB?

  • ASM Disk preparation for Datafiles and FRA in Oracle 10g RAC Inst

    Dear Friends,
Please clarify whether the method below is correct to configure ASM disks for the datafiles and FRA.
Partitions provided by the IT team:
    /dev/sda1 - 150 GB (For +DATA)
    /dev/sda2 - 100 GB (For +FRA)
    OS     : RHEL 5.6 (64 Bit)
    kernel version = 2.6.18-238.el5
    Steps:(Node1)
1) Install the RPMs for ASM
    rpm -Uvh oracleasm-support-2.1.7-1.el5.x86_64.rpm
    rpm -Uvh oracleasm-2.6.18-238.el5-2.0.5-1.el5.x86_64.rpm
    rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
    2) Configure ASM
    /etc/init.d/oracleasm configure
    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]:
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [  OK  ]
    Scanning the system for Oracle ASMLib disks: [  OK  ]
3) Create the ASM disks
    /etc/init.d/oracleasm createdisk DISK1 /dev/sda1
    /etc/init.d/oracleasm createdisk DISK2 /dev/sda2
    4)/etc/init.d/oracleasm status
    5)/etc/init.d/oracleasm scandisks
    6)/etc/init.d/oracleasm listdisks
    7) Nothing to perform on Node2
    8) In dbca choose ASM and map the DISK1 for datafiles and DISK2 for FRA
Please confirm whether the above steps are right; if not, please clarify.
If DBCA -> ASM doesn't discover my disks, what discovery path should I give?
Please refer me to any document / Metalink ID for the above complete process.
Can I have the ASM and Oracle DB binaries in the same home?
    Regards,
    DB

user564706 wrote:
"If DBCA -> ASM doesn't discover my disk then what should be the Discovery path i have to give?"
For ASM disks created with oracleasm, the discovery path is ORCL:*
"Please refer any document / Metalink ID for the above complete process"
http://docs.oracle.com/cd/B19306_01/install.102/b14203/storage.htm#BABIFHAB
"Can i have ASM and oracle DB binary in the same home"
Yes, unless you want job role separation or plan to run multiple versions of Oracle homes.
    Regards,
    DB
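    If the disks still do not show up in DBCA, the discovery string can be checked and set on the ASM instance itself. A sketch, assuming the ASMLib setup above:

    ```sql
    -- On the ASM instance: point discovery at ASMLib-labelled devices.
    ALTER SYSTEM SET asm_diskstring = 'ORCL:*' SCOPE = BOTH;

    -- Verify that DISK1 and DISK2 are now visible as candidates:
    SELECT path, header_status, state
    FROM   v$asm_disk;
    ```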

  • Move datafiles and logfiles (Portal 6.0 and Microsoft SQL Server 2000)

    Hello,
    I have installed EP 6.0 with KMC and TREX. The database is MS SQL Server 2000.
    The datafiles and the logfiles of the portal are respectively located in the directory "C:\Program Files\Microsoft SQL Server\MSSQL\Data" and in "C:\Program Files\Microsoft SQL Server\MSSQL\LOG".
I want to move these files to the D: drive.
How can I do it?
Is the procedure described at http://support.microsoft.com/kb/224071 correct?
    I thank you for your help.
    Regards,
    Anne-Marie
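    The procedure in KB 224071 is the standard detach/attach method. In outline it looks like this (the database name and D:\ paths below are illustrative, not taken from your system):

    ```sql
    -- Detach the portal database (SQL Server 2000 syntax):
    EXEC sp_detach_db 'EP6DB';

    -- Move the .mdf and .ldf files to D:\ at the OS level, then re-attach:
    EXEC sp_attach_db 'EP6DB',
         'D:\Data\EP6DB.mdf',
         'D:\Log\EP6DB_log.ldf';
    ```

    Stop the portal (and anything else using the database) before detaching.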

    Francesc,
    Microsoft Exchange Server Integration in EP6.0
    The Microsoft Exchange server (subsequently called the Exchange server) is integrated using the Microsoft Exchange server transport. This transport integrates the scheduling capabilities of Exchange Server with SAP Enterprise Portal 6.0. The following versions of Microsoft Exchange server can be integrated with the Enterprise Portal.
    · Exchange Server 5.5 SP4
    · Exchange Server 2000
    · Exchange Server 2003
    The transport uses Microsoft Collaborative Data Objects 1.2.1 (subsequently called CDO) to access data from the Exchange server.
You can find more details at:
    http://help.sap.com/saphelp_nw04s/helpdata/en/7a/ec015c8446d447a83776d40f31c84f/frameset.htm
    Regards,
    James

  • Dropping a Datafile and Deleting its References

    Using sqlplus how do I delete datafiles and any references to them?
    At the moment I start the database in mount mode and use:
    ALTER DATABASE DATAFILE 'O:\BCS\test.ora' OFFLINE DROP;
    Which drops the datafile, but when I try to open the database it fails because it is still referencing the datafile.
    How do I delete this reference?
    Any thoughts or suggestions are greatly appreciated.
    Regards
    Toby

    I usually use the following sql to see what is in a tablespace before I drop it with
    "drop tablespace users including contents and datafiles;"
    column meg format 999999999
    column tablespace_name format a20
    set wrap off
    set lines 120
    set heading on
    set pages 50
select round(sum(a.bytes)/1024/1024) meg,
       count(*) extents, a.tablespace_name, a.segment_type,
       a.owner||'.'||a.segment_name
from   dba_extents a
where  a.tablespace_name = upper('&1')
group by a.tablespace_name, a.segment_type, a.owner||'.'||a.segment_name
order by round(sum(a.bytes)/1024/1024);
I also map the contents of datafiles with the following; you can get the file_id from v$datafile or dba_data_files:
    column owner format a30
    column object format a30
    set lines 120
    set wrap off
    set trunc off
    set pages 50
    set feedback on
    column file_id format 999
select /*+ Rule */
       'free space' owner                                  /* "owner" of free space */
,      ' ' object                                          /* blank object name */
,      file_id                                             /* file id for the extent header */
,      block_id                                            /* block id for the extent header */
,      blocks                                              /* length of the extent, in blocks */
from   dba_free_space
where  file_id = &1
union
select /*+ Rule */
       substr(owner,1,20)||' '||substr(segment_type,1,9)   /* owner name (first 20 chars) */
,      substr(segment_name,1,32)||'.'||partition_name      /* segment name */
,      file_id                                             /* file id for the extent header */
,      block_id                                            /* block id for the extent header */
,      blocks                                              /* length of the extent, in blocks */
from   dba_extents
where  file_id = &1
order by 3,4;
    And datafiles can be dropped in 10g if they are empty:
    alter tablespace users drop datafile '/oracle/oradata/users01.dbf';
    Good luck
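    One more check that helps with the original question: before deciding whether to drop the whole tablespace or just the file, confirm which tablespace the file belongs to. A sketch using the path from the question:

    ```sql
    -- Find the owning tablespace and size of the file in question:
    SELECT file_id, tablespace_name, bytes/1024/1024 AS mb
    FROM   dba_data_files
    WHERE  file_name = 'O:\BCS\test.ora';
    ```

    If the dba_extents query above returns no rows for that file_id, the 10g ALTER TABLESPACE ... DROP DATAFILE route applies; otherwise the tablespace drop is the way out.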

  • Install sap through copy /datafile and oracle binaries

Dear Experts,
We have a requirement to build a test system without installing the SAP software, only by copying the Oracle datafiles and Oracle binaries, on Windows 2003.
We have just the Windows 2008 OS on C:\ with 100 GB, and D:\ with 250 GB of empty space.
So please let me know the copy procedure for C:\ and D:\ and which links (directories) are to be copied.
Thanks & Regards

    http://scn.sap.com/people/harsha.bs/blog/2013/04/16/system-copy--backuprestore-method
    Hi Rajendra,
    Now I got your point .
    Please check the above  link and let me know if you are facing issues.
    Thanks,
    Pavan

  • Error Message: The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version. The request failed with HTTP status 404: Not Found.

I have a web page that contains a ReportViewer control.  I am trying to display a report, which is an .rdl file located on the SSRS server, in this ReportViewer control.  I have set the ReportPath and ReportServerUrl correctly.  I am getting an error message.
    Am I suppose to use an .rdlc file rather than a .rdl file?  Does the web server configuration need to use a certain account?
     I am getting the following error message:
    The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version.
    The request failed with HTTP status 404: Not Found.

    Hi bucaroov,
    The error "The request failed with HTTP status 404: Not Found." means the ReportServerURL configured in the ReportViewer control is invalid.
    Please follow these steps to solve the issue:
    Logon the Report Server machine.
    Open the Reporting Services Configuration Manager.
    Copy the Report Server URL from 'Web Services URL'.
Logon to the application server (in this case, the server that hosts the web page) and check whether we can use the URL we got from step 3 to access the Report Server. If so, please replace the ReportServerURL in the ReportViewer control with this URL. If it is not available, could you please post the error message.
Additionally, we don't need to provide the extension for a server report. The ReportPath should be like: /<reports folder>/<report name>
    For more information, please see:
    Walkthrough: Using the ReportViewer Control in Remote Mode:
    http://msdn.microsoft.com/en-us/library/ms251669(VS.80).aspx
    If you have any more questions, please feel free to ask.
    Thanks,
    Jin Chen
    Jin Chen - MSFT

  • Best RAID configuration for storing Datafiles and Redo log files

    Database version:10gR2
    OS version: Solaris
Which is the best RAID level for storing datafiles and redo log files?

    Oracle recommends SAME - Stripe And Mirror Everything.
    In the RAC Starter Kit documentation, they specifically recommend not using RAID5 for things like voting disk and so on.
SAN vendors, on the other hand, claim that their RAID5 implementations are as fast as RAID10. They do have these massive memory caches...
But I would rather err on the safer side. I usually insist on RAID10 - and for those databases that I do not have a vested interest in (other than as the DBA), where owners, developers and management accept RAID5, I put the lead pipe away and do not insist on having it my way. :-)

  • Relation between a datafile and segments.

    Hi All,
    I am trying to understand the storage concepts of oracle. I read that
1. A tablespace is a logical grouping of datafiles.
2. Segments are made up of extents, and extents are composed of DB blocks, which are logical groupings of OS blocks.
But I am unable to establish a relation between datafiles and segments. Is it right that a datafile is logically divided into segments?
Another question I have is: can a segment span multiple datafiles?
    Please help me understand this concept.
    Regards,
    Ravi
    Message was edited by:
    Ravi-2006

Tablespaces and segments are LOGICAL database concepts. Datafiles are a PHYSICAL database concept. A segment can indeed span across multiple datafiles, but a segment can belong to only ONE tablespace. A datafile can belong to only one tablespace, but a tablespace can consist of multiple datafiles.
    Daniel
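    Daniel's point can be verified directly from the data dictionary. A sketch, with an assumed owner and segment name:

    ```sql
    -- Hypothetical example: list the distinct datafiles holding
    -- extents of one segment.
    SELECT DISTINCT file_id
    FROM   dba_extents
    WHERE  owner = 'SCOTT'
    AND    segment_name = 'EMP';
    -- More than one row back means that segment spans multiple datafiles.
    ```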

  • Confusion with Membership and Billing

I have had Creative Cloud (Student and Teacher Edition) installed on my laptop for about a year now. I downloaded it through a free subscription offered on a website, with a redemption code (thanks to an agreement between my college and Adobe).
For the last week or so, every time I turn on my laptop and open Photoshop for the first time, I receive the following message: "We are having trouble verifying your membership. Either you're offline or there's a billing issue with your account. Please go online to manage your account and verify your billing information."
I was in a slight panic and scrambled around for a way to renew my subscription, and I might have messed up in the process. First, I ordered Creative Cloud Student and Teacher Edition with a monthly billing plan on my MasterCard. But the next day, I ordered a new subscription with a new redemption code. Now I have the following on my billing history:
Even after all of this, I still have the same message concerning the status of my membership and billing.
So, what should I do with my orders? Should I cancel my invoice?

    Cancel the paid subscription so you don't have a duplicate
    Cancel http://helpx.adobe.com/x-productkb/policy-pricing/return-cancel-or-change-order.html
    -or by telephone http://helpx.adobe.com/x-productkb/global/phone-support-orders.html
    As for the original problem, This is an open forum with a mix of program users and Adobe staff, not Adobe support... you need Adobe support
    Adobe contact information - http://helpx.adobe.com/contact.html may help
    -Select your product and what you need help with
    -Click on the blue box "Still need help? Contact us"
