Locked datafiles

Hello,
I have a problem: when I attempt to open a database, I get the following error message:
ORA-27086: unable to lock file - already in use
Linux-x86_64 Error: 5: Input/output error
I've removed the lk<sid>.ora files in $ORACLE_HOME/dbs.
Is there anything else I can do to unlock these files?
I know how this problem started: it was after a database rename. However, the database was working fine right up until now.

1) Restart the NFS server:
/etc/init.d/nfs restart
All locks will be freed.
2) If 1) doesn't work, use this script to find out which sessions are holding locks:
select s.username, s.sid, s.serial#,
       s.osuser, k.ctime, o.object_name object, k.kaddr,
       case l.locked_mode
            when 1 then 'No Lock'
            when 2 then 'Row Share'
            when 3 then 'Row Exclusive'
            when 4 then 'Shared Table'
            when 5 then 'Shared Row Exclusive'
            when 6 then 'Exclusive'
       end locked_mode,
       case
            when k.type = 'AE' then 'Application Edition'
            when k.type = 'BL' then 'Buffer Cache Management (PCM lock)'
            when k.type = 'CF' then 'Controlfile Transaction'
            when k.type = 'CI' then 'Cross Instance Call'
            when k.type = 'CU' then 'Bind Enqueue'
            when k.type = 'DF' then 'Data File'
            when k.type = 'DL' then 'Direct Loader'
            when k.type = 'DM' then 'Database Mount'
            when k.type = 'DR' then 'Distributed Recovery'
            when k.type = 'DX' then 'Distributed Transaction'
            when k.type = 'FS' then 'File Set'
            when k.type = 'IN' then 'Instance Number'
            when k.type = 'IR' then 'Instance Recovery'
            when k.type = 'IS' then 'Instance State'
            when k.type = 'IV' then 'Library Cache Invalidation'
            when k.type = 'JQ' then 'Job Queue'
            when k.type = 'KK' then 'Redo Log Kick'
            when k.type like 'L%' then 'Library Cache Lock'
            when k.type = 'MM' then 'Mount Definition'
            when k.type = 'MR' then 'Media Recovery'
            when k.type like 'N%' then 'Library Cache Pin'
            when k.type = 'PF' then 'Password File'
            when k.type = 'PI' then 'Parallel Slaves'
            when k.type = 'PR' then 'Process Startup'
            when k.type = 'PS' then 'Parallel slave Synchronization'
            when k.type like 'Q%' then 'Row Cache Lock'
            when k.type = 'RT' then 'Redo Thread'
            when k.type = 'SC' then 'System Commit number'
            when k.type = 'SM' then 'SMON synchronization'
            when k.type = 'SN' then 'Sequence Number'
            when k.type = 'SQ' then 'Sequence Enqueue'
            when k.type = 'SR' then 'Synchronous Replication'
            when k.type = 'SS' then 'Sort Segment'
            when k.type = 'ST' then 'Space Management Transaction'
            when k.type = 'SV' then 'Sequence Number Value'
            when k.type = 'TA' then 'Transaction Recovery'
            when k.type = 'TM' then 'DML Enqueue'
            when k.type = 'TS' then 'Table Space (or Temporary Segment)'
            when k.type = 'TT' then 'Temporary Table'
            when k.type = 'TX' then 'Transaction'
            when k.type = 'UL' then 'User-defined Locks'
            when k.type = 'UN' then 'User Name'
            when k.type = 'US' then 'Undo segment Serialization'
            when k.type = 'WL' then 'Writing redo Log'
            when k.type = 'XA' then 'Instance Attribute Lock'
            when k.type = 'XI' then 'Instance Registration Lock'
       end type
  from v$session s, sys.v_$_lock c, sys.v_$locked_object l,
       dba_objects o, sys.v_$lock k
 where o.object_id = l.object_id
   and l.session_id = s.sid
   and k.sid = s.sid
   and s.saddr = c.saddr
   and k.kaddr = c.kaddr
   and k.lmode = l.locked_mode
   and k.lmode = c.lmode
   and k.request = c.request
 order by object;
Then kill the blocking sessions to unblock your process.
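Once the query has identified the blocking session, it can be terminated with ALTER SYSTEM. A minimal sketch (the sid,serial# pair below is a placeholder; substitute the values the query returns):

```sql
-- Substitute the SID and SERIAL# reported for the blocking session;
-- '57,1234' is only a placeholder.
ALTER SYSTEM KILL SESSION '57,1234' IMMEDIATE;
```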

Similar Messages

  • Locking datafile problem using oracle agent

    Hi All
    we are facing datafile locking problems when we take backups using the Oracle agent.
    In Veritas we have selected the option not to lock files when taking backups.
    Is there any other setting we have missed, so that it does not lock files?
    regards
    kedar

    If you use a version locking policy, you should use a version field, not reuse your primary key for that. If you map the version field (you don't have to), you need to map it as read only. This makes sense: you don't want the application to change this version field, only TopLink should change it. This is where your exception comes from.
    If you want to use existing database fields for optimistic locking, you should use a field locking policy. It does not make sense to use the primary key for that: it never changes, so you never know when the object has been changed by another user.
    So you can do two things now to fix your code:
    create a version column in your database and use a version locking policy (preferably numeric), or use a field locking policy on existing fields (e.g. job and salary, or all fields).
    There is a pretty good description of locking policies in the TopLink guide:
    http://www.oracle.com/technology/products/ias/toplink/doc/10131/main/_html/descun008.htm#CIHCFEIB
    Hope this helps,
    Lonneke

  • ORA-01157 cannot identify/lock datafile string - see DBWR trace file

    Hi,
    ORA-01157: cannot identify/lock datafile string - see DBWR trace file
    ORA-01110 errors are thrown when I try to start up.

    872565 wrote:
    Hi,
    ORA-01157: cannot identify/lock datafile string - see DBWR trace file
    ORA-01110 errors are thrown when I try to start up.
    Did you look up this error on the Internet?
    What does the alert log tell you?
    The Google answer on this is:
    ORA-01157:cannot identify/lock data file string - see DBWR trace file
    Cause:     The background process was either unable to find one of the data files or failed to lock it because the file was already in use. The database will prohibit access to this file but other files will be unaffected. However the first instance to open the database will need to access all online data files. Accompanying error from the operating system describes why the file could not be identified.
    Action:     Have operating system make file available to database. Then either open the database or do ALTER SYSTEM CHECK DATAFILES.
    Edit: It is recommended to also post the database version and OS version, including more information.
    Your post reads like:
    Hi
    My car is not starting. It is smoking from behind and making the weirdest sounds.
    Please advise.
    Cheers
    FJFranken
    Edited by: fjfranken on 19-jul-2011 5:33
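    For the ORA-01157 itself, the documented action can be sketched as follows (run as SYSDBA with the database mounted; the file number is whatever the error message reports):

    ```sql
    -- List the datafiles the instance cannot identify or lock, with the
    -- underlying error for each.
    SELECT file#, error FROM v$recover_file;

    -- After the operating system has made the file available again:
    ALTER SYSTEM CHECK DATAFILES;
    ALTER DATABASE OPEN;
    ```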

  • (V8.1.7) OPS : a summary of PCM locks and non-PCM locks

    Product : ORACLE SERVER
    Date written : 2004-08-13
    (V7.3 ~ V8.1.7) OPS : a summary of PCM locks and non-PCM locks
    =========================================================
    PURPOSE
    In an OPS environment, unlike a single instance, there is such a thing
    as a PCM lock. This note explains how it differs from non-PCM locks
    and how it is allocated, based on Oracle 8i (8.1.7), together with a
    description of the related GC_* parameters.
    In an Oracle OPS environment, how the PCM lock parameters are set has
    a large impact on the performance of the whole system.
    SCOPE
    The Oracle Parallel Server (OPS) option is not supported in the
    Standard Edition of releases 8 through 9i.
    Explanation
    1. Instance Lock
    There are two kinds of instance locks: PCM locks and non-PCM locks.
    PCM (Parallel Cache Management) locks relate to locking blocks in the
    buffer cache; non-PCM locks are all the rest.
    Non-PCM locks come in two kinds: DFS enqueue locks and DFS locks.
    Instance locks carry a very high synchronization cost, and in an OPS
    environment there are far more PCM locks than non-PCM locks.
    Only when the number of PCM locks is set appropriately can the DLM be
    said to be properly configured.
    2. PCM Lock
    1) PCM locks map internally to the lock elements described in X$LE or
    GV$LOCK_ELEMENT, by block class.
    2) State information is stored in a data structure called a lock element.
    3) They are implemented in two ways:
    1:1 or 1:n releasable locks
    1:1 or 1:n fixed locks
    3. Fixed Locking
    1) Oracle7 uses the fixed locking method by default.
    2) An instance lock is assigned to a data file block by applying a
    hashing algorithm to the DBA (data block address).
    3) With fixed locking, locks are assigned to blocks statically at
    instance startup using the hash algorithm.
    4) A fixed lock usually covers several data blocks.
    Each data file is assigned a fixed number of PCM locks, and the number
    of blocks in a datafile determines how many data blocks one PCM lock
    governs.
    4. GC_FILES_TO_LOCKS
    Set this parameter in the initSID.ora file to determine the number of
    PCM locks per datafile. If the parameter is not specified, Oracle 7
    assigns fixed locks based on the hashing algorithm, while from
    Oracle 8 onwards releasable locks are used.
    The GC_DB_LOCKS parameter, which specified the total number of locks
    for an instance, was removed in Oracle 8.
    GC_FILES_TO_LOCKS = "{file_list=lock_count[!blocks][R][each]}:..."
    The syntax is shown above; each element is described below.
    file_list : one datafile, or several specified as a set
    lock_count : the number of PCM locks for the datafiles in file_list
    !blocks : the number of contiguous blocks to cover
    R : the specified locks are releasable
    each : the number of locks allocated to each datafile in file_list
    Example
    1=1000!25R
    1-3=500EACH
    1=300:2=100
    5. Instance Lock : FILE_LOCK view
    A view provided from Oracle 7.3 onwards that shows how many PCM locks
    have been allocated to each datafile.
    select file_id, file_name, ts_name, start_lk, nlocks, blocking
    from file_lock;
    file_id : datafile number
    file_name : datafile name
    ts_name : tablespace name the file belongs to
    start_lk : first lock corresponding to the datafile
    nlocks : number of PCM locks allocated to the datafile
    blocking : number of blocks protected by a PCM lock on the file
    6. 1:1 Releasable Locking
    1) 1:1 releasable locking is the default from Oracle 8 onwards.
    2) Releasable locking assigns PCM locks to blocks dynamically.
    3) Locks are assigned when needed and then released.
    4) Once a block is released, its PCM lock is released as well.
    5) The lock element name changes each time a lock element is reused.
    6) Instance startup takes less time, but more time is required to
    allocate DLM resources at request time.
    7. Non-PCM Locks
    1) Non-PCM locks are allocated dynamically, and there are far fewer of
    them than PCM locks (only 5 ~ 10% of all locks in the system).
    2) There is no initialization parameter that directly controls the
    number of non-PCM locks (except DML_LOCKS).
    3) Non-PCM locks do not protect data blocks; that is the job of the
    PCM locks.
    4) There are many types of non-PCM locks, with roles such as:
    - Control access to data and control files
    - Control library and dictionary caches
    - Provide communication between instances
    5) Parameters that control the space for non-PCM locks:
    DB_BLOCK_BUFFERS, DB_FILES, DML_LOCKS, PARALLEL_MAX_SERVERS,
    PROCESSES, SESSIONS, TRANSACTIONS
    6) Running with DML_LOCKS = 0 is common in OPS. Since block-level
    locking comes first in OPS, frequent table locks are undesirable.
    Set this parameter to a value other than 0 only for operations such as
    CREATE or ALTER; otherwise set it to 0 on both instances. The two
    instances need not use the same value, but if you set it to 0, it must
    be 0 on both sides.
    8. Some tips for allocating PCM locks
    1) Always set GC_FILES_TO_LOCKS
    2) The value for GC_FILES_TO_LOCKS must be the same on all instances
    3) Do not assign locks to undo segment files
    4) No locks needed for temporary/sort blocks
    5) Group read-only objects together and allocate 1 hashed lock to
    the file
    6) Make tablespaces read-only (no PCM locks are used)
    7) Never assign DBA locking to read-only or mostly read only data
    8) If excessive pinging on undo blocks (down converts: X->SSX),
    increase GC_ROLLBACK_LOCKS
    Example
    none
    Reference Documents
    <Note:30508.1>
    <Note:50244.1>
    OPS 8i Administrator Guide.

  • Error in adding datafile in standby database.

    Hi all.
    My Environment is as below:
    Oracle-8.1.7.4.0
    OS-HP Unix-11
    Primary database (only 1): Production
    Standby database: Different Machine but same location (HP box)
    Yesterday I added 2 datafiles to two different tablespaces. I have checked that the files are available on the production box, and one of the files is also available on the standby database.
    When I followed the steps for applying redo logs to the standby manually,
    I got an error.
    SVRMGRL>connect internal
    SVRMGRL>show parameter db_name
    SVRMGRL>recover standby database
    After above step I got the Error:
    ORA-00283: recovery session canceled due to errors
    ORA-01157: cannot identify/lock datafile 24 - see DBWR trace file
    ORA-01110: data file 24: '/location of .dbf file on standby database disk'
    Please let me know in detail because I am new in this field.
    Thanks in advance

    You will have the datafile information in the standby alert log.
    Something like '/u01/app/oracle/product/8174/db/<filename>.dbf'.
    1. Connect as sysdba on the standby database.
    2. alter database create datafile 'Production datafile name' as 'alert log filename';
    Example :
    alter database create datafile '/u01/data/user1.dbf'
    as '/u01/app/oracle/product/8174/db/<filename>.dbf';
    3. alter database recover managed standby database;
    HTH.
    Regards,
    Arun

  • Relocating datafiles on standby database after mount point on standby is full

    Hi,
    We have a physical standby database.
    The datafiles on the primary database are at /oracle/oradata/ and the datafiles on the standby database are also at /oracle/oradata/
    Now we are facing a situation of the mount point getting full on the standby database, so we need to move some tablespaces to another location on the standby.
    Say the old location is /oracle/oradata/ and the new location is /oradata_new/, and the tablespaces to be relocated are say tab1 and tab2.
    Can anybody tell me whether following steps are correct.
    1. Stop managed recovery on standby database
    alter database recover managed standby database cancel;
    2. Shutdown standby database
    shutdown immediate;
    3. Open standby database in mount stage
    startup mount;
    4. Copy the datafiles to the new location, say /oradata_new/, using an OS-level command
    5. Rename the datafiles
    alter database rename file
    '/oracle/oradata/tab1.123451.dbf', '/oracle/oradata/tab1.123452.dbf',
    '/oracle/oradata/tab2.123451.dbf', '/oracle/oradata/tab2.123452.dbf'
    to '/oradata_new/tab1.123451.dbf', '/oradata_new/tab1.123452.dbf',
    '/oradata_new/tab2.123451.dbf', '/oradata_new/tab2.123452.dbf';
    6. Edit the parameter db_file_name_convert
    alter system set db_file_name_convert='/oracle/oradata/tab1','/oradata_new/tab1','/oracle/oradata/tab2','/oradata_new/tab2'
    7. Start managed recovery on the standby database
    alter database recover managed standby database disconnect from session;
    I am a little bit confused about the db_file_name_convert step, as we want to relocate only two tablespaces and not all tablespaces.
    Can we use db_file_name_convert like this, i.e. does it work for only the two tablespaces tab1 and tab2?
    Thanks & Regards
    GirishA

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1010428
    8.3.4 Renaming a Datafile in the Primary Database
    When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent modifications on the standby database because the modifications are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.
    The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the standby database.
    To rename the datafile in the primary database, take the tablespace offline:
    SQL> ALTER TABLESPACE tbs_4 OFFLINE;
    Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to rename the datafile on the primary system:
    % mv /disk1/oracle/oradata/payroll/tbs_4.dbf
    /disk1/oracle/oradata/payroll/tbs_x.dbf
    Rename the datafile in the primary database and bring the tablespace back online:
    SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE
      2> '/disk1/oracle/oradata/payroll/tbs_4.dbf'
      3> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
    SQL> ALTER TABLESPACE tbs_4 ONLINE;
    Connect to the standby database, query the V$ARCHIVED_LOG view to verify all of the archived redo log files are applied, and then stop Redo Apply:
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# APP
    8 YES
    9 YES
    10 YES
    11 YES
    4 rows selected.
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    Shut down the standby database:
    SQL> SHUTDOWN;
    Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:
    % mv /disk1/oracle/oradata/payroll/tbs_4.dbf /disk1/oracle/oradata/payroll/tbs_x.dbf
    Start and mount the standby database:
    SQL> STARTUP MOUNT;
    Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL.
    SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
    2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
    On the standby database, restart Redo Apply:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
    2> DISCONNECT FROM SESSION;
    If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see error messages similar to the following in the alert log:
    ORA-00283: recovery session canceled due to errors
    ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
    ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'
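    As for the db_file_name_convert part of the question, the parameter accepts any number of pattern pairs, so listing only the two relocated paths is fine; any file matching no pattern keeps its name. A sketch (paths are the poster's; note the parameter is static, so it needs SCOPE=SPFILE and a restart rather than an in-memory ALTER SYSTEM):

    ```sql
    -- Two pattern pairs, one per relocated tablespace path; any datafile
    -- that matches neither pattern is left untouched.
    ALTER SYSTEM SET db_file_name_convert =
      '/oracle/oradata/tab1', '/oradata_new/tab1',
      '/oracle/oradata/tab2', '/oradata_new/tab2'
      SCOPE = SPFILE;
    ```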

  • How to move a specific tablespace datafile from one directory to another

    Database: 10.2.0.1
    OS : Generic
    Problem Description : How to move a specific tablespace datafile from one directory to another considering that the database is on Oracle Dataguard setup
    ** Oracle is working on this issue, but in parallel is opening the topic to the Community so that Community members can add their perspective, experience or knowledge. This will further enhance all knowledge bases including My Oracle Support and My Oracle Support Communities **
    Edited by: ram_orcl on 16-Aug-2010 21:21

    Dear ram_orcl,
    Please follow the procedures here;
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1034172
    8.3.4 Renaming a Datafile in the Primary Database
    When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent modifications on the standby database because the modifications are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.
    The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the standby database.
       1.
          To rename the datafile in the primary database, take the tablespace offline:
          SQL> ALTER TABLESPACE tbs_4 OFFLINE;
       2.
          Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to rename the datafile on the primary system:
          % mv /disk1/oracle/oradata/payroll/tbs_4.dbf
          /disk1/oracle/oradata/payroll/tbs_x.dbf
       3.
          Rename the datafile in the primary database and bring the tablespace back online:
          SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE
            2> '/disk1/oracle/oradata/payroll/tbs_4.dbf'
            3> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
          SQL> ALTER TABLESPACE tbs_4 ONLINE;
       4.
          Connect to the standby database, query the V$ARCHIVED_LOG view to verify all of the archived redo log files are applied, and then stop Redo Apply:
          SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
          SEQUENCE# APP
          8 YES
          9 YES
          10 YES
          11 YES
          4 rows selected.
          SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
       5.
          Shut down the standby database:
          SQL> SHUTDOWN;
       6.
          Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:
          % mv /disk1/oracle/oradata/payroll/tbs_4.dbf /disk1/oracle/oradata/payroll/tbs_x.dbf
       7.
          Start and mount the standby database:
          SQL> STARTUP MOUNT;
       8.
          Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL.
          SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
            2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
       9.
          On the standby database, restart Redo Apply:
          SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
            2> DISCONNECT FROM SESSION;
    If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see error messages similar to the following in the alert log:
    ORA-00283: recovery session canceled due to errors
    ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
    ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'
    Hope That Helps.
    Ogan

  • Oracle in Starting (9i)

    Hi
    I deleted a datafile of a tablespace manually (I don't need my database any more, so I deleted a heavy datafile in a tablespace). Now my database is not connecting and is giving the following errors.
    Command prompt
    ORA-01157: cannot identify/lock datafile 11 - see DBWR trace file.
    ORA-01110: data file 11: 'file path and name'
    OEM
    ORA-12560: TNS protocol adapter error.
    Note: I don't want my data back at all. I need Oracle for a new database. Can someone please tell me how to start Oracle now? Also, can someone tell me how to cleanly uninstall my entire old database?
    Wishes
    Jawad

    Hi
    I delete a datafile of a tablespace manually (I don't
    need my database any more so delete a heavy datafile
    in a tablespace). No my database is not connecting.
    And giving the following errors.
    To drop a database that you don't need anymore, you don't just go and delete a datafile. What you did is like removing a piston from the engine because you don't need the car anymore.
    To drop a database properly, use the Database Configuration Assistant found under
    start/programs/%ORACLE_HOME%/Configuration and Migration Tools
    Use this tool to drop your database and start with a clean database creation.
    Tony Garabedian

  • Errors-Ora-01157,ORA-01110,ORA-01033

    HI all,
    I am unable to log in to the database. I will explain clearly what has happened. My /home directory was 100% full, so I was unable to log in. What I did: I did not delete
    the archive files, but I moved them to another directory, /ora_bkp. Then I was able to log in. Today I rebooted the system and later started the database and the listener, but when I try to log in
    through SQL*Plus or Toad I get this error: ORA-01033 (Oracle initialization or shutdown in progress). And when I check from the SQL prompt I get this
    error: ORA-01157 (cannot identify/lock datafile 6 - see DBWR trace file) and ORA-01110 (data file 6: '/SAPDATA/SPO'...
    How should I rectify my problem? As I am very new, I am trying to explain clearly. I have even tried ALTER DATABASE OPEN and I get ORA-01157 and ORA-01110 errors. What is my problem and how should I rectify it?
    THANKS,
    MIKE

    Well, did you check, for instance, your alert.log and other dumps?
    They will be on your server in the bdump, udump, cdump directories:
    Where to Find Files for Analyzing Errors
    Oracle records information about important events that occur in your Oracle RAC environment in trace files. The trace files for Oracle RAC are the same as those in single-instance Oracle databases. As a best practice, monitor and back up trace files regularly for all instances to preserve their content for future troubleshooting.
    Information about ORA-600 errors appear in the alert_SID.log file for each instance where SID is the instance identifier. For troubleshooting, you may need to also provide files from the following bdump locations:
    * $ORACLE_HOME/admin/db_name/bdump on UNIX-based systems
    * %ORACLE_HOME%\admin\db_name\bdump on Windows-based systems
    Some files may also be in the udump directory.
    In addition, the directory cdmp_timestamp contains in-memory traces of Oracle RAC instance failure information. This directory is located in ORACLE_HOME/admin/db_name/bdump/cdmp_timestamp, where timestamp is the time at which the error occurred.
    Trace dump files are stored under the cdmp directory. Oracle creates these files for each process that is connected to the instance. The naming convention for the trace dump files is the same as for trace files, but with .trw as the file extension.
    (http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/appsupport.htm#RACAD512)
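    When the alert log confirms the datafile is really gone and, as in this thread and the 9i thread above, the data is expendable, a common way past ORA-01157 is to offline-drop the file and open the database. A sketch only (the file number comes from the poster's error message; OFFLINE DROP is destructive, and the affected tablespace should be dropped afterwards):

    ```sql
    -- Run as SYSDBA with the database mounted. Datafile 6 is the file
    -- number from the ORA-01157 message.
    ALTER DATABASE DATAFILE 6 OFFLINE DROP;
    ALTER DATABASE OPEN;
    -- Then drop the now-unusable tablespace, e.g.:
    -- DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS;
    ```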

  • ORACLE PARALLEL SERVER (OPS)

    Product : ORACLE SERVER
    Date written : 2004-08-13
    ORACLE PARALLEL SERVER (OPS)
    ==============================
    PURPOSE
    The following describes the architecture of OPS (Oracle Parallel Server).
    SCOPE
    In Standard Edition, the Real Application Clusters feature is
    supported from 10g (10.1.0) onwards.
    Explanation
    1. Parallel Server Architecture
    OPS is a multiprocessing configuration in which many users access one
    database simultaneously through each node, using the DLM (PCM) that is
    provided for smooth resource sharing among a number of loosely coupled
    systems, in order to maximize system availability and overall
    performance.
    (1) Loosely Coupled System
    A shared-disk architecture for sharing resources such as data files
    and print queues among systems, in contrast to tightly coupled systems
    such as SMP machines; information transfer between nodes uses a common
    high-speed bus.
    (2) Distributed Lock Manager (DLM)
    Software that coordinates and manages resource sharing in a loosely
    coupled system. When applications request access to the same resource
    at the same time, it keeps them synchronized with each other and
    ensures that no conflict occurs.
    The main services of the DLM are:
    - Maintaining the current "ownership" of resources
    - Accepting resource requests from application processes
    - Notifying a process when the resource it requested becomes available
    - Granting exclusive access to a resource
    (3) Parallel Cache Management (PCM)
    The distributed lock allocated to manage one or more data blocks in a
    data file is called a PCM lock. For an instance to access a particular
    resource, it must become the "owner" of the master copy of that
    resource, which in turn means becoming the owner of the distributed
    lock covering it. This ownership is retained until another instance
    requests an update of the same data block, or of another data block
    covered by the same PCM lock.
    Before ownership migrates from one instance to another, the changed
    data blocks are always written to disk, so cache coherency between the
    instances on each node is strictly guaranteed.
    2. Characteristics of Parallel Server
    - An Oracle instance can be started on each node in the network
    - Each instance consists of an SGA plus background processes
    - All instances share the control file and the datafiles
    - Each instance keeps its own redo log
    - The control file, datafiles and redo log files reside on one or
    more disks
    - Many users can run transactions through each instance
    - Row locking mode is preserved
    3. Tuning Focus
    In an OPS environment, where the resources of one database are used
    simultaneously through different nodes, lock management between
    instances is unavoidable if data consistency and continuity are to be
    maintained. That is, to minimize overhead such as the migration of
    resource "ownership" between instances (the pinging phenomenon)
    mentioned above, efficient application partitioning (distributing the
    jobs) is the most important practical factor.
    In other words, cross-access to the same resource through different
    nodes must be minimized.
    The following are tuning points at the database structure level in an
    OPS environment: the GC (Global Constant) parameters related to PCM
    locks, the options to apply to storage, and other necessary items.
    (1) Initial Parameters
    The GC (Global Constant) parameters that define the PCM locks in an
    OPS environment have a decisive influence on lock management and must
    be set to the same value on every node (except gc_lck_procs).
    On a typical UNIX system, the total number of PCM locks defined by the
    GC parameters can be set within the range of the "Number of Resources"
    in the DLM configuration provided by the system.
    - gc_db_locks
    The parameter defining the total number of PCM locks (distributed
    locks); it must be larger than the sum of the locks defined in the
    gc_file_to_locks parameter.
    If it is set too small, each PCM lock manages relatively many data
    blocks, so the likelihood of pinging (false pinging) grows
    accordingly, and the resulting overhead can degrade system performance
    markedly. Therefore set it to the largest value possible.
    - False Pinging
    Because one PCM lock manages many data blocks, pinging can occur due
    to another block under the same PCM lock rather than the block itself;
    this is called "false pinging".
    The pinging count per database object can be checked as follows, and
    cases where sum(xnc) > 5 (V$PING) deserve particular attention.
    - gc_file_to_locks
    Ultimately, the total PCM locks defined in gc_db_locks are distributed
    across the datafiles; this parameter allocates the total locks to each
    datafile appropriately, based on the administrator's analysis.
    That analysis should include details such as the nature of the objects
    in each datafile, the transaction types and the access frequency, and
    above all an appropriate and efficient distribution of data blocks
    relative to the total PCM locks.
    After distributing the PCM locks per datafile with this parameter, the
    status can be checked in the following fixed tables.
    Sample : gc_db_locks = 1000
    gc_file_to_locks = "1=500:5=200"
    X$KCLFI ----> check the defined buckets
    Fileno  Bucket
    1       1
    2       0
    3       0
    4       0
    5       2
    X$KCLFH ----> check the locks allocated per bucket
    Bucket  Locks  Grouping  Start
    0       300    1         0
    1       500    1         300
    2       200    1         800
    The total PCM locks per datafile defined in gc_files_to_locks cannot,
    of course, exceed the range of gc_db_locks.
    The following statement shows the number of data blocks allocated to
    each datafile.
    select e.file_id id, f.file_name name, sum(e.blocks) allocated,
           f.blocks "file size"
    from dba_extents e, dba_data_files f
    where e.file_id = f.file_id
    group by e.file_id, f.file_name, f.blocks
    order by e.file_id;
    - gc_rollback_segments
    Defines the total number of rollback segments (those defined in
    rollback_segments in init.ora) created in the instance on each OPS
    node.
    Several instances can share rollback segments, but in an OPS
    environment the resulting contention overhead is enormous, so each
    instance must have its own rollback segments, and the same name cannot
    be used across instances.
    select count(*) from dba_rollback_segs
    where status='ONLINE';
    Set the parameter to at least the value returned above.
    - gc_rollback_locks
    Defines the number of distributed locks given to the rollback segment
    blocks that are changed concurrently within one rollback segment.
    Total# of RBS Locks = gc_rollback_locks * (gc_rollback_segments+1)
    The "1" added above accounts for the system rollback segment.
    Define an appropriate number of locks relative to the total number of
    rollback segment blocks.
    The following statement shows the number of blocks allocated to the
    rollback segments.
    select s.segment_name name, sum(r.blocks) blocks
    from dba_segments s, dba_extents r
    where s.segment_name = r.segment_name
    and s.segment_type = 'ROLLBACK'
    group by s.segment_name;
    - gc_save_rollback_locks
    A tablespace can be taken offline even while some transaction is
    accessing an object inside it. Undo generated after the tablespace
    goes offline is recorded and kept in the "deferred rollback segments"
    in the SYSTEM tablespace, preserving read consistency. This parameter
    defines the locks allocated to the deferred rollback segments created
    at that time.
    In general a value about equal to gc_rollback_locks is sufficient.
    - gc_segments
    Defines the locks covering all segment header blocks. If this value is
    too small, the likelihood of pinging again grows accordingly, so set
    it to at least the number of segments defined in the database.
    select count(*) from dba_segments
    where segment_type in ('INDEX','TABLE','CLUSTER');
    - gc_tablespaces
    Defines the maximum number of tablespaces that can be switched from
    offline to online, or from online to offline, at the same time in an
    OPS environment. To be safe, set it to the number of tablespaces
    defined in the database.
    select count(*) from dba_tablespaces;
    - gc_lck_procs
    Sets the number of background lock processes; up to 10 can be
    configured (LCK0-LCK9). One is configured by default, but increase the
    number as needed.
    (2) Storage Options
    - Free Lists
    A free list is a list of available free blocks. When an insert or an
    update needs newly available space, the database first searches the
    common pool of blocks holding the free-space list and related
    information; if enough blocks cannot be secured, Oracle allocates a
    new extent.
    When many transactions hit the same object concurrently, multiple
    free lists reduce contention for free space accordingly. Increasing
    the number of free lists to suit the object's characteristics and
    access type can therefore pay off significantly. For example, for
    objects with frequent inserts or row-lengthening updates, raise it to
    around 3 - 5 depending on access frequency.
    - Freelist Groups
    Defines the number of freelist groups; in an OPS environment this is
    typically set to the number of instances. By assigning an object's
    extents to a specific instance and maintaining freelist groups for
    that instance, free lists can also be managed per instance.
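    As a sketch, both storage options above are set in the STORAGE clause
    when the object is created; the table name and values below are
    made-up examples, not recommendations:

```sql
-- Hypothetical example: 4 free lists, one freelist group per OPS instance
-- (assuming a 2-instance cluster)
create table orders_demo (
    order_id   number,
    order_data varchar2(100)
)
storage (freelists 4 freelist groups 2);
```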
    (3) Miscellaneous
    - Initrans
    The initial number of transaction entries needed for concurrent
    access to a data block; 23 bytes of space are pre-allocated per
    entry. The default is "1" for tables and "2" for indexes and
    clusters. For very heavily accessed objects, set it appropriately
    with concurrent transactions in mind.
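    For a concrete, hypothetical example of raising INITRANS above the
    default on a hot table (the name and value are illustrative only):

```sql
-- Hypothetical: pre-allocate 5 transaction entries per block (5 * 23 bytes)
create table hot_table_demo (
    id  number,
    val varchar2(30)
)
initrans 5;
```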
    4. Application Partitioning
    This is the most important part of OPS application design. The basic
    principles of partitioning are as follows:
    . Separate read-intensive data from write-intensive data into
    different tablespaces.
    . Whenever possible, run each application on only one node. This
    amounts to partitioning the data accessed by the different
    applications.
    . Allocate temporary tablespaces per node.
    . Maintain rollback segments independently per node.
    5. Backup & Recovery
    OPS sites generally run online 24 * 365, so the database as a whole
    must be operated in archive log mode with hot backups, and the key
    question is how quickly and completely the database can be recovered
    when a failure occurs.
    All general backup and recovery concepts are the same as in an
    exclusive-mode database environment.
    (1) Backup
    - Hot Backup Internals
    With the database running normally in archive mode, online datafiles
    are backed up one tablespace at a time.
    When alter tablespace ... begin backup is issued, a checkpoint is
    triggered on the datafiles making up that tablespace: dirty buffers
    in memory are written to those datafiles on disk, and at the same
    time the checkpoint SCN is updated in the header of every datafile in
    hot backup mode. That SCN becomes an important reference point during
    recovery.
    Until alter tablespace ... end backup is issued, i.e. while the hot
    backup is in progress, the backup copies of those datafiles are
    "fuzzy", and whenever a record is modified the whole block is written
    to the redo log, so you will see extra archive files being generated.
    Consequently, if the administrator backs up all the datafiles but
    never issues end backup, overall system performance can be seriously
    affected, so take particular care.
    Whether a hot backup is in progress can be checked with:
    select * from v$backup; -> check status
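    To see the affected datafiles by name rather than by file number, a
    query along these lines, joining v$backup to v$datafile, can be used
    (a sketch; status 'ACTIVE' marks files still in hot backup mode):

```sql
-- Datafiles currently in hot backup mode
select d.name, b.status, b.time
from v$backup b, v$datafile d
where b.file# = d.file#
and b.status = 'ACTIVE';
```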
    - Hot Backup Step (Recommended)
    ① alter system archive log current
    ② alter tablespace tablespacename begin backup
    ③ backup the datafiles, control files, redo log files
    ④ alter tablespace tablespacename end backup
    ⑤ alter database backup controlfile to 'filespec'
    ⑥ alter database backup controlfile to trace noresetlogs (safety)
    ⑦ alter system archive log current
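    Assuming, for illustration, a single tablespace named USERS and an
    OS-level copy done outside SQL*Plus, the steps above might look like
    this (all names and paths are hypothetical):

```sql
alter system archive log current;
alter tablespace users begin backup;
-- copy the USERS datafiles at the OS level here (cp, tar, backup tool)
alter tablespace users end backup;
alter database backup controlfile to '/backup/control.bkp';
alter database backup controlfile to trace noresetlogs;
alter system archive log current;
```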
    (2) Recovery
    - On Instance Failure
    In an OPS environment, when one instance fails, SMON on another
    instance detects it immediately and automatically performs recovery
    for the failed instance, using the redo log entries and rollback
    images the failed instance generated. In a multi-node failure, the
    next instance to open takes over that role. Recovery of all online
    datafiles the failed instance was accessing proceeds alongside this;
    if, as part of that process, datafile verification fails and instance
    recovery cannot complete, run the following SQL statement:
    alter system check datafiles global;
    - On Media Failure
    When a media failure occurs, in whatever form, restore the backup
    copy and then perform complete or incomplete media recovery; this is
    no different from operating the database in exclusive mode.
    All archived log files generated per node, i.e. per thread, are of
    course required, and the recovery can be performed from any of the
    OPS nodes.
    - Parallel Recovery
    For instance or media failure, Oracle 7.1 and later support parallel
    recovery at the instance level (init.ora) or at the command level
    (the RECOVER command). Multiple recovery processes can read the redo
    log files concurrently and apply the after images to the datafiles.
    The RECOVERY_PARALLELISM parameter sets the number of concurrent
    recovery processes and cannot exceed the value of
    PARALLEL_MAX_SERVERS.
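    At the instance level this is just an init.ora setting; for example
    (the values below are illustrative, not a recommendation):

```
# init.ora - parallel recovery sketch
recovery_parallelism = 4    # must not exceed parallel_max_servers
parallel_max_servers = 8
```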
    (3) Errors That May Occur in Operation
    - ORA-1187
    ORA-1187: cannot read from file <name> because it failed
    verification tests.
    (Situation) A tablespace was created on one node and operated
    normally; ORA-1187 was raised when a specific object was accessed
    from another node.
    (Cause) The owner, group, and mode of the raw disk were changed on
    the other node after the tablespace had been created.
    (An administrator fault.)
    (Action) SQL> alter system check datafiles global;

    hal lavender wrote:
    Hi,
    I am trying to achieve Load Balancing & Failover of Database requests to two of the nodes in 8i OPS.
    Both the nodes are located in the same data center.
    Here is the config of one of the connection pools:
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db1Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.160:1421:dbinst01" />
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db2Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.161:1421:dbinst01" />
    <JDBCMultiPool AlgorithmType="Load-Balancing" Name="pooledConnection598011"
    PoolList="db1Connection598011,db2Connection598011" Targets="ngusCluster12,ngusCluster34" />
    Please let me know if you need further information.
    Hal

    Hi Hal. All that seems fine, as it should be. Tell me how you
    enact a failure so that you'd expect one pool to still be good
    when the other is bad.
    thanks,
    Joe

  • ORA-01157, ORA-01110

    Hi all,
    An Oracle 8i database on RH 7.2 went down on me. When I attempt to start it, I get these errors:
    ORA-01157: cannot identify/lock datafile 27 - see DBWR trace file
    ORA-01110: datafile '/usr3/etc/test.dbf'
    Can someone help please? I am a newbie, so please treat me like a 5-year-old. Thank you in advance.
    Sara

    Hi,
    Looks like datafile 27 (you can get its name from v$datafile) was locked
    while Oracle was coming up. You cannot access the data in that datafile. You may have to recover the database. Please call Oracle Support now.
    Krishna

  • Cold backup tablespace restore

    From a cold backup, can you restore a tablespace to a different database?
    A datafile was created and dropped, and now we are receiving ORA-1186 and ORA-1157, cannot identify/lock datafile.
    We know that if our database goes down it won't come back up. The tablespace has about 250 tables and is huge, about 100 GB in size. Does anyone know what steps need to be taken?
    Edited by: user10767182 on Jan 6, 2009 8:35 PM

    I couldn't quite work out what you were saying, but I think you were suggesting that copies of the lost tables and data are sitting in a second database somewhere, and you would like to pull them out of that database and plug them into the broken database. Is that right?
    If so, you cannot take a datafile from one database and plug it into another, unless you use the transportable tablespace option.
    Basically, on your broken database, you'd shut it down, bring it back to the mount state, and then say alter database datafile X offline drop.
    That will let you issue an alter database open followed by a drop tablespace X, and your broken database will at least be open, minus the important tablespace.
    You then get your second database open and make the important tablespace read-only.
    You'd drop to the command line and do an export using the TRANSPORT_TABLESPACE option - the command is too susceptible to the specifics to show you here. Check the documentation at http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta063.htm
    You then copy the datafile and the export dump file to the server where your broken database is running
    You then run an import, again specifying the TRANSPORT_TABLESPACE option
    Effectively, the datafile copy gets 'plugged in' to the broken database and gets adopted as a native, brand new tablespace, complete with contents. You finish off by making the tablespace read-write in both databases once more.
    Obviously, you lose data using this sort of process: the data comes back into your 'broken' database in the same state it was in your second database, and you can't apply redo to it to recover it to a more recent state. But that's going to be the best you can do if you don't have proper physical backups of the file. Your subject mentioning cold backups confused me a little on that score too.
    So I won't go into any more detail for now. It may be that I misunderstood the reference to 'restore a tablespace to a different database' and your requirements completely. But if this sounds like what you are after, and if you are stuck on any of the details, then you can always post a follow-up.

  • Windows server locking issue on datafiles

    Hi,
    We are running an Oracle database (10.2.0.3) on a Windows 2003 (R2 Standard Edition) server. The issue is that the database is getting shut down due to some errors. When we check the alert.log, we see that some OS process is locking the database files, due to which the database gets shut down every day around 3.15 - 3.45 am.
    The error as per alert.log is as below;
    Errors in file f:\oracle\product\10.2.0\admin\orclint\bdump\orclint_ckpt_7680.trc:
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    When we check the orclint_ckpt_7680.trc file, we have;
    Dump file f:\oracle\product\10.2.0\admin\orclint\bdump\orclint_ckpt_7680.trc
    Sat Nov 19 03:22:15 2011
    ORACLE V10.2.0.3.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows NT Version V5.2 Service Pack 2
    CPU : 2 - type 586, 2 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:475M/2047M, Ph+PgF:934M/3434M, VA:1266M/2047M
    Instance name: orclint
    Redo thread mounted by this instance: 1
    Oracle process number: 7
    Windows thread id: 7680, image: ORACLE.EXE (CKPT)
    *** 2011-11-19 03:22:15.440
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-11-19 03:22:15.409
    *** SESSION ID:(165.1) 2011-11-19 03:22:15.409
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    error 221 detected in background process
    ORA-00221: error on write to control file
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    We have disabled the antivirus and checked; the problem still exists. We need additional help in identifying the OS process that might be locking the datafiles. Also, when we check the Windows Event Log, it shows nothing related to this.
    Please help us with the issue.
    Thanks,
    Rahul

    Try to use the handle tool http://technet.microsoft.com/en-us/sysinternals/bb896655 .
    Example with Oracle XE on Windows 7 with administrator privileges:
    c:\>handle.exe dbf | findstr  SYSTEM
    oracle.exe         pid: 2420   type: File           73C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           744: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: Section        758: \BaseNamedObjects\C:_ORACLEXE_APP_ORACLE_ORADATA_XE_SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           81C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           820: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           844: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           848: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8BC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8C0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8F0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8F4: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           9AC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           9B0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A18: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A1C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A9C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           AA0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           AD8: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           ADC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF

  • Lock during resize datafile

    Dear all,
    During the execution of an "ALTER DATABASE DATAFILE <FILE#> RESIZE <SIZE>" statement, does Oracle put some kind of lock on that datafile or make it unavailable for use (till the resize is complete)? Or can the datafile be used normally during the execution of the "ALTER DATABASE" statement?
    Version : 10.1.0.5
    Windows 2003
    Thanks in advance,
    Regards

    Hi,
    I've personally never experienced issues with datafile resizing, even on systems with thousands of users and many active sessions. You may wish to note, though, that the Oracle 10g documentation for the ALTER DATABASE datafile clause states:
    "You can use any of the following clauses when your instance has the database mounted, open or closed, and the files involved are not in use."
    Cheers,
    Stuart.

  • ORA-27086: unable to lock file over NFS -- but it's NOT Netapp!

    My 10.2 database crashed, and when it came back I got the following error:
    ORA-00202: control file: '/local/opt/oracle/product/10.2.0/dbs/lkFOOBAR'
    ORA-27086: unable to lock file - already in use
    Linux-x86_64 Error: 11: Resource temporarily unavailable
    This is a classic symptom of a Netapp problem, which likes to hold file locks open on NFS mounts. There is a standard procedure for clearing those locks; see, for instance, document 429912.1 on Metalink.
    Unfortunately, my files are mounted on an Isilon, one of Netapp's twisted cousins. I can find no references to "isilon" on Metalink, and we are at a loss how to resolve this.
    My sysadmin assures me that "there are no locks on the Isilon". But I know this cannot be the case, because if I do the following:
    1. delete the lockfile /local/opt/oracle/product/10.2.0/dbs/lkFOOBAR, and then
    2. move my controlfiles aside, and then copy them back into place,
    then the database will mount. However, it will not open, because now all the datafiles have locks.
    Is there anyone with experience in clearing NFS locks? I know this is more of a SA task than DBA, but I am sure my SA has overlooked something.
    Thanks

    New information:
    As stated above, I moved the controlfiles aside and then copied them back into place, like this:
    mv control01.ctl control01-bak.ctl
    cp control01-bak.ctl control01.ctl
    Did that for each controlfile, and then the database mounted.
    But, after rebooting the machine, we discovered that all locks were back in place-- it looks like the system is locking the files on boot-up, and not letting them go. The lock is held by PID 1, which is init.
    sculkget: lock held by PID: 1
    This is definitely looking like a major system issue, and not a DBA issue, and hence I have little right to expect assistance in this forum. But nonetheless I lay my situation out here before you all in case someone else recognizes my problem before the server bursts into flames!
    The system is CentOS 4.5-- not my choice, but that's the way it is.
