ORA-01545: rollback segment 'R01' specified not available
After a disk crash, I can't open the database; I can only mount it, because of error ORA-01545. What can I do to resolve this? When I looked at V$DATAFILE, the file was marked for recovery. Could someone help me?
I got these errors when recovering the datafile of the RBS tablespace:
ORA-00283: recovery session canceled due to errors
ORA-01115: IO error reading block from file 3 (block # 24919)
ORA-01110: data file 3: '/OpenFlexIS/database/dbf/system1/dfRbs01.dbf'
ORA-27072: skgfdisp: I/O error
IBM AIX RISC System/6000 Error: 5: I/O error
But this is a production database. I have tried to open the database with the rollback segments line in the init.ora commented out, but these errors occur:
SMON: enabling cache recovery
Sun May 27 21:17:54 2007
ARC0: Beginning to archive log# 20 seq# 19144
ARC0: Completed archiving log# 20 seq# 19144
Sun May 27 21:17:54 2007
SMON: enabling tx recovery
SMON: about to recover undo segment 1
SMON: mark undo segment 1 as needs recovery
SMON: about to recover undo segment 2
SMON: mark undo segment 2 as needs recovery
SMON: about to recover undo segment 3
SMON: mark undo segment 3 as needs recovery
SMON: about to recover undo segment 4
SMON: mark undo segment 4 as needs recovery
SMON: about to recover undo segment 5
SMON: mark undo segment 5 as needs recovery
SMON: about to recover undo segment 6
SMON: mark undo segment 6 as needs recovery
SMON: about to recover undo segment 7
SMON: mark undo segment 7 as needs recovery
SMON: about to recover undo segment 8
SMON: mark undo segment 8 as needs recovery
SMON: about to recover undo segment 9
SMON: mark undo segment 9 as needs recovery
SMON: about to recover undo segment 10
SMON: mark undo segment 10 as needs recovery
Sun May 27 21:17:56 2007
Errors in file /oradump/OPENFLEX/udump/ora_39216_openflex.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/OpenFlexIS/database/dbf/system1/dfRbs01.dbf'
Sun May 27 21:17:56 2007
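For context, the standard media-recovery sequence being attempted here would look something like the sketch below. It is not a fix by itself in this case: recovery fails with ORA-01115/ORA-27072 because block 24919 of file 3 is unreadable at the OS level, so the datafile would first have to be restored from a backup onto a healthy disk.

```sql
-- Sketch only; file number 3 is taken from the ORA-01110 message above.
STARTUP MOUNT;
RECOVER DATAFILE 3;    -- applies archived/online redo to the restored copy
ALTER DATABASE OPEN;
```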
Similar Messages
-
ORA-01545: rollback segment 'R01' specified not available, after disk crash
After a disk problem, when I try to open the database I receive error ORA-01545. What can I do to resolve this? It seems that the rollback segment is marked for recovery. I can only mount the database.
This is the Forms forum. If you ask your question in the database forum you may get an answer much quicker.
-
ORA-01534 rollback segment 'R1' does not exist
Greetings
HELP!! We are using Oracle 8.0.5 ...
The database will not open because it says the system rollback segment does not exist.
The database mounts but won't open. From the file system checks, the dbf file that contains the system rollback segment appears to be fine.
Is there any way to recreate the system rollback segment, or any way to correct whatever is causing it to think it doesn't exist?
Or, if it is just hosed, is there any way to retrieve the data in the data dbf files through another Oracle installation or another Oracle database?
Thanks for any input!
Check your init<SID>.ora file. Check the parameter rollback_segments. If its value contains R01, delete R01. After that, save the file and start up again.
Robert Xu
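The edit Robert describes would look like this in init<SID>.ora (the segment names here are illustrative, not from the thread):

```
# before: the instance tries to acquire R01 at open and fails
rollback_segments = (R01, R02, R03)
# after: R01 removed from the list
rollback_segments = (R02, R03)
```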
-
ORA-27101: shared memory realm does not exist Linux Error: 2: No such file
Hello, I have an Oracle Database 10g Release 1 database installed on Red Hat Linux AS 4.
I'm having a problem connecting to my database; whenever I try to connect I get the errors:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux Error: 2: No such file or directory
I checked my startup.log and found the following after issuing dbstart:
ORACLE instance started.
Total System Global Area 130023424 bytes
Fixed Size 1218100 bytes
Variable Size 109054412 bytes
Database Buffers 16777216 bytes
Redo Buffers 2973696 bytes
Database mounted.
ORA-01092: ORACLE instance terminated
On dbshut I get the same error messages in shutdown.log:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux Error: 2: No such file or directory
My alert_ASYDB.log file has the following:
ORA-01534: rollback segment 'R01' doesn't exist
Tue Feb 27 23:14:19 2007
Error 1534 happened during db open, shutting down database
USER: terminating instance due to error 1534
Instance terminated by USER, pid = 3272
ORA-1092 signalled during: ALTER DATABASE OPEN...
In the database creation log, I noticed that this segment could not be created.
I can mount the database but cannot open it.
Is there any way to recreate these segments, or any other solution for that?
I have tried commenting out the line in initASYDB.ora, but it did not work.
Regards

Hi,
>> do you recommend using auto undo_management instead of RBS?
Yes. To simplify management of rollback segments, the Oracle9i database introduced Automatic Undo Management (AUM), where the database automatically manages the allocation of undo (rollback) space among the various active sessions. In a database using AUM, all transactions share a single undo tablespace. Any executing transaction can consume free space in this tablespace, and undo space is dynamically transferred from committed transactions to executing transactions in the event of space scarcity in the undo tablespace. The AUM feature also provides a way for administrators to exert control over undo retention: you can specify the amount of undo to be retained in terms of wall-clock time (a number of seconds). With retention control, you can configure your systems to allow long-running queries to execute successfully without encountering ORA-1555 (snapshot too old) errors.
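As a rough sketch of what switching to AUM looks like on 9i and later (the tablespace name, datafile path, and sizes below are illustrative, not from the thread):

```sql
-- Create an undo tablespace and tell the instance to manage undo automatically.
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/undotbs01.dbf' SIZE 500M;
-- UNDO_MANAGEMENT is static, so set it in the spfile and restart the instance:
ALTER SYSTEM SET undo_management = AUTO     SCOPE = SPFILE;
ALTER SYSTEM SET undo_tablespace = UNDOTBS1 SCOPE = SPFILE;
ALTER SYSTEM SET undo_retention  = 900      SCOPE = SPFILE;  -- seconds of undo to retain
```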
For more information, you can take a look on these links below:
http://www.oracle.com/technology/obe/obe10gdb/manage/undoadv/undoadv.htm
http://www.oracle-base.com/articles/9i/AutomaticUndoManagement.php
Cheers -
Rollback segment error ORA-1628
Hello,
We are currently working on an Oracle 8i database. The database contains around 10 GB of spatial data. We have a tablespace RBS containing 4 rollback segments. The issue is that whenever we try to insert spatial data we get this error message (in the alert file):
ORA-1628: max # extents 121 reached for rollback segment R01
Failure to extend rollback segment 2 because of 1628 condition
FULL status of rollback segment 2 cleared.
I extended the RBS tablespace by adding another datafile, but it still doesn't help.
I also found that the RBS tablespace is marked as PERMANENT. Does that mean the rollback data does not get flushed out periodically? Should this tablespace be temporary? How can I counter this problem?
Regards
Sam

Hi,
01628, 00000, "max # extents (%s) reached for rollback segment %s"
// *Cause: An attempt was made to extend a rollback segment that was
// already at the MAXEXTENTS value.
// *Action: If the value of the MAXEXTENTS storage parameter is less than
// the maximum allowed by the system, raise this value.
=> Alter your RBS and allow more than 121 extents to be created in this RBS.
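Following that action, the fix would be along these lines (the value 500 is just an example ceiling, not a recommendation from the thread):

```sql
-- Raise the extent ceiling on the segment named in the ORA-1628 message.
ALTER ROLLBACK SEGMENT r01 STORAGE (MAXEXTENTS 500);
```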
>> Also I found that RBS tablespace is marked as PERMANENT.
Yes, it's normal.
>> ..it means the rollback data does not get flushed out periodically?
No.
>> Also, should this tablespace be temporary?
No, it can't be.
>> How can I counter this problem?
Free advice: RTFM about Rollback Segments! Start here (8i doc) or read the Concepts book.
Regards,
Yoann. -
Why it is good to set a ROLLBACK SEGMENT's MINEXTENTS to 20 or more
Product: ORACLE SERVER
Date written: 2003-06-19
Why it is good to set a ROLLBACK SEGMENT's MINEXTENTS to 20 or more
=========================================================
PURPOSE
This document introduces the following topics.
It covers how to lay out a rollback segment tablespace to meet the requirements of a database application.
Creating, Optimizing, and Understanding Rollback Segments
- Rollback segment structure and how records are written
- Oracle's internal mechanism for assigning rollback segments to transactions
- Rollback segment size and count
- A test for deciding rollback segment size and count
- Rollback segment extent size and count
- Why a MINEXTENTS of 20 or more is good
- The rollback segment OPTIMAL storage parameter and SHRINK
Explanation
Rollback segment structure and how records are written
A rollback segment consists of several contiguous blocks called extents.
A rollback segment writes its extents in an ordered, circular fashion: when the current extent becomes full, it moves on to the next extent.
A transaction writes a record at the current location in the rollback segment, then advances the current pointer by the size of the record.
The position in the rollback segment where records are currently being written is called the "Head".
The "Tail" is the starting position of the oldest active transaction record in the rollback segment.
Oracle's internal mechanism for assigning rollback segments to transactions
When a new transaction requests a rollback segment, Oracle checks the number of active transactions using each rollback segment and assigns the one with the fewest active transactions.
Rollback segments must be large enough to handle the transaction load, and there must be an adequate number of them so that as many as are needed are available.
1. A transaction can use only one rollback segment.
2. Multiple transactions can write to the same extent.
3. The Head never moves into an extent that is currently in use by the Tail.
4. The extents of a rollback segment form a ring; when looking for the next extent they are never skipped and never used out of order.
5. If the Head cannot find a usable next extent, a new extent is allocated and added into the ring.
Given these principles, transaction duration, not just transaction size, is an important consideration.
Rollback segment size and count
Whether a rollback segment is large enough depends directly on transaction activity. Size rollback segments based on the transaction activity that occurs most of the time; if the problem is an occasional, unusually large transaction, handle it with a dedicated rollback segment.
While transactions run, the Head must not wrap around so quickly that it catches the Tail, and when a long-running query runs against frequently changing data, the rollback segment must not wrap around, so that read consistency can be maintained.
The reason to choose an adequate number of rollback segments is to prevent contention between processes. Contention can be checked through the V$WAITSTAT, V$ROLLSTAT, and V$ROLLNAME views with the following query:
sqlplus system/manager
select rn.name, (rs.waits/rs.gets) rbs_header_wait_ratio
from v$rollstat rs, v$rollname rn
where rs.usn = rn.usn
order by 1;
If the rbs_header_wait_ratio returned by the query above is greater than 0.01, add more rollback segments.
A test for deciding rollback segment size and count
1. Create the rollback segment tablespace.
2. Decide how many rollback segments to create for the test.
3. Create the rollback segments with equal-sized extents; choose the extent size so that each segment has about 10-30 extents at maximum growth.
4. Use a MINEXTENTS of 2 for the test rollback segments.
5. Keep only the test rollback segments and the SYSTEM rollback segment online.
6. Run transactions, loading the application if necessary.
7. Check for rollback segment contention.
8. Monitor how large each rollback segment grows.
Rollback segment extent size and count
The test tells you the maximum size to which a rollback segment grows; this figure is called the "minimum coverage size". If contention occurs, increase the number of rollback segments and repeat the test. If the number of extents needs to drop below 10 or rise above 30, repeat the test while increasing or decreasing the extent size.
When choosing the extent size, creating all extents the same size is recommended.
Make the size of the rollback tablespace a multiple of the extent size.
For best performance, a rollback segment's MINEXTENTS should be 20 or more.
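A creation statement following these guidelines might look like this (the segment and tablespace names and the sizes are illustrative, not from the note):

```sql
-- 20 equally sized 10M extents: adding one extent later grows the
-- segment by only 5%, per the reasoning in the next section.
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 MAXEXTENTS 121 OPTIMAL 200M);
```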
Why is a MINEXTENTS of 20 or more good?
Rollback segment space is allocated dynamically and, when it is no longer needed (and the OPTIMAL parameter is set), extents whose transactions have all committed are released (deallocated) down to the optimal size.
The fewer extents a rollback segment has, the larger the chunks of space that are allocated and released, compared with a segment that has many extents.
Consider the following example.
Suppose a roughly 200 MB rollback segment is made up of two 100 MB extents. When this segment needs additional space, then, since all rollback segment extents must be the same size, another 100 MB extent must be allocated.
The result is a 50% size increase over the segment's previous size, which is probably far more space than was actually needed.
Conversely, consider a 200 MB rollback segment made up of twenty 10 MB extents.
When it needs additional space, only a single 10 MB extent has to be added.
If a rollback segment is made up of 20 or more extents, then whenever one more extent is added, the total size of the segment grows by no more than 5%.
In other words, space allocation and release can happen more flexibly and easily.
To summarize, with 20 or more extents per rollback segment, allocating and releasing space becomes that much smoother.
In fact, many tests have shown that processing is much faster when the number of extents is 20 or more.
One thing is certain: allocating and releasing space is not a cheap operation.
Performance actually degrades while extents are being allocated and deallocated.
Even if the cost of a single extent is not a big problem, rollback segments allocate and release space constantly, so the conclusion is that small extents are much more efficient in terms of cost.
The rollback segment OPTIMAL storage parameter and SHRINK
OPTIMAL is a rollback segment storage parameter used to keep roughly the optimal size's worth of extents in the rollback segment at deallocation time.
It is used with a command such as:
alter rollback segment r01 storage (optimal 1m);
The optimal size must be specified inside the STORAGE clause.
Once the segment exceeds the optimal size, extents whose transactions have all committed are released, keeping only the optimal size.
In other words, the option keeps the rollback segment at about the size specified by OPTIMAL: the segment grows to some size, and when the next transaction takes that rollback segment, it is resized back to the optimal size.
When the segment's most recently used extent fills up and another extent is required, the optimal size is compared with the segment's size; if the segment is larger, tail extents that are not involved in any active transaction are deallocated.
The optimal size needs to be set because one rollback segment can occupy so much space that other rollback segments run short of free space for new extents; OPTIMAL exists to overcome this.
That is, specifying the OPTIMAL parameter is efficient from a space-availability standpoint.
A shrink is performed with the following command; if no size is specified, the segment shrinks to the optimal size:
alter rollback segment [rbs_name] shrink to [size];
After a shrink command, the segment sometimes does not shrink immediately: it does not shrink while transactions are active, and shrinks once the transactions end.
OPTIMAL takes effect roughly 5-10 minutes after the session exits.
What is an appropriate OPTIMAL size?
=> Around 20-30 extents is appropriate; the size varies with the nature of your batch jobs, and it does not matter at all if the sum of all OPTIMAL values exceeds the size of the datafile.
Setting OPTIMAL equal to INITIAL/NEXT is not good, because a shrink would then occur every time an extent is allocated.
Computing the average size of your rollback segments and using that as the OPTIMAL size is recommended.
Use the following query at peak time to obtain the average size of the rollback segments:
select initial_extent + next_extent * (extents-1) "Rollback_size", extents
from dba_segments
where segment_type ='ROLLBACK';
The average of these sizes (in bytes) can be used as the rollback segments' OPTIMAL size.
Note that shrinking too often, or setting OPTIMAL too small, raises the probability of the ORA-1555 (snapshot too old) error, so it may be better not to use OPTIMAL at all, or to set it to as large a value as possible.
The view in which a rollback segment's optimal size can be checked is the dynamic view V$ROLLSTAT, in its OPTSIZE column.
Example
none
Reference Documents
<Note:69464.1> -
Error while creating the rollback segment (Oracle 8i & OS Win NT)
Hi,
I am using Oracle 8i, and when I create a new rollback segment for my database I get the following error message:
ORA-01593: Rollback segment optimal size (30 blks) is smaller than the computed initial size (2560 blks)
CREATE ROLLBACK SEGMENT "RBS11" TABLESPACE "RBS1"
STORAGE ( INITIAL 120K NEXT
120K OPTIMAL
240K MINEXTENTS 2
MAXEXTENTS 100)
Note: db_block_size is 8K.
Tablespace RBS1 is a locally managed tablespace with a 50M datafile and a uniform extent size of 10M.
But the statement above succeeded when I used tablespace RBS (which is dictionary managed).
Please suggest the cause of this error and a solution.

You said 120K OPTIMAL, and INITIAL is 120K with MINEXTENTS of 2. The OPTIMAL size is then smaller than the initial allocation for the rollback segment: in the locally managed tablespace, each of the two initial extents is rounded up to a 10M uniform extent, so the computed initial size is 20M (2560 blocks of 8K), far above the 240K OPTIMAL.
ORA-01593: rollback segment optimal size (string blks) is smaller than the computed initial size (string blks)
Cause: Specified OPTIMAL size is smaller than the cumulative size of the initial extents during create rollback segment.
Action: Specify a larger OPTIMAL size. -
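Given that, one possible fix is to raise OPTIMAL to at least the computed initial allocation reported by the error (2560 blocks x 8K = 20M here, since the locally managed tablespace hands out 10M uniform extents and MINEXTENTS is 2). A sketch:

```sql
CREATE ROLLBACK SEGMENT "RBS11" TABLESPACE "RBS1"
  STORAGE (INITIAL 120K NEXT 120K
           OPTIMAL 20M          -- >= the 2 x 10M uniform extents actually allocated
           MINEXTENTS 2 MAXEXTENTS 100);
```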
LVM Volumes not available after update
Hi All!
I hadn't updated my system for about two months, and today I updated it. Now I have the problem that I cannot boot properly. I have my root partition in an LVM volume, and on boot I get the messages:
ERROR: device 'UUID=xxx' not found. Skipping fs
ERROR: Unable to find root device 'UUID=xxx'
After that I land in the recovery shell. After some research I found that "lvm lvdisplay" showed my volumes were not available, and I had to re-enable them with "lvm vgchange -a y".
Issuing any lvm command also produced the following warning:
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Anyway, after issuing the commands and exiting the recovery shell, the system booted again. However, I would prefer to be able to boot without manual intervention.
Thanks in advance!
Further information:
vgdisplay
--- Volume group ---
VG Name ArchLVM
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 232.69 GiB
PE Size 4.00 MiB
Total PE 59568
Alloc PE / Size 59568 / 232.69 GiB
Free PE / Size 0 / 0
VG UUID SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5
lvdisplay (LV Status was 'not available' right after booting)
--- Logical volume ---
LV Path /dev/ArchLVM/Swap
LV Name Swap
VG Name ArchLVM
LV UUID XRYBrz-LojR-k6SD-XIxV-wHnY-f3VG-giKL6V
LV Write Access read/write
LV Creation host, time archiso, 2014-05-16 14:43:06 +0200
LV Status available
# open 0
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
--- Logical volume ---
LV Path /dev/ArchLVM/Root
LV Name Root
VG Name ArchLVM
LV UUID lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv
LV Write Access read/write
LV Creation host, time archiso, 2014-05-16 14:43:27 +0200
LV Status available
# open 1
LV Size 224.69 GiB
Current LE 57520
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1
/etc/fstab
# /etc/fstab: static file system information
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/mapper/ArchLVM-Root
UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 / ext4 rw,relatime,data=ordered 0 1
#/dev/mapper/ArchLVM-Root / ext4 rw,relatime,data=ordered 0 1
# /dev/sda1
UUID=72691888-a781-4cdd-a98e-2613d87925d0 /boot ext2 rw,relatime 0 2
/etc/mkinitcpio.conf
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES="piix ide_disk reiserfs"
MODULES=""
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=""
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
FILES=""
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS="base"
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS="base udev autodetect block filesystems"
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS="base udev block filesystems"
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS="base udev block mdadm encrypt filesystems"
## This setup loads an lvm2 volume group on a usb device.
# HOOKS="base udev block lvm2 filesystems"
## NOTE: If you have /usr on a separate partition, you MUST include the
# usr, fsck and shutdown hooks.
HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
# COMPRESSION
# Use this to compress the initramfs image. By default, gzip compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=""
/boot/grub/grub.cfg
# DO NOT EDIT THIS FILE
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
### BEGIN /etc/grub.d/00_header ###
insmod part_gpt
insmod part_msdos
if [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="0"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
insmod part_msdos
insmod lvm
insmod ext2
set root='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv' 2db82d1a-47a4-4e30-a819-143e8fb75199
else
search --no-floppy --fs-uuid --set=root 2db82d1a-47a4-4e30-a819-143e8fb75199
fi
font="/usr/share/grub/unicode.pf2"
fi
if loadfont $font ; then
set gfxmode=auto
load_video
insmod gfxterm
fi
terminal_input console
terminal_output gfxterm
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-2db82d1a-47a4-4e30-a819-143e8fb75199' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
else
search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
fi
echo 'Loading Linux linux ...'
linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-linux.img
submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
else
search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
fi
echo 'Loading Linux linux ...'
linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-linux.img
menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-2db82d1a-47a4-4e30-a819-143e8fb75199' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
else
search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
fi
echo 'Loading Linux linux ...'
linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-linux-fallback.img
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
### BEGIN /etc/grub.d/60_memtest86+ ###
### END /etc/grub.d/60_memtest86+ ###
Last edited by Kirodema (2014-07-16 07:31:34)

use_lvmetad = 0
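A minimal sketch of that suggested change, assuming the stock `use_lvmetad = 1` line in /etc/lvm/lvm.conf. The demo below edits a temporary copy so it is safe to run anywhere; on the real system you would edit the file in place and then rebuild the initramfs so the lvm2 hook picks up the change.

```shell
# Work on a copy so the sketch is safe to run anywhere.
conf=$(mktemp)
printf 'global {\n    use_lvmetad = 1\n}\n' > "$conf"   # stand-in for /etc/lvm/lvm.conf

# Flip the setting off, matching the warning's advice.
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"
grep 'use_lvmetad' "$conf"

# On the real system (as root):
#   sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf
#   mkinitcpio -p linux
```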
lvm2-lvmetad is not enabled or running on my system. Shall I activate it?
# This is an example configuration file for the LVM2 system.
# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
# Refer to 'man lvm.conf' for further information including the file layout.
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.
# N.B. Take care that each setting only appears once if uncommenting
# example settings in this file.
# This section allows you to set the way the configuration settings are handled.
config {
# If enabled, any LVM2 configuration mismatch is reported.
# This implies checking that the configuration key is understood
# by LVM2 and that the value of the key is of a proper type.
# If disabled, any configuration mismatch is ignored and default
# value is used instead without any warning (a message about the
# configuration key not being found is issued in verbose mode only).
checks = 1
# If enabled, any configuration mismatch aborts the LVM2 process.
abort_on_errors = 0
# Directory where LVM looks for configuration profiles.
profile_dir = "/etc/lvm/profile"
# This section allows you to configure which block devices should
# be used by the LVM system.
devices {
# Where do you want your volume groups to appear ?
dir = "/dev"
# An array of directories that contain the device nodes you wish
# to use with LVM2.
scan = [ "/dev" ]
# If set, the cache of block device nodes with all associated symlinks
# will be constructed out of the existing udev database content.
# This avoids using and opening any inapplicable non-block devices or
# subdirectories found in the device directory. This setting is applied
# to udev-managed device directory only, other directories will be scanned
# fully. LVM2 needs to be compiled with udev support for this setting to
# take effect. N.B. Any device node or symlink not managed by udev in
# udev directory will be ignored with this setting on.
obtain_device_list_from_udev = 1
# If several entries in the scanned directories correspond to the
# same block device and the tools need to display a name for device,
# all the pathnames are matched against each item in the following
# list of regular expressions in turn and the first match is used.
preferred_names = [ ]
# Try to avoid using undescriptive /dev/dm-N names, if present.
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
# A filter that tells LVM2 to only use a restricted set of devices.
# The filter consists of an array of regular expressions. These
# expressions can be delimited by a character of your choice, and
# prefixed with either an 'a' (for accept) or 'r' (for reject).
# The first expression found to match a device name determines if
# the device will be accepted or rejected (ignored). Devices that
# don't match any patterns are accepted.
# Be careful if there there are symbolic links or multiple filesystem
# entries for the same device as each name is checked separately against
# the list of patterns. The effect is that if the first pattern in the
# list to match a name is an 'a' pattern for any of the names, the device
# is accepted; otherwise if the first pattern in the list to match a name
# is an 'r' pattern for any of the names it is rejected; otherwise it is
# accepted.
# Don't have more than one filter line active at once: only one gets used.
# Run vgscan after you change this parameter to ensure that
# the cache file gets regenerated (see below).
# If it doesn't do what you expect, check the output of 'vgscan -vvvv'.
# If lvmetad is used, then see "A note about device filtering while
# lvmetad is used" comment that is attached to global/use_lvmetad setting.
# By default we accept every block device:
filter = [ "a/.*/" ]
# Exclude the cdrom drive
# filter = [ "r|/dev/cdrom|" ]
# When testing I like to work with just loopback devices:
# filter = [ "a/loop/", "r/.*/" ]
# Or maybe all loops and ide drives except hdc:
# filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# Use anchors if you want to be really specific
# filter = [ "a|^/dev/hda8$|", "r/.*/" ]
# Since "filter" is often overridden from command line, it is not suitable
# for system-wide device filtering (udev rules, lvmetad). To hide devices
# from LVM-specific udev processing and/or from lvmetad, you need to set
# global_filter. The syntax is the same as for normal "filter"
# above. Devices that fail the global_filter are not even opened by LVM.
# global_filter = []
# The results of the filtering are cached on disk to avoid
# rescanning dud devices (which can take a very long time).
# By default this cache is stored in the /etc/lvm/cache directory
# in a file called '.cache'.
# It is safe to delete the contents: the tools regenerate it.
# (The old setting 'cache' is still respected if neither of
# these new ones is present.)
# N.B. If obtain_device_list_from_udev is set to 1 the list of
# devices is instead obtained from udev and any existing .cache
# file is removed.
cache_dir = "/etc/lvm/cache"
cache_file_prefix = ""
# You can turn off writing this cache file by setting this to 0.
write_cache_state = 1
# Advanced settings.
# List of pairs of additional acceptable block device types found
# in /proc/devices with maximum (non-zero) number of partitions.
# types = [ "fd", 16 ]
# If sysfs is mounted (2.6 kernels) restrict device scanning to
# the block devices it believes are valid.
# 1 enables; 0 disables.
sysfs_scan = 1
# By default, LVM2 will ignore devices used as component paths
# of device-mapper multipath devices.
# 1 enables; 0 disables.
multipath_component_detection = 1
# By default, LVM2 will ignore devices used as components of
# software RAID (md) devices by looking for md superblocks.
# 1 enables; 0 disables.
md_component_detection = 1
# By default, if a PV is placed directly upon an md device, LVM2
# will align its data blocks with the md device's stripe-width.
# 1 enables; 0 disables.
md_chunk_alignment = 1
# Default alignment of the start of a data area in MB. If set to 0,
# a value of 64KB will be used. Set to 1 for 1MiB, 2 for 2MiB, etc.
# default_data_alignment = 1
# By default, the start of a PV's data area will be a multiple of
# the 'minimum_io_size' or 'optimal_io_size' exposed in sysfs.
# - minimum_io_size - the smallest request the device can perform
# w/o incurring a read-modify-write penalty (e.g. MD's chunk size)
# - optimal_io_size - the device's preferred unit of receiving I/O
# (e.g. MD's stripe width)
# minimum_io_size is used if optimal_io_size is undefined (0).
# If md_chunk_alignment is enabled, that detects the optimal_io_size.
# This setting takes precedence over md_chunk_alignment.
# 1 enables; 0 disables.
data_alignment_detection = 1
# Alignment (in KB) of start of data area when creating a new PV.
# md_chunk_alignment and data_alignment_detection are disabled if set.
# Set to 0 for the default alignment (see: data_alignment_default)
# or page size, if larger.
data_alignment = 0
# By default, the start of the PV's aligned data area will be shifted by
# the 'alignment_offset' exposed in sysfs. This offset is often 0 but
# may be non-zero; e.g.: certain 4KB sector drives that compensate for
# windows partitioning will have an alignment_offset of 3584 bytes
# (sector 7 is the lowest aligned logical block, the 4KB sectors start
# at LBA -1, and consequently sector 63 is aligned on a 4KB boundary).
# But note that pvcreate --dataalignmentoffset will skip this detection.
# 1 enables; 0 disables.
data_alignment_offset_detection = 1
# If, while scanning the system for PVs, LVM2 encounters a device-mapper
# device that has its I/O suspended, it waits for it to become accessible.
# Set this to 1 to skip such devices. This should only be needed
# in recovery situations.
ignore_suspended_devices = 0
# ignore_lvm_mirrors: Introduced in version 2.02.104
# This setting determines whether logical volumes of "mirror" segment
# type are scanned for LVM labels. This affects the ability of
# mirrors to be used as physical volumes. If 'ignore_lvm_mirrors'
# is set to '1', it becomes impossible to create volume groups on top
# of mirror logical volumes - i.e. to stack volume groups on mirrors.
# Allowing mirror logical volumes to be scanned (setting the value to '0')
# can potentially cause LVM processes and I/O to the mirror to become
# blocked. This is due to the way that the "mirror" segment type handles
# failures. In order for the hang to manifest itself, an LVM command must
# be run just after a failure and before the automatic LVM repair process
# takes place OR there must be failures in multiple mirrors in the same
# volume group at the same time with write failures occurring moments
# before a scan of the mirror's labels.
# Note that these scanning limitations do not apply to the LVM RAID
# types, like "raid1". The RAID segment types handle failures in a
# different way and are not subject to possible process or I/O blocking.
# It is encouraged that users set 'ignore_lvm_mirrors' to 1 if they
# are using the "mirror" segment type. Users that require volume group
# stacking on mirrored logical volumes should consider using the "raid1"
# segment type. The "raid1" segment type is not available for
# active/active clustered volume groups.
# Set to 1 to disallow stacking and thereby avoid a possible deadlock.
ignore_lvm_mirrors = 1
# During each LVM operation errors received from each device are counted.
# If the counter of a particular device exceeds the limit set here, no
# further I/O is sent to that device for the remainder of the respective
# operation. Setting the parameter to 0 disables the counters altogether.
disable_after_error_count = 0
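# For example, to stop sending I/O to a device after it has returned
# three errors within a single operation (the value 3 is illustrative
# only, not a recommendation):
# disable_after_error_count = 3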
# Allow use of pvcreate --uuid without requiring --restorefile.
require_restorefile_with_uuid = 1
# Minimum size (in KB) of block devices which can be used as PVs.
# In a clustered environment all nodes must use the same value.
# Any value smaller than 512KB is ignored.
# Ignore devices smaller than 2MB such as floppy drives.
pv_min_size = 2048
# The original built-in setting was 512 up to and including version 2.02.84.
# pv_min_size = 512
# Issue discards to a logical volume's underlying physical volume(s) when
# the logical volume is no longer using the physical volumes' space (e.g.
# lvremove, lvreduce, etc). Discards inform the storage that a region is
# no longer in use. Storage that supports discards advertises the
# protocol-specific way discards should be issued by the kernel (TRIM, UNMAP, or
# WRITE SAME with UNMAP bit set). Not all storage will support or benefit
# from discards but SSDs and thinly provisioned LUNs generally do. If set
# to 1, discards will only be issued if both the storage and kernel provide
# support.
# 1 enables; 0 disables.
issue_discards = 0
}
# This section allows you to configure the way in which LVM selects
# free space for its Logical Volumes.
allocation {
# When searching for free space to extend an LV, the "cling"
# allocation policy will choose space on the same PVs as the last
# segment of the existing LV. If there is insufficient space and a
# list of tags is defined here, it will check whether any of them are
# attached to the PVs concerned and then seek to match those PV tags
# between existing extents and new extents.
# Use the special tag "@*" as a wildcard to match any PV tag.
# Example: LVs are mirrored between two sites within a single VG.
# PVs are tagged with either @site1 or @site2 to indicate where
# they are situated.
# cling_tag_list = [ "@site1", "@site2" ]
# cling_tag_list = [ "@*" ]
# Changes made in version 2.02.85 extended the reach of the 'cling'
# policies to detect more situations where data can be grouped
# onto the same disks. Set this to 0 to revert to the previous
# algorithm.
maximise_cling = 1
# Whether to use blkid library instead of native LVM2 code to detect
# any existing signatures while creating new Physical Volumes and
# Logical Volumes. LVM2 needs to be compiled with blkid wiping support
# for this setting to take effect.
# LVM2 native detection code is currently able to recognize these signatures:
# - MD device signature
# - swap signature
# - LUKS signature
# To see the list of signatures recognized by blkid, check the output
# of the 'blkid -k' command. blkid can recognize more signatures than the
# LVM2 native detection code, but because more signatures must be checked,
# the signature scan can take more time to complete.
use_blkid_wiping = 1
# Set to 1 to wipe any signatures found on newly-created Logical Volumes
# automatically in addition to zeroing of the first KB on the LV
# (controlled by the -Z/--zero y option).
# The command line option -W/--wipesignatures takes precedence over this
# setting.
# The default is to wipe signatures when zeroing.
wipe_signatures_when_zeroing_new_lvs = 1
# Set to 1 to guarantee that mirror logs will always be placed on
# different PVs from the mirror images. This was the default
# until version 2.02.85.
mirror_logs_require_separate_pvs = 0
# Set to 1 to guarantee that cache_pool metadata will always be
# placed on different PVs from the cache_pool data.
cache_pool_metadata_require_separate_pvs = 0
# Specify the minimal chunk size (in kiB) for cache pool volumes.
# Using a chunk_size that is too large can result in wasteful use of
# the cache, where small reads and writes can cause large sections of
# an LV to be mapped into the cache. However, choosing a chunk_size
# that is too small can result in more overhead trying to manage the
# numerous chunks that become mapped into the cache. The former is
# more of a problem than the latter in most cases, so we default to
# a value that is on the smaller end of the spectrum. Supported values
# range from 32 KiB to 1048576 KiB, in multiples of 32.
# cache_pool_chunk_size = 64
# Set to 1 to guarantee that thin pool metadata will always
# be placed on different PVs from the pool data.
thin_pool_metadata_require_separate_pvs = 0
# Specify chunk size calculation policy for thin pool volumes.
# Possible options are:
# "generic" - if thin_pool_chunk_size is defined, use it.
# Otherwise, calculate the chunk size based on
# estimation and device hints exposed in sysfs:
# the minimum_io_size. The chunk size is always
# at least 64KiB.
# "performance" - if thin_pool_chunk_size is defined, use it.
# Otherwise, calculate the chunk size for
# performance based on device hints exposed in
# sysfs: the optimal_io_size. The chunk size is
# always at least 512KiB.
# thin_pool_chunk_size_policy = "generic"
# Specify the minimal chunk size (in KB) for thin pool volumes.
# Use of the larger chunk size may improve performance for plain
# thin volumes, however using them for snapshot volumes is less efficient,
# as it consumes more space and takes extra time for copying.
# When unset, lvm tries to estimate the chunk size, starting from 64KB.
# Supported values are in range from 64 to 1048576.
# thin_pool_chunk_size = 64
# Specify discards behaviour of the thin pool volume.
# Select one of "ignore", "nopassdown", "passdown"
# thin_pool_discards = "passdown"
# Set to 0 to disable zeroing of thin pool data chunks before their
# first use.
# N.B. zeroing with a larger thin pool chunk size degrades performance.
# thin_pool_zero = 1
}
# This section allows you to configure the nature of the
# information that LVM2 reports.
log {
# Controls the messages sent to stdout or stderr.
# There are three levels of verbosity, 3 being the most verbose.
verbose = 0
# Set to 1 to suppress all non-essential messages from stdout.
# This has the same effect as -qq.
# When this is set, the following commands still produce output:
# dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck, pvdisplay,
# pvs, version, vgcfgrestore -l, vgdisplay, vgs.
# Non-essential messages are shifted from log level 4 to log level 5
# for syslog and lvm2_log_fn purposes.
# Any 'yes' or 'no' questions not overridden by other arguments
# are suppressed and default to 'no'.
silent = 0
# Should we send log messages through syslog?
# 1 is yes; 0 is no.
syslog = 1
# Should we log error and debug messages to a file?
# By default there is no log file.
#file = "/var/log/lvm2.log"
# Should we overwrite the log file each time the program is run?
# By default we append.
overwrite = 0
# What level of log messages should we send to the log file and/or syslog?
# There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
# 7 is the most verbose (LOG_DEBUG).
level = 0
# Format of output messages
# Whether or not (1 or 0) to indent messages according to their severity
indent = 1
# Whether or not (1 or 0) to display the command name on each line output
command_names = 0
# A prefix to use before the message text (but after the command name,
# if selected). Default is two spaces, so you can see/grep the severity
# of each message.
prefix = " "
# To make the messages look similar to the original LVM tools use:
# indent = 0
# command_names = 1
# prefix = " -- "
# Set this if you want log messages during activation.
# Don't use this in low memory situations (can deadlock).
# activation = 0
# Some debugging messages are assigned to a class and only appear
# in debug output if the class is listed here.
# Classes currently available:
# memory, devices, activation, allocation, lvmetad, metadata, cache,
# locking
# Use "all" to see everything.
debug_classes = [ "memory", "devices", "activation", "allocation",
"lvmetad", "metadata", "cache", "locking" ]
}
# Configuration of metadata backups and archiving. In LVM2 when we
# talk about a 'backup' we mean making a copy of the metadata for the
# *current* system. The 'archive' contains old metadata configurations.
# Backups are stored in a human readable text format.
backup {
# Should we maintain a backup of the current metadata configuration ?
# Use 1 for Yes; 0 for No.
# Think very hard before turning this off!
backup = 1
# Where shall we keep it ?
# Remember to back up this directory regularly!
backup_dir = "/etc/lvm/backup"
# Should we maintain an archive of old metadata configurations.
# Use 1 for Yes; 0 for No.
# On by default. Think very hard before turning this off.
archive = 1
# Where should archived files go ?
# Remember to back up this directory regularly!
archive_dir = "/etc/lvm/archive"
# What is the minimum number of archive files you wish to keep ?
retain_min = 10
# What is the minimum time you wish to keep an archive file for ?
retain_days = 30
}
# Settings for running LVM2 in shell (readline) mode.
shell {
# Number of lines of history to store in ~/.lvm_history
history_size = 100
}
# Miscellaneous global LVM2 settings
global {
# The file creation mask for any files and directories created.
# Interpreted as octal if the first digit is zero.
umask = 077
# Allow other users to read the files
#umask = 022
# Enabling test mode means that no changes to the on disk metadata
# will be made. Equivalent to having the -t option on every
# command. Defaults to off.
test = 0
# Default value for --units argument
units = "h"
# Since version 2.02.54, the tools distinguish between powers of
# 1024 bytes (e.g. KiB, MiB, GiB) and powers of 1000 bytes (e.g.
# KB, MB, GB).
# If you have scripts that depend on the old behaviour, set this to 0
# temporarily until you update them.
si_unit_consistency = 1
# Whether or not to display unit suffix for sizes. This setting has
# no effect if the units are in human-readable form (global/units="h")
# in which case the suffix is always displayed.
suffix = 1
# Whether or not to communicate with the kernel device-mapper.
# Set to 0 if you want to use the tools to manipulate LVM metadata
# without activating any logical volumes.
# If the device-mapper kernel driver is not present in your kernel
# setting this to 0 should suppress the error messages.
activation = 1
# If we can't communicate with device-mapper, should we try running
# the LVM1 tools?
# This option only applies to 2.4 kernels and is provided to help you
# switch between device-mapper kernels and LVM1 kernels.
# The LVM1 tools need to be installed with .lvm1 suffices
# e.g. vgscan.lvm1 and they will stop working after you start using
# the new lvm2 on-disk metadata format.
# The default value is set when the tools are built.
# fallback_to_lvm1 = 0
# The default metadata format that commands should use - "lvm1" or "lvm2".
# The command line override is -M1 or -M2.
# Defaults to "lvm2".
# format = "lvm2"
# Location of proc filesystem
proc = "/proc"
# Type of locking to use. Defaults to local file-based locking (1).
# Turn locking off by setting to 0 (dangerous: risks metadata corruption
# if LVM2 commands get run concurrently).
# Type 2 uses the external shared library locking_library.
# Type 3 uses built-in clustered locking.
# Type 4 uses read-only locking which forbids any operations that might
# change metadata.
# N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
# supported in clustered environment. If use_lvmetad=1 and locking_type=3
# is set at the same time, LVM always issues a warning message about this
# and then it automatically disables lvmetad use.
locking_type = 1
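# For example (a sketch, not a recommendation for normal operation): to
# forbid any operation that might change metadata, such as when
# inspecting a possibly damaged system, read-only locking could be used:
# locking_type = 4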
# Set to 0 to fail when a lock request cannot be satisfied immediately.
wait_for_locks = 1
# If using external locking (type 2) and initialisation fails,
# with this set to 1 an attempt will be made to use the built-in
# clustered locking.
# If you are using a customised locking_library you should set this to 0.
fallback_to_clustered_locking = 1
# If an attempt to initialise type 2 or type 3 locking failed, perhaps
# because cluster components such as clvmd are not running, with this set
# to 1 an attempt will be made to use local file-based locking (type 1).
# If this succeeds, only commands against local volume groups will proceed.
# Volume Groups marked as clustered will be ignored.
fallback_to_local_locking = 1
# Local non-LV directory that holds file-based locks while commands are
# in progress. A directory like /tmp that may get wiped on reboot is OK.
locking_dir = "/run/lock/lvm"
# Whenever there are competing read-only and read-write access requests for
# a volume group's metadata, instead of always granting the read-only
# requests immediately, delay them to allow the read-write requests to be
# serviced. Without this setting, write access may be stalled by a high
# volume of read-only requests.
# NB. This option only affects locking_type = 1 viz. local file-based
# locking.
prioritise_write_locks = 1
# Other entries can go here to allow you to load shared libraries
# e.g. if support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# Full pathnames can be given.
# Search this directory first for shared libraries.
# library_dir = "/lib"
# The external locking library to load if locking_type is set to 2.
# locking_library = "liblvm2clusterlock.so"
# Treat any internal errors as fatal errors, aborting the process that
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Check whether CRC is matching when parsed VG is used multiple times.
# This is useful to catch unexpected internal cached volume group
# structure modification. Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# If set to 1, no operations that change on-disk metadata will be permitted.
# Additionally, read-only commands that encounter metadata in need of repair
# will still be allowed to proceed exactly as if the repair had been
# performed (except for the unchanged vg_seqno).
# Inappropriate use could mess up your system, so seek advice first!
metadata_read_only = 0
# 'mirror_segtype_default' defines which segtype will be used when the
# shorthand '-m' option is used for mirroring. The possible options are:
# "mirror" - The original RAID1 implementation provided by LVM2/DM. It is
# characterized by a flexible log solution (core, disk, mirrored)
# and by the necessity to block I/O while reconfiguring in the
# event of a failure.
# There is an inherent race in the dmeventd failure handling
# logic with snapshots of devices using this type of RAID1 that
# in the worst case could cause a deadlock.
# Ref: https://bugzilla.redhat.com/show_bug.cgi?id=817130#c10
# "raid1" - This implementation leverages MD's RAID1 personality through
# device-mapper. It is characterized by a lack of log options.
# (A log is always allocated for every device and they are placed
# on the same device as the image - no separate devices are
# required.) This mirror implementation does not require I/O
# to be blocked in the kernel in the event of a failure.
# This mirror implementation is not cluster-aware and cannot be
# used in a shared (active/active) fashion in a cluster.
# Specify the '--type <mirror|raid1>' option to override this default
# setting.
mirror_segtype_default = "raid1"
# 'raid10_segtype_default' determines the segment types used by default
# when the '--stripes/-i' and '--mirrors/-m' arguments are both specified
# during the creation of a logical volume.
# Possible settings include:
# "raid10" - This implementation leverages MD's RAID10 personality through
# device-mapper.
# "mirror" - LVM will layer the 'mirror' and 'stripe' segment types. It
# will do this by creating a mirror on top of striped sub-LVs;
# effectively creating a RAID 0+1 array. This is suboptimal
# in terms of providing redundancy and performance. Changing to
# this setting is not advised.
# Specify the '--type <raid10|mirror>' option to override this default
# setting.
raid10_segtype_default = "raid10"
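# For example, the default can be overridden per-command with '--type'
# (a sketch; the VG name 'vg0' and LV name 'lv0' are placeholders):
# lvcreate --type raid10 -i 2 -m 1 -L 1G -n lv0 vg0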
# The default format for displaying LV names in lvdisplay was changed
# in version 2.02.89 to show the LV name and path separately.
# Previously this was always shown as /dev/vgname/lvname even when that
# was never a valid path in the /dev filesystem.
# Set to 1 to reinstate the previous format.
# lvdisplay_shows_full_device_path = 0
# Whether to use (trust) a running instance of lvmetad. If this is set to
# 0, all commands fall back to the usual scanning mechanisms. When set to 1
# *and* when lvmetad is running (automatically instantiated by making use of
# systemd's socket-based service activation or run as an initscripts service
# or run manually), the volume group metadata and PV state flags are obtained
# from the lvmetad instance and no scanning is done by the individual
# commands. In a setup with lvmetad, lvmetad udev rules *must* be set up for
# LVM to work correctly. Without proper udev rules, all changes in block
# device configuration will be *ignored* until a manual 'pvscan --cache'
# is performed. These rules are installed by default.
# If lvmetad has been running while use_lvmetad was 0, it MUST be stopped
# before changing use_lvmetad to 1 and started again afterwards.
# If using lvmetad, the volume activation is also switched to automatic
# event-based mode. In this mode, the volumes are activated based on
# incoming udev events that automatically inform lvmetad about new PVs
# that appear in the system. Once the VG is complete (all the PVs are
# present), it is auto-activated. The activation/auto_activation_volume_list
# setting controls which volumes are auto-activated (all by default).
# A note about device filtering while lvmetad is used:
# When lvmetad is updated (either automatically based on udev events
# or directly by pvscan --cache <device> call), the devices/filter
# is ignored and all devices are scanned by default. The lvmetad always
# keeps unfiltered information which is then provided to LVM commands
# and then each LVM command does the filtering based on devices/filter
# setting itself.
# To prevent scanning devices completely, even when using lvmetad,
# the devices/global_filter must be used.
# N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
# supported in clustered environment. If use_lvmetad=1 and locking_type=3
# is set at the same time, LVM always issues a warning message about this
# and then it automatically disables lvmetad use.
use_lvmetad = 0
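# A minimal sketch of enabling lvmetad, assuming the lvmetad udev rules
# are installed and lvmetad was restarted after the change. Remember
# that devices/global_filter, not devices/filter, is what hides devices
# from lvmetad:
# use_lvmetad = 1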
# Full path of the utility called to check that a thin metadata device
# is in a state that allows it to be used.
# Each time a thin pool needs to be activated or after it is deactivated
# this utility is executed. The activation will only proceed if the utility
# has an exit status of 0.
# Set to "" to skip this check. (Not recommended.)
# The thin tools are available as part of the device-mapper-persistent-data
# package from https://github.com/jthornber/thin-provisioning-tools.
# thin_check_executable = "/usr/bin/thin_check"
# Array of string options passed with thin_check command. By default,
# option "-q" is for quiet output.
# With thin_check version 2.1 or newer you can add "--ignore-non-fatal-errors"
# to let it pass through ignorable errors and fix them later.
# thin_check_options = [ "-q" ]
# Full path of the utility called to repair a thin metadata device.
# Each time a thin pool needs repair this utility is executed.
# See thin_check_executable above for how to obtain the binaries.
# thin_repair_executable = "/usr/bin/thin_repair"
# Array of extra string options passed with thin_repair command.
# thin_repair_options = [ "" ]
# Full path of the utility called to dump thin metadata content.
# See thin_check_executable above for how to obtain the binaries.
# thin_dump_executable = "/usr/bin/thin_dump"
# If set, given features are not used by thin driver.
# This can be helpful not only for testing, but also, for example, to
# avoid using a problematic implementation of some thin feature.
# Features:
# block_size
# discards
# discards_non_power_2
# external_origin
# metadata_resize
# external_origin_extend
# thin_disabled_features = [ "discards", "block_size" ]
}
activation {
# Set to 1 to perform internal checks on the operations issued to
# libdevmapper. Useful for debugging problems with activation.
# Some of the checks may be expensive, so it's best to use this
# only when there seems to be a problem.
checks = 0
# Set to 0 to disable udev synchronisation (if compiled into the binaries).
# Processes will not wait for notification from udev.
# They will continue irrespective of any possible udev processing
# in the background. You should only use this if udev is not running
# or has rules that ignore the devices LVM2 creates.
# The command line argument --nodevsync takes precedence over this setting.
# If set to 1 when udev is not running, and there are LVM2 processes
# waiting for udev, run 'dmsetup udevcomplete_all' manually to wake them up.
udev_sync = 1
# Set to 0 to disable the udev rules installed by LVM2 (if built with
# --enable-udev_rules). LVM2 will then manage the /dev nodes and symlinks
# for active logical volumes directly itself.
# N.B. Manual intervention may be required if this setting is changed
# while any logical volumes are active.
udev_rules = 1
# Set to 1 for LVM2 to verify operations performed by udev. This turns on
# additional checks (and if necessary, repairs) on entries in the device
# directory after udev has completed processing its events.
# Useful for diagnosing problems with LVM2/udev interactions.
verify_udev_operations = 0
# If set to 1 and if deactivation of an LV fails, perhaps because
# a process run from a quick udev rule temporarily opened the device,
# retry the operation for a few seconds before failing.
retry_deactivation = 1
# How to fill in missing stripes if activating an incomplete volume.
# Using "error" will make inaccessible parts of the device return
# I/O errors on access. You can instead use a device path, in which
# case that device will be used in place of missing stripes.
# But note that using anything other than "error" with mirrored
# or snapshotted volumes is likely to result in data corruption.
missing_stripe_filler = "error"
# The linear target is an optimised version of the striped target
# that only handles a single stripe. Set this to 0 to disable this
# optimisation and always use the striped target.
use_linear_target = 1
# How much stack (in KB) to reserve for use while devices suspended
# Prior to version 2.02.89 this used to be set to 256KB
reserved_stack = 64
# How much memory (in KB) to reserve for use while devices suspended
reserved_memory = 8192
# Nice value used while devices suspended
process_priority = -18
# If volume_list is defined, each LV is only activated if there is a
# match against the list.
# "vgname" and "vgname/lvname" are matched exactly.
# "@tag" matches any tag set in the LV or VG.
# "@*" matches if any tag defined on the host is also set in the LV or VG
# If any host tags exist but volume_list is not defined, a default
# single-entry list containing "@*" is assumed.
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
# If auto_activation_volume_list is defined, each LV that is to be
# activated with the autoactivation option (--activate ay/-a ay) is
# first checked against the list. There are two scenarios in which
# the autoactivation option is used:
# - automatic activation of volumes based on incoming PVs. If all the
# PVs making up a VG are present in the system, the autoactivation
# is triggered. This requires lvmetad (global/use_lvmetad=1) and udev
# to be running. In this case, "pvscan --cache -aay" is called
# automatically without any user intervention while processing
# udev events. Please, make sure you define auto_activation_volume_list
# properly so only the volumes you want and expect are autoactivated.
# - direct activation on command line with the autoactivation option.
# In this case, the user calls "vgchange --activate ay/-a ay" or
# "lvchange --activate ay/-a ay" directly.
# By default, the auto_activation_volume_list is not defined and all
# volumes will be activated either automatically or by using --activate ay/-a ay.
# N.B. The "activation/volume_list" is still honoured in all cases so even
# if the VG/LV passes the auto_activation_volume_list, it still needs to
# pass the volume_list for it to be activated in the end.
# If auto_activation_volume_list is defined but empty, no volumes will be
# activated automatically and --activate ay/-a ay will do nothing.
# auto_activation_volume_list = []
# If auto_activation_volume_list is defined and it's not empty, only matching
# volumes will be activated either automatically or by using --activate ay/-a ay.
# "vgname" and "vgname/lvname" are matched exactly.
# "@tag" matches any tag set in the LV or VG.
# "@*" matches if any tag defined on the host is also set in the LV or VG
# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
# If read_only_volume_list is defined, each LV that is to be activated
# is checked against the list, and if it matches, it is activated
# in read-only mode. (This overrides '--permission rw' stored in the
# metadata.)
# "vgname" and "vgname/lvname" are matched exactly.
# "@tag" matches any tag set in the LV or VG.
# "@*" matches if any tag defined on the host is also set in the LV or VG
# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
# Each LV can have an 'activation skip' flag stored persistently against it.
# During activation, this flag is used to decide whether such an LV is skipped.
# The 'activation skip' flag can be set during LV creation and by default it
# is automatically set for thin snapshot LVs. The 'auto_set_activation_skip'
# enables or disables this automatic setting of the flag while LVs are created.
# auto_set_activation_skip = 1
# For RAID or 'mirror' segment types, 'raid_region_size' is the
# size (in KiB) of each:
# - synchronization operation when initializing
# - copy operation when performing a 'pvmove' (using 'mirror' segtype)
# This setting has replaced 'mirror_region_size' since version 2.02.99
raid_region_size = 512
# Setting to use when there is no readahead value stored in the metadata.
# "none" - Disable readahead.
# "auto" - Use default value chosen by kernel.
readahead = "auto"
# 'raid_fault_policy' defines how a device failure in a RAID logical
# volume is handled. This includes logical volumes that have the following
# segment types: raid1, raid4, raid5*, and raid6*.
# In the event of a failure, the following policies will determine what
# actions are performed during the automated response to failures (when
# dmeventd is monitoring the RAID logical volume) and when 'lvconvert' is
# called manually with the options '--repair' and '--use-policies'.
# "warn" - Use the system log to warn the user that a device in the RAID
# logical volume has failed. It is left to the user to run
# 'lvconvert --repair' manually to remove or replace the failed
# device. As long as the number of failed devices does not
# exceed the redundancy of the logical volume (1 device for
# raid4/5, 2 for raid6, etc) the logical volume will remain
# usable.
# "allocate" - Attempt to use any extra physical volumes in the volume
# group as spares and replace faulty devices.
raid_fault_policy = "warn"
# 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define
# how a device failure affecting a mirror (of "mirror" segment type) is
# handled. A mirror is composed of mirror images (copies) and a log.
# A disk log ensures that a mirror does not need to be re-synced
# (all copies made the same) every time a machine reboots or crashes.
# In the event of a failure, the specified policy will be used to determine
# what happens. This applies to automatic repairs (when the mirror is being
# monitored by dmeventd) and to manual lvconvert --repair when
# --use-policies is given.
# "remove" - Simply remove the faulty device and run without it. If
# the log device fails, the mirror would convert to using
# an in-memory log. This means the mirror will not
# remember its sync status across crashes/reboots and
# the entire mirror will be re-synced. If a
# mirror image fails, the mirror will convert to a
# non-mirrored device if there is only one remaining good
# copy.
# "allocate" - Remove the faulty device and try to allocate space on
# a new device to be a replacement for the failed device.
# Using this policy for the log is fast and maintains the
# ability to remember sync state through crashes/reboots.
# Using this policy for a mirror device is slow, as it
# requires the mirror to resynchronize the devices, but it
# will preserve the mirror characteristic of the device.
# This policy acts like "remove" if no suitable device and
# space can be allocated for the replacement.
# "allocate_anywhere" - Not yet implemented. Useful to place the log device
# temporarily on same physical volume as one of the mirror
# images. This policy is not recommended for mirror devices
# since it would break the redundant nature of the mirror. This
# policy acts like "remove" if no suitable device and space can
# be allocated for the replacement.
mirror_log_fault_policy = "allocate"
mirror_image_fault_policy = "remove"
# 'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent' define
# how to handle automatic snapshot extension. The former defines when the
# snapshot should be extended: when its space usage exceeds this many
# percent. The latter defines how much extra space should be allocated for
# the snapshot, in percent of its current size.
# For example, if you set snapshot_autoextend_threshold to 70 and
# snapshot_autoextend_percent to 20, whenever a snapshot exceeds 70% usage,
# it will be extended by another 20%. For a 1G snapshot, using up 700M will
# trigger a resize to 1.2G. When the usage exceeds 840M, the snapshot will
# be extended to 1.44G, and so on.
# Setting snapshot_autoextend_threshold to 100 disables automatic
# extensions. The minimum value is 50 (A setting below 50 will be treated
# as 50).
snapshot_autoextend_threshold = 100
snapshot_autoextend_percent = 20
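The autoextend arithmetic described above (70% threshold, 20% growth: 1G grows to 1.2G at 700M used, then to 1.44G at 840M) can be reproduced with a small sketch. This is an illustration of the documented behaviour, not lvm2/dmeventd source; sizes are in decimal megabytes as in the example.

```python
# Sketch of the autoextend arithmetic described above (illustration only):
# when usage reaches the threshold percentage, grow the snapshot by
# 'percent' of its current size. A threshold of 100 disables extension.

def autoextend(size_mb: float, used_mb: float,
               threshold: int = 70, percent: int = 20) -> float:
    """Return the snapshot size after one monitoring check."""
    if threshold >= 100:                      # 100 disables automatic extension
        return size_mb
    if used_mb * 100 >= size_mb * threshold:  # usage at or past the threshold
        return size_mb * (1 + percent / 100)
    return size_mb

size = autoextend(1000, 600)   # 60% used: below threshold, no resize
size = autoextend(1000, 700)   # 70% of 1G reached: grows to 1.2G
size = autoextend(size, 840)   # 70% of 1.2G reached: grows to 1.44G
print(size)
```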
# 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' define
# how to handle automatic pool extension. The former defines when the
# pool should be extended: when its space usage exceeds this many
# percent. The latter defines how much extra space should be allocated for
# the pool, in percent of its current size.
# For example, if you set thin_pool_autoextend_threshold to 70 and
# thin_pool_autoextend_percent to 20, whenever a pool exceeds 70% usage,
# it will be extended by another 20%. For a 1G pool, using up 700M will
# trigger a resize to 1.2G. When the usage exceeds 840M, the pool will
# be extended to 1.44G, and so on.
# Setting thin_pool_autoextend_threshold to 100 disables automatic
# extensions. The minimum value is 50 (A setting below 50 will be treated
# as 50).
thin_pool_autoextend_threshold = 100
thin_pool_autoextend_percent = 20
# While activating devices, I/O to devices being (re)configured is
# suspended, and as a precaution against deadlocks, LVM2 needs to pin
# any memory it is using so it is not paged out. Groups of pages that
# are known not to be accessed during activation need not be pinned
# into memory. Each string listed in this setting is compared against
# each line in /proc/self/maps, and the pages corresponding to any
# lines that match are not pinned. On some systems locale-archive was
# found to make up over 80% of the memory used by the process.
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
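The filtering idea above (each filter string is matched against lines of /proc/self/maps, and matching regions are not pinned) can be sketched like this. This is an illustration of the matching rule, not the LVM implementation; the maps lines are made-up samples.

```python
# Sketch of the mlock_filter matching rule described above (illustration,
# not LVM code): a mapping is skipped when any filter string appears as a
# substring of its /proc/self/maps line.

MLOCK_FILTER = ["locale/locale-archive", "gconv/gconv-modules.cache"]

def regions_to_pin(maps_lines, filters=MLOCK_FILTER):
    """Return the maps lines whose pages would still be pinned."""
    return [line for line in maps_lines
            if not any(f in line for f in filters)]

maps = [
    "7f00-7f10 r--p 00000000 08:01 123 /usr/lib/locale/locale-archive",
    "7f20-7f30 r-xp 00000000 08:01 456 /usr/lib64/libc.so.6",
]
print(regions_to_pin(maps))   # only the libc mapping remains pinned
```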
# Set to 1 to revert to the default behaviour prior to version 2.02.62
# which used mlockall() to pin the whole process's memory while activating
# devices.
use_mlockall = 0
# Monitoring is enabled by default when activating logical volumes.
# Set to 0 to disable monitoring or use the --ignoremonitoring option.
monitoring = 1
# When pvmove or lvconvert must wait for the kernel to finish
# synchronising or merging data, they check and report progress
# at intervals of this number of seconds. The default is 15 seconds.
# If this is set to 0 and there is only one thing to wait for, there
# are no progress reports, but the process is awoken immediately once the
# operation is complete.
polling_interval = 15
# Report settings.
# report {
# Align columns on report output.
# aligned=1
# When buffered reporting is used, the report's content is appended
# incrementally to include each object being reported until the report
# is flushed to output which normally happens at the end of command
# execution. Otherwise, if buffering is not used, each object is
# reported as soon as its processing is finished.
# buffered=1
# Show headings for columns on report.
# headings=1
# A separator to use on report after each field.
# separator=" "
# Use a field name prefix for each field reported.
# prefixes=0
# Quote field values when using field name prefixes.
# quoted=1
# Output each column as a row. If set, this also implies report/prefixes=1.
# columns_as_rows=0
# Comma separated list of columns to sort by when reporting 'lvm devtypes' command.
# See 'lvm devtypes -o help' for the list of possible fields.
# devtypes_sort="devtype_name"
# Comma separated list of columns to report for 'lvm devtypes' command.
# See 'lvm devtypes -o help' for the list of possible fields.
# devtypes_cols="devtype_name,devtype_max_partitions,devtype_description"
# Comma separated list of columns to report for 'lvm devtypes' command in verbose mode.
# See 'lvm devtypes -o help' for the list of possible fields.
# devtypes_cols_verbose="devtype_name,devtype_max_partitions,devtype_description"
# Comma separated list of columns to sort by when reporting 'lvs' command.
# See 'lvs -o help' for the list of possible fields.
# lvs_sort="vg_name,lv_name"
# Comma separated list of columns to report for 'lvs' command.
# See 'lvs -o help' for the list of possible fields.
# lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
# Comma separated list of columns to report for 'lvs' command in verbose mode.
# See 'lvs -o help' for the list of possible fields.
# lvs_cols_verbose="lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv"
# Comma separated list of columns to sort by when reporting 'vgs' command.
# See 'vgs -o help' for the list of possible fields.
# vgs_sort="vg_name"
# Comma separated list of columns to report for 'vgs' command.
# See 'vgs -o help' for the list of possible fields.
# vgs_cols="vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
# Comma separated list of columns to report for 'vgs' command in verbose mode.
# See 'vgs -o help' for the list of possible fields.
# vgs_cols_verbose="vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
# Comma separated list of columns to sort by when reporting 'pvs' command.
# See 'pvs -o help' for the list of possible fields.
# pvs_sort="pv_name"
# Comma separated list of columns to report for 'pvs' command.
# See 'pvs -o help' for the list of possible fields.
# pvs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
# Comma separated list of columns to report for 'pvs' command in verbose mode.
# See 'pvs -o help' for the list of possible fields.
# pvs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
# Comma separated list of columns to sort by when reporting 'lvs --segments' command.
# See 'lvs --segments -o help' for the list of possible fields.
# segs_sort="vg_name,lv_name,seg_start"
# Comma separated list of columns to report for 'lvs --segments' command.
# See 'lvs --segments -o help' for the list of possible fields.
# segs_cols="lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
# Comma separated list of columns to report for 'lvs --segments' command in verbose mode.
# See 'lvs --segments -o help' for the list of possible fields.
# segs_cols_verbose="lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
# Comma separated list of columns to sort by when reporting 'pvs --segments' command.
# See 'pvs --segments -o help' for the list of possible fields.
# pvsegs_sort="pv_name,pvseg_start"
# Comma separated list of columns to report for 'pvs --segments' command.
# See 'pvs --segments -o help' for the list of possible fields.
# pvsegs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
# Comma separated list of columns to report for 'pvs --segments' command in verbose mode.
# See 'pvs --segments -o help' for the list of possible fields.
# pvsegs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
# Advanced section #
# Metadata settings
# metadata {
# Default number of copies of metadata to hold on each PV. 0, 1 or 2.
# You might want to override it from the command line with 0
# when running pvcreate on new PVs which are to be added to large VGs.
# pvmetadatacopies = 1
# Default number of copies of metadata to maintain for each VG.
# If set to a non-zero value, LVM automatically chooses which of
# the available metadata areas to use to achieve the requested
# number of copies of the VG metadata. If you set a value larger
# than the total number of metadata areas available then
# metadata is stored in them all.
# The default value of 0 ("unmanaged") disables this automatic
# management and allows you to control which metadata areas
# are used at the individual PV level using 'pvchange
# --metadataignore y/n'.
# vgmetadatacopies = 0
# Approximate default size of on-disk metadata areas in sectors.
# You should increase this if you have large volume groups or
# you want to retain a large on-disk history of your metadata changes.
# pvmetadatasize = 255
# List of directories holding live copies of text format metadata.
# These directories must not be on logical volumes!
# It's possible to use LVM2 with a couple of directories here,
# preferably on different (non-LV) filesystems, and with no other
# on-disk metadata (pvmetadatacopies = 0). Or this can be in
# addition to on-disk metadata areas.
# The feature was originally added to simplify testing and is not
# supported under low memory situations - the machine could lock up.
# Never edit any files in these directories by hand unless you
# are absolutely sure you know what you are doing! Use
# the supplied toolset to make changes (e.g. vgcfgrestore).
# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
# Event daemon
dmeventd {
# mirror_library is the library used when monitoring a mirror device.
# "libdevmapper-event-lvm2mirror.so" attempts to recover from
# failures. It removes failed devices from a volume group and
# reconfigures a mirror as necessary. If no mirror library is
# provided, mirrors are not monitored through dmeventd.
mirror_library = "libdevmapper-event-lvm2mirror.so"
# snapshot_library is the library used when monitoring a snapshot device.
# "libdevmapper-event-lvm2snapshot.so" monitors the filling of
# snapshots and emits a warning through syslog when the use of
# the snapshot exceeds 80%. The warning is repeated when 85%, 90% and
# 95% of the snapshot is filled.
snapshot_library = "libdevmapper-event-lvm2snapshot.so"
# thin_library is the library used when monitoring a thin device.
# "libdevmapper-event-lvm2thin.so" monitors the filling of
# pool and emits a warning through syslog when the use of
# the pool exceeds 80%. The warning is repeated when 85%, 90% and
# 95% of the pool is filled.
thin_library = "libdevmapper-event-lvm2thin.so"
# Full path of the dmeventd binary.
# executable = "/usr/sbin/dmeventd"
}
Cannot use system rollback segment for non-system tablespace 'TEMP'
Hi everyone!
I encountered this error: "Cannot use system rollback segment for non-system tablespace 'TEMP'".
So this is what I did to check whether the undo segments are online.
SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
SYSTEM ONLINE
UNDO ONLINE
SYSAUX ONLINE
TEMP ONLINE
LARGEDATA ONLINE
LARGEINDEXES ONLINE
MEDIUMDATA ONLINE
MEDIUMINDEXES ONLINE
SMALLDATA ONLINE
SMALLINDEXES ONLINE
XSMALLDATA ONLINE
XSMALLINDEXES ONLINE
XXSMALLTABS ONLINE
USERS ONLINE
CONVTABLES ONLINE
UNDO_02 ONLINE
16 rows selected.
SQL> SELECT tablespace_name, sum((bytes/1024)/1024) free FROM DBA_FREE_SPACE group by tablespace_name;
TABLESPACE_NAME FREE
LARGEDATA 18.3105469
SMALLDATA 10.46875
SYSAUX 106.5625
UNDO_02 67.125
XXSMALLTABS 13.0078125
CONVTABLES 170.039063
MEDIUMDATA 22
USERS 37.265625
SYSTEM 55.875
LARGEINDEXES 30.5175781
XSMALLINDEXES 17.34375
UNDO 546.9375
MEDIUMINDEXES 33.25
SMALLINDEXES 31.015625
XSMALLDATA 23.6328125
15 rows selected.
SQL> select file#,status from v$datafile;
FILE# STATUS
1 SYSTEM
2 ONLINE
3 ONLINE
4 ONLINE
5 ONLINE
6 ONLINE
7 ONLINE
8 ONLINE
9 ONLINE
10 ONLINE
11 ONLINE
12 ONLINE
13 ONLINE
14 ONLINE
15 ONLINE
15 rows selected.
SQL> select segment_name, tablespace_name, initial_extent,status
2 from dba_rollback_segs;
SEGMENT_NAME   TABLESPACE_NAME  INITIAL_EXTENT  STATUS
SYSTEM         SYSTEM           102400          ONLINE
_SYSSMU1$      UNDO             131072          OFFLINE
_SYSSMU2$      UNDO             131072          OFFLINE
_SYSSMU3$      UNDO             131072          OFFLINE
_SYSSMU4$      UNDO             131072          OFFLINE
_SYSSMU5$      UNDO             131072          OFFLINE
_SYSSMU6$      UNDO             131072          OFFLINE
_SYSSMU7$      UNDO             131072          OFFLINE
_SYSSMU8$      UNDO             131072          OFFLINE
_SYSSMU9$      UNDO             131072          OFFLINE
_SYSSMU10$     UNDO             131072          OFFLINE
_SYSSMU11$     UNDO_02          131072          OFFLINE
_SYSSMU12$     UNDO_02          131072          OFFLINE
_SYSSMU13$     UNDO_02          131072          OFFLINE
_SYSSMU14$     UNDO_02          131072          OFFLINE
_SYSSMU15$     UNDO_02          131072          OFFLINE
_SYSSMU16$     UNDO_02          131072          OFFLINE
_SYSSMU17$     UNDO_02          131072          OFFLINE
_SYSSMU18$     UNDO_02          131072          OFFLINE
_SYSSMU19$     UNDO_02          131072          OFFLINE
_SYSSMU20$     UNDO_02          131072          OFFLINE
_SYSSMU21$     UNDO_02          131072          OFFLINE
22 rows selected.

How should I be bringing them online?
I tried this, but it didn't work for me:
SQL> alter rollback segment _SYSSMU1$ online;
alter rollback segment _SYSSMU1$ online
ERROR at line 1:
ORA-00911: invalid character
SQL> alter rollback segment '_SYSSMU1$' online;
alter rollback segment '_SYSSMU1$' online
ERROR at line 1:
ORA-02245: invalid ROLLBACK SEGMENT name
SQL> alter rollback segment _SYSSMU21$ online;
alter rollback segment _SYSSMU21$ online
ERROR at line 1:
ORA-00911: invalid character
SQL> alter rollback segment SYSSMU21$ online;
alter rollback segment SYSSMU21$ online
ERROR at line 1:
ORA-01534: rollback segment 'SYSSMU21$' doesn't exist
SQL> alter rollback segment '_SYSSMU21$' online;
alter rollback segment '_SYSSMU21$' online
ERROR at line 1:
ORA-02245: invalid ROLLBACK SEGMENT name
SQL> alter rollback segment "_SYSSMU21$" online;
alter rollback segment "_SYSSMU21$" online
ERROR at line 1:
ORA-30017: segment '_SYSSMU21$' is not supported in MANUAL Undo Management mode
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT=AUTO SCOPE=SPFILE;
System altered.

Should I be bringing every segment online separately? Please guide me.
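As the transcript above shows, names like _SYSSMU1$ begin with an underscore, so SQL*Plus rejects them unless they are double-quoted (ORA-00911 unquoted, ORA-02245 with single quotes). A small sketch that builds the double-quoted ALTER statements for a list of segment names (e.g. as fetched from dba_rollback_segs) looks like this. It is illustrative only; with UNDO_MANAGEMENT=AUTO the _SYSSMU segments are managed by SMON, and the quoting is the point here.

```python
# Sketch (not Oracle tooling): build correctly double-quoted ALTER ROLLBACK
# SEGMENT statements for system-managed undo segment names, skipping the
# SYSTEM rollback segment, which stays as-is.

def online_statements(segment_names):
    return [f'alter rollback segment "{name}" online;'
            for name in segment_names
            if name != "SYSTEM"]

stmts = online_statements(["SYSTEM", "_SYSSMU11$", "_SYSSMU12$"])
for s in stmts:
    print(s)
# alter rollback segment "_SYSSMU11$" online;
# alter rollback segment "_SYSSMU12$" online;
```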
Nith
Edited by: user645399 on Feb 23, 2011 2:52 PM

SQL> select segment_name, tablespace_name, initial_extent,status
2 from dba_rollback_segs;
SEGMENT_NAME   TABLESPACE_NAME  INITIAL_EXTENT  STATUS
SYSTEM         SYSTEM           102400          ONLINE
_SYSSMU1$      UNDO             131072          ONLINE
_SYSSMU2$      UNDO             131072          ONLINE
_SYSSMU3$      UNDO             131072          ONLINE
_SYSSMU4$      UNDO             131072          ONLINE
_SYSSMU5$      UNDO             131072          ONLINE
_SYSSMU6$      UNDO             131072          ONLINE
_SYSSMU7$      UNDO             131072          ONLINE
_SYSSMU8$      UNDO             131072          ONLINE
_SYSSMU9$      UNDO             131072          ONLINE
_SYSSMU10$     UNDO             131072          ONLINE
_SYSSMU11$     UNDO_02          131072          OFFLINE
_SYSSMU12$     UNDO_02          131072          OFFLINE
_SYSSMU13$     UNDO_02          131072          OFFLINE
_SYSSMU14$     UNDO_02          131072          OFFLINE
_SYSSMU15$     UNDO_02          131072          OFFLINE
_SYSSMU16$     UNDO_02          131072          OFFLINE
_SYSSMU17$     UNDO_02          131072          OFFLINE
_SYSSMU18$     UNDO_02          131072          OFFLINE
_SYSSMU19$     UNDO_02          131072          OFFLINE
_SYSSMU20$     UNDO_02          131072          OFFLINE
_SYSSMU21$     UNDO_02          131072          OFFLINE

Still UNDO_02's segments are offline. -
I have a UNIX script through which I call a procedure. In that procedure I have two statements:
EXECUTE IMMEDIATE 'ALTER ROLLBACK SEGMENT R01 SHRINK';
EXECUTE IMMEDIATE 'SET TRANSACTION USE ROLLBACK SEGMENT R01';
The procedure gives me the error "insufficient privileges". But when I execute these statements in SQL*Plus (with the same username and password) they run fine.
Any help??

Which exact Oracle error message do you get?
What is your Oracle version?
If you get
"ORA-01650 unable to extend rollback segment ... by ... in tablespace ...", you could extend one of your rollback segment tablespace datafiles with
ALTER DATABASE DATAFILE .... RESIZE ..., or add a new datafile to the tablespace:
ORA-01650 unable to extend rollback segment string by string in tablespace string
Cause: Failed to allocate an extent for rollback segment in tablespace.
Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more files to the tablespace indicated.
Message was edited by:
Pierre Forstmann -
The OPTIMAL storage parameter in the rollback segment
Hi,
in metalink note Subject: ORA-01555 "Snapshot too old" in Very Large Databases (if using Rollback Segments)
Doc ID: 45895.1
I see :
Solution 1d:
Don't use the OPTIMAL storage parameter in the rollback segment. But how do I avoid using the OPTIMAL storage parameter in the rollback segment?
Thank you.

If you are using undo_management=AUTO (in 9i or higher) then there is no "OPTIMAL" setting.
"OPTIMAL" is when using Manual Undo Management with Rollback Segments created by the DBA.
If you are using Manual Undo Management, check your Rollback Segments. The Optimal size would be visible in V$ROLLSTAT.
select a.segment_name a, b.xacts b, b.waits c, b.shrinks e, b.wraps f,
b.extends g, b.rssize/1024/1024 h, b.optsize/1024/1024 i,
b.hwmsize/1024/1024 j, b.aveactive/1024/1024 k , b.status l
-- from v$rollname a, v$rollstat b
from dba_rollback_segs a, v$rollstat b
where a.segment_id = b.usn(+)
and b.status in ('ONLINE', 'PENDING OFFLINE','FULL')
order by a.segment_name
/To unset the Optimal setting you can run
alter rollback segment SEGMENT_NAME storage (optimal NULL);Note that if you unset OPTIMAL, then your Rollback Segments will remain at very large sizes if and when they grow running large transactions ("OPTIMAL" is the pre-9i method for Oracle to automatically shrink Rollback Segments). You can manually SHRINK or DROP and CREATE Rollback Segments then. -
Quick Cleanup of Temporary Segments and also for Rollback Segment
Do you know a tip that allows a quick cleanup of temporary segments and also of rollback segments? Sometimes we need to wait about 2 hours for these tablespaces to clean up.
We have to take action immediately to prevent other sessions from failing as well, and we cannot bounce the Oracle instance. So, how do we get rid of this temporary segment as quickly as possible?
Note: sometimes the SHRINK command does not clean up the rollback segment, and I don't know why (e.g. ALTER ROLLBACK SEGMENT R01 SHRINK;).
Thank you.

To my knowledge the rollback tablespace cannot be cleaned up on demand, because the rollback segments hold uncommitted transactions. The rollback tablespace cleans itself up when pending transactions commit.
[email protected]
Joel Pérez -
Enough RG1 balance is not available to issue the Goods
Hi All
I am getting this error when I run the "India - Excise Invoice Generation" request by entering the Delivery Id. The log file of the request shows this error: ORA-20199: Enough RG1 Balance is not available to issue the Goods. But when I check, the on-hand quantity exists. Quantity is available even after reservation and shipping are done.
Can anyone please tell me the reason and resolution if you had faced this error before.
Thanks in Advance
Prem.

Process steps are as below.
Material master with base unit "SHT-Sheet" and maintained alternative unit "M2- Square Meter" along with conversion factor 4930 M2 = 100 SHT. Incoming quantity inspection type (01) is also active on the material.
Purchase Order quantity in alternative UnM like 1500 M2.
Purchasing Info Record maintained with alternative pricing Unit “M2”.
Good Receipt Quantity posted in the alternative Unit of Measure with 1500 M2.
Usage decision recorded with Base unit Quantity 30.426 SHTs
Error - PU GR blocked stock exceeded by 0.002 SHTs : 2847 1000 M7 22
The cause of the error which I observed is below.
On usage decision, the system calculates the stock posting quantity: "30.426" / conversion factor "(100 / 4930)" = 1500.002, and this quantity "1500.002" exceeds the blocked stock quantity of 1500 posted into blocked stock at the time of GR via movement type 103. -
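The rounding effect described in the thread above can be reproduced numerically: converting the base-unit quantity back to the alternative unit with the 4930 M2 = 100 SHT factor overshoots the originally posted 1500 M2 by 0.002. This is an illustrative calculation using the quantities from the post, not SAP logic.

```python
# Sketch of the unit-of-measure rounding effect from the post above:
# 1500 M2 converts to SHT, is rounded to 3 decimals, and converting back
# yields slightly more than 1500 M2, exceeding the blocked stock.

posted_m2 = 1500.0
sht = round(posted_m2 * 100 / 4930, 3)    # GR quantity in base unit: 30.426
back_to_m2 = round(sht * 4930 / 100, 3)   # usage decision converts back
print(sht)                                 # 30.426
print(back_to_m2)                          # 1500.002
print(back_to_m2 - posted_m2 > 0)          # True: exceeds blocked stock
```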
Rollback segment issue in oracle8i
We have a 33GB rollback segment; it is not coming down even after the database is bounced.
Even when no transactions are happening it shows the same value and does not come down.
My question is: after a transaction is over, is the rollback segment deallocated or not?
Please explain in detail how to deallocate those unused rollback segments. We don't have any distributed transactions.
Can you tell me if there is any formula to set the OPTIMAL value for rollback segments?
Please tell me the dependencies.
I have some more queries:
1. If we shrink the rollback segment, will it grow later on when transaction volume is high?
2. If we shrink through OEM, Oracle automatically shrinks that particular RBS.
My doubt here is: on what basis is Oracle shrinking that RBS?
Please give info in detail...