File system description
Hi all,
Where can I find a description of the file systems listed below? I want to know what each of them is used for.
MOUNT POINT TYPE DEVICE SIZE INUSE FREE USE%
/sw internal /dev/md0 991MB 793MB 198MB 80%
/swstore internal /dev/md1 991MB 718MB 273MB 72%
/state internal /dev/md2 5951MB 195MB 5756MB 3%
/local/local1 SYSFS /dev/md4 14878MB 483MB 14395MB 3%
/vbspace GUEST /dev/data1/vbsp 230128MB 128MB 230000MB 0%
.../local1/spool PRINTSPOOL /dev/data1/spool 991MB 32MB 959MB 3%
/obj1 CONTENT /dev/data1/obj 121015MB 355MB 120660MB 0%
/dre1 CONTENT /dev/data1/dre 79354MB 53802MB 25552MB 67%
/ackq1 internal /dev/data1/ackq 1189MB 0MB 1189MB 0%
/plz1 internal /dev/data1/plz 2379MB 26MB 2353MB 1%
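The USE% column in this output appears to be simply INUSE/SIZE with the fractional part dropped; a quick sketch (Python, for illustration, with values taken from the table above):

```python
def use_pct(size_mb, inuse_mb):
    """USE% as the table reports it: integer-truncated, not rounded."""
    return int(100 * inuse_mb / size_mb)

# Spot-check against the table above:
print(use_pct(991, 793))      # /sw  → 80
print(use_pct(79354, 53802))  # /dre1 → 67
```

Truncation rather than rounding explains why /dre1 shows 67% even though 53802/79354 rounds to 68%.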
Jan
Hi Michael,
I manage a WAAS network with a WAAS core WAVE7541 in the data center, a Central Manager
installed on a WAVE 594 running version 4.4.3, and remote WAAS modules (SM-SRE-910, version
4.4.3.4) integrated into a Cisco2951/K9 router.
On the remote module I have:
#show disk detail
Physical disk information:
disk00: Present 22DCP07VT (h00 c00 i00 l00 - Int DAS-SATA)
476937MB(465.8GB)
disk01: Present 22HZT0SUT (h01 c00 i00 l00 - Int DAS-SATA)
476937MB(465.8GB)
Mounted file systems:
MOUNT POINT TYPE DEVICE SIZE INUSE FREE USE%
/sw internal /dev/md0 991MB 698MB 293MB 70%
/swstore internal /dev/md1 991MB 304MB 687MB 30%
/state internal /dev/md2 3967MB 127MB 3840MB 3%
/local/local1 SYSFS /dev/md4 14878MB 276MB 14602MB 1%
.../local1/spool PRINTSPOOL /dev/data1/spool 991MB 32MB 959MB 3%
/obj1 CONTENT /dev/data1/obj 121015MB 11712MB 109303MB 9%
/dre1 CONTENT /dev/data1/dre 119031MB 116977MB 2054MB 98%
/ackq1 internal /dev/data1/ackq 1189MB 0MB 1189MB 0%
/plz1 internal /dev/data1/plz 2379MB 1MB 2378MB 0%
Software RAID devices:
DEVICE NAME TYPE STATUS PHYSICAL DEVICES AND STATUS
/dev/md0 RAID-1 NORMAL OPERATION disk00/00[GOOD] disk01/00[GOOD]
/dev/md1 RAID-1 NORMAL OPERATION disk00/01[GOOD] disk01/01[GOOD]
/dev/md2 RAID-1 NORMAL OPERATION disk00/02[GOOD] disk01/02[GOOD]
/dev/md3 RAID-1 NORMAL OPERATION disk00/03[GOOD] disk01/03[GOOD]
/dev/md4 RAID-1 NORMAL OPERATION disk00/04[GOOD] disk01/04[GOOD]
/dev/md5 RAID-1 NORMAL OPERATION disk00/05[GOOD] disk01/05[GOOD]
Disk encryption feature is disabled.
Could you give me details on the size, use, and contents of the following partitions:
/obj1, which is the CIFS object cache, and /dre1, which is used for the DRE byte-level
cache, as you indicated?
In particular, when I connect to the device through the web interface,
under the CifsAo->Monitoring->Cache tab, I see
Maximum cache disk size: 95.75391 GB
Is this value contained within the 121015 MB of /obj1?
In fact, I have configured a Preposition Directive in the Central Manager with Total Size as % of Cache Volume = 20.
What is the remaining content in /obj1?
And is the redundancy library that the WAAS device accesses to compress traffic
contained in /dre1, and also in /plz1?
Can you please clarify?
Thanks a lot in advance
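As a sanity check on the units in the question above (a quick sketch; the difference is just the arithmetic gap, not an explanation of what fills it):

```python
obj1_mb = 121015        # size of /obj1 reported by 'show disk detail'
cache_gb = 95.75391     # 'Maximum cache disk size' from CifsAo->Monitoring->Cache

obj1_gb = obj1_mb / 1024             # MB -> GB in binary units
print(round(obj1_gb, 2))             # → 118.18
print(round(obj1_gb - cache_gb, 2))  # → 22.42 (GB not accounted for by the cache maximum)
```

So the 95.75 GB cache maximum does fit within the 121015 MB of /obj1, with roughly 22 GB left over; what occupies that remainder (possibly cache metadata or reserved space) is exactly what the question asks Cisco to clarify.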
Similar Messages
-
Resource Description Property in File System Repository
Hi Experts,
I have a File System Repository that contains files with cryptic file names. For display in the portal I'd like to add descriptions to those files. I searched SDN and found that the description property is not supported for File System Repositories. Is that really true?
So what could be an alternative if I have an existing file system directory and want to add a description property?
Best Regards
Manuel
Hi Manuel,
the alternative is to integrate the file share via a CM repository in FSDB mode, where folders and documents are stored in the file system, but metadata is stored in the database.
See more info and the restrictions here: http://help.sap.com/saphelp_nw04/helpdata/en/62/468698a8e611d5993600508b6b8b11/frameset.htm
Hope this helps,
Robert -
How to insert a JPG file from file system to Oracle 10g?
I have developed a schema that stores photos as BLOBs, along with a text description as a CLOB, the original filename, and the file size.
I also use ctxsys.context to index TEXT_DESCRIPTION in order to perform Oracle Text searches, and it works.
I would like to insert a JPG file, say C:\MYPHOTO\Photo1.jpg, as a new record. How can I do this in SQL*Plus and/or SQL*Loader?
How can I retrieve PHOTO_IMAGE back to the file system using SQL*Plus and/or the DOS command line?
See the following script:
create user myphoto identified by myphoto;
grant connect, resource, ctxapp to myphoto;
connect myphoto/myphoto@orcl;
PROMPT Creating Table PHOTOS
CREATE TABLE PHOTOS
(PHOTO_ID VARCHAR2(15) NOT NULL,
PHOTO_IMAGE BLOB,
TEXT_DESCRIPTION CLOB,
FILENAME VARCHAR2(50),
FILE_SIZE NUMBER NOT NULL,
CONSTRAINT PK_PHOTOS PRIMARY KEY (PHOTO_ID));
create index idx_photos_text_desc on
PHOTOS(TEXT_DESCRIPTION) indextype is ctxsys.context;
INSERT INTO PHOTOS VALUES
('P00000000000001', empty_blob(), empty_clob(),
'SCGP1.JPG',100);
INSERT INTO PHOTOS VALUES
('P00000000000002', empty_blob(), 'Cold Play with me at the concert in Melbourne 2005',
'COLDPLAY1.JPG',200);
INSERT INTO PHOTOS VALUES
('P00000000000003', empty_blob(), 'My parents in Melbourne 2001',
'COLDPLAY1.JPG',200);
EXEC CTX_DDL.SYNC_INDEX('idx_photos_text_desc');
SELECT PHOTO_ID ,TEXT_DESCRIPTION
FROM PHOTOS;
SELECT score(1),PHOTO_ID ,TEXT_DESCRIPTION
FROM PHOTOS
WHERE CONTAINS(TEXT_DESCRIPTION,'parents',1)> 0
ORDER BY score(1) DESC;
SELECT score(1),PHOTO_ID ,TEXT_DESCRIPTION
FROM PHOTOS
WHERE CONTAINS(TEXT_DESCRIPTION,'cold play',1)> 0
ORDER BY score(1) DESC;
SELECT score(1),score(2), PHOTO_ID ,TEXT_DESCRIPTION
FROM photos
WHERE CONTAINS(TEXT_DESCRIPTION,'Melbourne',1)> 0
AND CONTAINS(TEXT_DESCRIPTION,'2005',2)> 0
ORDER BY score(1) DESC;
Hi,
You can use the following to insert an image:
create table imagetab(id number primary key,imagfile blob, fcol varchar2(10));
create or replace directory imagefiles as 'c:\';
declare
v_bfile BFILE;
v_blob BLOB;
begin
insert into imagetab (id,imagfile,fcol)
values (3,empty_blob(),'BINARY')
return imagfile into v_blob;
v_bfile := BFILENAME ('IMAGEFILES', 'MyImage.JPG');
Dbms_Lob.fileopen (v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile (v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);
commit;
end;
/ -
Hi,
I get this error message when I try to run a 'process update' on a dimension; sometimes it works, but sometimes it fails with the error below:
Description: File system error: The record ID is incorrect. Physical file: . Logical file: .
My environment: Windows Server 2008 R2 + SQL Server 2008 R2.
Note: this issue can be resolved by a full cube process, but I can't do that every time it fails. I just want to know what is happening in the SSAS backend; it has blocked me for nearly two weeks without any findings, and I'd really appreciate any help on this.
I have done some research with Google. I think the root cause here is not the TopCount function (that may be a problem with SQL Server 2005), and also not the 4 GB size limitation of the .astore file (I checked the size of my cube under the 'Data\XXX.db\' folder, which is only a few MB).
Thanks,
Tom
Hi Tom,
Here is a Microsoft fix:
FIX: "File system error" occurs when you try to run a process update operation on a dimension in Microsoft SQL Server 2008 Analysis Services and in Microsoft SQL Server 2008 R2 Analysis Services
In your scenario, to fix this issue, please apply the latest Cumulative Update and Service Pack for your SQL Server version. There are also two workarounds in that fix; please try them and check whether they work.
http://support.microsoft.com/default.aspx?scid=kb;en-US;2276495
Regards,
Charlie Liao
TechNet Community Support -
File System error while restoring a backup from one Analysis Services server to another
Hi
I have restored an Analysis Services database backup from server 1 (SQL Server 2008) to server 2 (SQL Server 2008) and it is working fine.
Now I am trying to restore another database from Analysis Services server 3 (SQL 2005) to server 2 (SQL 2008), but it gives this error:
File System error occurred while opening the file G:\ProgramFiles\MSAS10.MSSQLSERVER\OLAP\Backup\Workordermodule.0.db\WomDW1.7.00 ..etc
Hi Maverick,
According to your description, you are experiencing the error when restoring the SSAS 2005 database on SSAS 2008 server, right?
In your scenario, how did you back up your SSAS 2005 database? Please ensure that your backup and restore steps are correct. Here is a blog that describes, step by step, how to migrate a cube from SQL Server Analysis Services 2005 to SQL Server Analysis Services 2008; please refer to the link below.
http://blogs.technet.com/b/mdegre/archive/2010/03/31/migrating-a-cube-in-sql-server-analysis-services-2005-to-sql-server-analysis-services-2008.aspx
Alternatively, you can import the SQL 2005 AS database into a SQL 2008 BIDS project, then fix the AMO warnings and deploy the database to the SQL 2008 server.
http://technet.microsoft.com/en-in/library/ms365361(v=sql.100).aspx
Regards,
Charlie Liao
TechNet Community Support -
Not able to run adadmin after installing the R12 upgrade file system
Sawwan,
We are doing a single-node 11i apps to R12 upgrade; before this we upgraded the database from 9i to 10.2.0.4.
I successfully installed the R12 upgrade file system without any errors. The
new installation creates APPL_TOP, COMMON_TOP, INST_TOP and a 10g Oracle Home.
I moved the new environment file into .bash_profile:
mv .bash_profile .bash_profile_11.5.10
I set the R12 environment. Whenever I try to run adadmin, the SYSTEM/manager password is not accepted and it gives this error:
[applupg@peabody appl]$ adadmin
Copyright (c) 2002 Oracle Corporation
Redwood Shores, California, USA
Oracle Applications AD Administration
Version 12.0.0
NOTE: You may not use this utility for custom development
unless you have written permission from Oracle Corporation.
Your default directory is '/u01/R12/apps/apps_st/appl'.
Is this the correct APPL_TOP [Yes] ?
AD Administration records your AD Administration session in a text file
you specify. Enter your AD Administration log file name or press [Return]
to accept the default file name shown in brackets.
Filename [adadmin.log] :
************* Start of AD Administration session *************
AD Administration version: 12.0.0
AD Administration started at: Sat Apr 18 2009 06:12:52
APPL_TOP is set to /u01/R12/apps/apps_st/appl
Backing up restart files, if any......Done.
Your previous AD Administration session did not run to completion.
Do you wish to continue with your previous AD Administration session [Yes] ?
You are about to use or modify Oracle Applications product tables
in your ORACLE database 'upg'
using ORACLE executables in '/u01/R12/apps/tech_st/10.1.2'.
Is this the correct database [Yes] ?
AD Administration needs the password for your 'SYSTEM' ORACLE schema
in order to determine your installation configuration.
Enter the password for your 'SYSTEM' ORACLE schema: manager
...Unable to connect.
AD Administration error:
The following ORACLE error:
ORA-12541: TNS:no listener
occurred while executing the SQL statement:
CONNECT SYSTEM/*****
AD Administration error:
Unable to connect to 'SYSTEM'; password may be invalid.
AD Administration needs the password for your 'SYSTEM' ORACLE schema
in order to determine your installation configuration.
Enter the password for your 'SYSTEM' ORACLE schema:
Enter the password for your 'SYSTEM' ORACLE schema:
Before running R12 adadmin, is it necessary for the 11i apps services to be up or down?
Edited by: HumanDBA on May 5, 2009 12:26 PM
Sawwan,
As the oracle user I am able to see the TNS_ADMIN directory:
[oraupg@peabody upg_peabody]$ echo $TNS_ADMIN
/stage/10gsoftware/network/admin/upg_peabody
[oraupg@peabody upg_peabody]$
After installing the R12 upgrade file system, I ran tnsping:
[applupg@peabody appl]$ tnsping upg
TNS Ping Utility for Linux: Version 10.1.0.5.0 - Production on 18-APR-2009 06:26:49
Copyright (c) 1997, 2003, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=peabody)(PORT=1526)) (CONNECT_DATA=(SID=upg)))
TNS-12541: TNS:no listener
[applupg@peabody appl]$
[applupg@peabody appl]$ sqlplus apps/xxxx@upg
SQL*Plus: Release 10.1.0.5.0 - Production on Sat Apr 18 06:29:45 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
ERROR:
ORA-12541: TNS:no listener
Enter user-name:
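Both failures above (TNS-12541 from tnsping and ORA-12541 from sqlplus and adadmin) mean nothing is accepting TCP connections on the listener port, so the database listener needs to be started before adadmin can connect; the password itself is never checked because the connection fails first. A quick reachability check, sketched here in Python (host and port taken from the tnsping output above):

```python
import socket

def listener_reachable(host, port, timeout=2.0):
    """True if something is accepting TCP connections on host:port
    (roughly the TCP step that tnsping's connection attempt relies on)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 1526 is the listener port from the DESCRIPTION shown by tnsping.
print(listener_reachable("peabody", 1526))
```

Once the listener is up (started from the 10g Oracle Home that serves the upgraded database), this returns True and adadmin should accept the SYSTEM password.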
========================================
Sawwan
After installing, I am trying to apply the R12.AD.B patch. Do we need to apply this patch to the 11i file system or the new R12 file system? With the 11i file system I have no issue, but whenever I set the R12 file system environment file I am unable to connect to the database. I changed to new port numbers in the R12 file system; could that be causing the issue?
Edited by: HumanDBA on May 5, 2009 12:37 PM -
Lucreate - Cannot make file systems for boot environment
Hello!
I'm trying to use Live Upgrade to upgrade one of "my" SPARC servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in [Infodoc 72099|http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1] and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 SPARC DVD ISO. I then did:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <d100> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <S10U6_20081207>.
Source boot environment is <d100>.
Creating boot environment <S10U6_20081207>.
Creating file systems on boot environment <S10U6_20081207>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
Mounting file systems for boot environment <S10U6_20081207>.
Calculating required sizes of file systems for boot environment <S10U6_20081207>.
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
So the problem is:
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
Well - why's that?
I can do a "newfs /dev/md/dsk/d200" just fine.
When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
/bin/nawk: can't open file /etc/lu/ICF.2
source line number 1
Boot environment <S10U6_20081207> deleted.
I get this error consistently (I have run the lucreate many times now).
lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
Would anyone maybe have an idea about what's broken there?
--($ ~)-- LC_ALL=C metastat
d200: Mirror
Submirror 0: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d20: Submirror of d200
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes
d100: Mirror
Submirror 0: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d10: Submirror of d100
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes
d201: Mirror
Submirror 0: d21
State: Okay
Submirror 1: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2097414 blocks (1.0 GB)
d21: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes
d11: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Okay Yes
hsp001: is empty
Device Relocation Information:
Device Reloc Device ID
c1t1d0 Yes id1,sd@THITACHI_DK32EJ-36NC_____434N5641
c1t0d0 Yes id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
--($ ~)-- /bin/df -k | grep md
/dev/md/dsk/d100 15490539 10772770 4562864 71% /
Thanks,
Michael
Hello.
(sys01)root# devfsadm -Cv
(sys01)root#
To be on the safe side, I even rebooted after having run devfsadm.
--($ ~)-- sudo env LC_ALL=C LANG=C lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
d100 yes yes yes no -
--($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
boot environment name: d100
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/md/dsk/d100 ufs 16106660352 / logging
/dev/md/dsk/d201 swap 1073875968 - -
In the rebooted system, I re-did the original lucreate:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Copying.
Excellent! It now works!
Thanks a lot,
Michael -
OC 11gR1 Update 3 doesn't show ZFS file systems created in a brownfield zone
The subject is a pretty concise description here. I have several brownfield Solaris 10U10 containers running on M5000s, and I have delegated three zpools to each container for use by Oracle. Below is the relevant output from zonecfg export for one of these containers. They were all built in the same manner, then placed under management by OC. (I wish I'd been able to build them as greenfield with Ops Center, but there just wasn't enough time to learn how to configure Ops Center the way I needed to use it.)
set name=Oracle-DB-Instance
set type=string
set value="Oracle e-Business Suite PREPROD"
end
add dataset
set name=PREPRODredoPOOL
end
add dataset
set name=PREPRODarchPOOL
end
add dataset
set name=PREPRODdataPOOL
end
The problem is, none of the file systems built on these delegated pools in the container appear in the Ops Center File System Utilization charts. Does anyone have a suggestion for how to get OC to monitor the file systems in the zone?
Here's the output from zfs list within the zone described by the zonecfg output above:
[root@acdpreprod ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
PREPRODarchPOOL 8.91G 49.7G 31K none
PREPRODarchPOOL/d05 8.91G 49.7G 8.91G /d05
PREPRODdataPOOL 807G 364G 31K none
PREPRODdataPOOL/d02 13.4G 36.6G 13.4G /d02
PREPRODdataPOOL/d03 782G 364G 782G /d03
PREPRODdataPOOL/d06 11.4G 88.6G 11.4G /d06
PREPRODredoPOOL 7.82G 3.93G 31K none
PREPRODredoPOOL/d04 7.82G 3.93G 7.82G /d04
None of the file systems in the delegated datasets appear in Ops Center for this zone. Are there any suggestions for how I can correct this?
Do you mean adopt the zone? That requires the zone to be halted, and it also says something about copying all file systems to the pool created for the zone. Of the 12 zones I have (four on each of three M5000s), seven are already in "production" status, and four of those seven now support 7x24 worldwide operations. A do-over is not an option here.
-
How to view crawled documents of file system in the browser using sharepoint search
Hi,
I think this should be pretty obvious; however, somehow I am not able to do it. Here is what I am working on:
I created a folder on the D:\ drive of the SharePoint server and added a few Word documents there. I created a new content source in my Search Service Application, configured it to crawl that folder from the file system, and ran a full crawl. All the documents were crawled successfully, and I can see them on the search page. Now, the problem is:
How can I open one of these documents directly in the browser? The path I get is the folder path on the server. If this isn't possible, then I think there is little use for this feature, since I just see the title and a short description of the document.
But I think it should be possible; I just don't know how to do it.
Any ideas will be highly appreciated.
Hi Mohan,
According to your description, my understanding is that you want to open the documents in file shares in the browser by clicking the link in search results.
As the files are not stored in SharePoint, we need to configure Office Web Apps in order to open the documents in the browser.
Here is a similar thread for you to take a look:
http://social.technet.microsoft.com/Forums/sharepoint/en-US/70541f55-6a66-4d55-8f2c-9f9a356b9654/how-to-open-sharepoint-search-results-from-file-server-using-office-web-apps-server-2013?forum=sharepointsearch
Best regards.
Thanks
Victoria Xia
TechNet Community Support -
Unable to create File System Repository manager
Hello,
I would like to create a File System Repository manager for the following path: I have mapped a network drive "\\main-uni-fs1\pictures$" to the portal machine, where it appears as drive "N:\"; inside it there is a pictures folder that I would like to map to. I tried setting the "Root Directory" value to "N:\" and to "\\main-uni-fs1\pictures$", but I keep receiving an error message in the Repository Component Monitor (see below).
10X,
Roy
Hello Roy,
1> Create a new path that points to the pictures directory on the system. Enter the path like
\\mypc.myorg.com\pictures
under Sys Admin -> Sys Config -> KM -> Content Mgt -> Global Services -> Network Path.
In the user name and password fields, enter your username and password;
the username should be entered like domainname\username.
2> Then create a system for the mypc.myorg.com server under
Sys Admin -> Sys Config -> KM -> Content Mgt -> Global Services -> System Landscape Definitions -> System -> Windows System
(assuming you are accessing a Windows system from a portal running on the Windows platform).
You need to enter a system ID and description. Remember the system ID, since it is needed in two places: user mapping and creating the file system repository.
3> Create a file system repository manager under
Sys Admin -> Sys Config -> KM -> Content Mgt -> Repository Managers -> File System Repository.
The following properties are the most important:
Name; ACL Manager Cache; Security Manager (which you should set to the W2kSecurity manager);
Prefix (the folder name you will identify your repository with); Windows Landscape System (the system ID from step 2); and Root Directory (the network path from step 1, except in this case entered like //mypc.myorg.com/pictures).
4> Then, under the user mapping section, you will find your system (the system ID from step 2); enter your username and password there.
When you first configure your repository manager, you don't need to restart your servlet engine; if you later modify any property, you do need to restart it.
These are the steps for creating the repository.
If this still does not get resolved, then kindly read the limitations of the file system repository manager in the link that I posted in my earlier post.
Thanks and Regards
Pradeep Bhojak -
Every boot requires fsck. Makes file system read-only
Suddenly, when I booted my Arch system, I got a message saying there was corruption in my /home and /var directories. The same message suggested running fsck as a solution. The boot process hung there without entering the emergency login. I booted a live CD, ran fsck on all my ext4 file systems, and “repaired” the problem.
Once I can log into the system, if I want to write a file or do any operation (sudo, for example), I get the message:
sudo: unable to open /var/db/sudo/lola/0: Read-only file system
If I reboot the PC, the cycle begins again: errors, a request for fsck, then a read-only FS.
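For context: ext4 is typically mounted with errors=remount-ro (the mke2fs default written into the superblock), so when the kernel hits an on-disk error it flips the file system to read-only to protect it; the read-only state is a symptom, not the fault itself. A small sketch (the helper name mount_flags is just an illustration) to check the current state of a mount point:

```python
def mount_flags(mount_point, mounts_file="/proc/mounts"):
    """Return the mount option list for a mount point; the first
    entry is always 'rw' or 'ro'. None if the path is not a mount point."""
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == mount_point:
                return fields[3].split(",")
    return None

print(mount_flags("/"))     # first flag is normally 'rw'
print(mount_flags("/var"))  # first flag becomes 'ro' once the kernel remounts it after an error
```

Given the repeated media errors in the log below, though, fsck can only paper over the damage; the drive's SMART data (smartctl -a /dev/sda, from smartmontools) is worth checking, since a disk that keeps growing bad sectors needs replacement.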
Some info that you may find useful:
fstab
/dev/sda3 / ext4 defaults 0 1
/dev/sda5 /var ext4 defaults 0 1
/dev/sda6 swap swap defaults 0 0
/dev/sda7 /home ext4 defaults,user_xattr 0 1
df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 15G 7.2G 6.6G 52% /
dev 1.9G 0 1.9G 0% /dev
run 1.9G 740K 1.9G 1% /run
/dev/sda3 15G 7.2G 6.6G 52% /
tmpfs 1.9G 536K 1.9G 1% /dev/shm
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 80K 1.9G 1% /tmp
/dev/sda7 80G 39G 37G 52% /home
/dev/sda5 6.8G 3.7G 2.8G 58% /var
/dev/sda2 358G 334G 25G 94% /media/windows
systemctl
UNIT LOAD ACTIVE SUB DESCRIPTION
proc-sys-fs-binfmt_misc.automount loaded active running Arbitrary Executable File Formats File System Au
sys-devices-pci...000:00:1b.0-sound-card0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1b.0/sound/card0
sys-devices-pci...0-0000:02:00.0-net-eth0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.
sys-devices-pci...-0000:03:00.0-net-wlan0.device loaded active plugged /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.
sys-devices-pci...-1:0:0:0-block-sda-sda1.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda2.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda3.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda4.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda5.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda6.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...-1:0:0:0-block-sda-sda7.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...1:0:0-1:0:0:0-block-sda.device loaded active plugged TOSHIBA_MK5055GSX
sys-devices-pci...5:0:0-5:0:0:0-block-sr0.device loaded active plugged TSSTcorp_CDDVDW_TS-L633C
sys-devices-platform-serial8250-tty-ttyS0.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS0
sys-devices-platform-serial8250-tty-ttyS1.device loaded active plugged /sys/devices/platform/serial8250/tty/ttyS1
tail -100 /var/log/errors.log
Jan 23 14:43:52 localhost kernel: [ 283.449542] ata2.00: error: { UNC }
Jan 23 14:43:52 localhost kernel: [ 283.451224] end_request: I/O error, dev sda, sector 513442820
Jan 23 14:43:52 localhost kernel: [ 283.451228] Buffer I/O error on device sda2, logical block 63796096
Jan 23 14:43:56 localhost kernel: [ 287.394421] ata2.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x0
Jan 23 14:43:56 localhost kernel: [ 287.394426] ata2.00: irq_stat 0x40000001
Jan 23 14:43:56 localhost kernel: [ 287.394430] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 14:43:56 localhost kernel: [ 287.394435] ata2.00: cmd 60/08:00:00:84:9a/00:00:1e:00:00/40 tag 0 ncq 4096 in
Jan 23 14:43:56 localhost kernel: [ 287.394435] res 41/40:08:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 14:43:56 localhost kernel: [ 287.394438] ata2.00: status: { DRDY ERR }
Jan 23 14:43:56 localhost kernel: [ 287.394440] ata2.00: error: { UNC }
Jan 23 14:43:56 localhost kernel: [ 287.394442] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 14:43:56 localhost kernel: [ 287.394447] ata2.00: cmd 60/00:08:1f:53:4d/01:00:2d:00:00/40 tag 1 ncq 131072 in
Jan 23 14:43:56 localhost kernel: [ 287.394447] res 41/40:02:04:84:9a/00:00:1e:00:00/4e Emask 0x9 (media error)
Jan 23 14:43:56 localhost kernel: [ 287.394449] ata2.00: status: { DRDY ERR }
Jan 23 14:43:56 localhost kernel: [ 287.394451] ata2.00: error: { UNC }
Jan 23 14:43:56 localhost kernel: [ 287.394453] ata2.00: failed command: WRITE FPDMA QUEUED
Jan 23 14:43:56 localhost kernel: [ 287.394458] ata2.00: cmd 61/08:10:7f:6e:b7/00:00:30:00:00/40 tag 2 ncq 4096 out
Jan 23 14:43:56 localhost kernel: [ 287.394458] res 41/40:02:04:84:9a/00:00:1e:00:00/4e Emask 0x9 (media error)
Jan 23 14:43:56 localhost kernel: [ 287.394460] ata2.00: status: { DRDY ERR }
Jan 23 14:43:56 localhost kernel: [ 287.394462] ata2.00: error: { UNC }
Jan 23 14:43:56 localhost kernel: [ 287.394464] ata2.00: failed command: WRITE FPDMA QUEUED
Jan 23 14:43:56 localhost kernel: [ 287.394468] ata2.00: cmd 61/90:18:37:42:31/00:00:35:00:00/40 tag 3 ncq 73728 out
Jan 23 14:43:56 localhost kernel: [ 287.394468] res 41/40:02:04:84:9a/00:00:1e:00:00/4e Emask 0x9 (media error)
Jan 23 14:43:56 localhost kernel: [ 287.394471] ata2.00: status: { DRDY ERR }
Jan 23 14:43:56 localhost kernel: [ 287.394472] ata2.00: error: { UNC }
Jan 23 14:43:56 localhost kernel: [ 287.396122] end_request: I/O error, dev sda, sector 513442820
Jan 23 14:43:56 localhost kernel: [ 287.396125] Buffer I/O error on device sda2, logical block 63796096
Jan 23 14:43:56 localhost kernel: [ 287.396179] end_request: I/O error, dev sda, sector 760042271
Jan 23 14:43:56 localhost kernel: [ 287.396232] end_request: I/O error, dev sda, sector 817327743
Jan 23 14:43:56 localhost kernel: [ 287.396235] Buffer I/O error on device sda7, logical block 1133175
Jan 23 14:43:56 localhost kernel: [ 287.396282] end_request: I/O error, dev sda, sector 892420663
Jan 23 14:44:01 localhost kernel: [ 291.317005] ata2.00: exception Emask 0x0 SAct 0x2 SErr 0x0 action 0x0
Jan 23 14:44:01 localhost kernel: [ 291.317010] ata2.00: irq_stat 0x40000008
Jan 23 14:44:01 localhost kernel: [ 291.317014] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 14:44:01 localhost kernel: [ 291.317019] ata2.00: cmd 60/08:08:00:84:9a/00:00:1e:00:00/40 tag 1 ncq 4096 in
Jan 23 14:44:01 localhost kernel: [ 291.317019] res 41/40:08:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 14:44:01 localhost kernel: [ 291.317022] ata2.00: status: { DRDY ERR }
Jan 23 14:44:01 localhost kernel: [ 291.317024] ata2.00: error: { UNC }
Jan 23 14:44:01 localhost kernel: [ 291.318643] end_request: I/O error, dev sda, sector 513442820
Jan 23 14:44:01 localhost kernel: [ 291.318647] Buffer I/O error on device sda2, logical block 63796096
Jan 23 14:44:01 localhost kernel: [ 291.546162] Aborting journal on device sda7-8.
Jan 23 14:54:20 localhost [ 7.837917] systemd[1]: Socket service syslog.service not loaded, refusing.
Jan 23 14:54:20 localhost [ 7.838515] systemd[1]: Failed to listen on Syslog Socket.
Jan 23 14:54:21 localhost kernel: [ 10.930157] [drm:intel_panel_setup_backlight] *ERROR* Failed to get maximum backlight value
Jan 23 14:54:34 localhost dhcpcd[564]: dhcpcd not running
Jan 23 14:54:35 localhost dhcpcd[649]: dhcpcd not running
Jan 23 14:54:35 localhost dhcpcd[662]: dhcpcd not running
Jan 23 14:58:04 localhost su: pam_unix(su:auth): conversation failed
Jan 23 17:00:38 localhost systemd[1]: Failed to start Netcfg multi-profile daemon.
Jan 23 17:00:39 localhost [ 7.790297] systemd[1]: Socket service syslog.service not loaded, refusing.
Jan 23 17:00:39 localhost [ 7.790891] systemd[1]: Failed to listen on Syslog Socket.
Jan 23 17:00:39 localhost kernel: [ 11.363432] [drm:intel_panel_setup_backlight] *ERROR* Failed to get maximum backlight value
Jan 23 17:00:55 localhost dhcpcd[612]: dhcpcd not running
Jan 23 17:00:55 localhost dhcpcd[618]: dhcpcd not running
Jan 23 17:00:55 localhost dhcpcd[626]: dhcpcd not running
Jan 23 17:01:23 localhost su: pam_unix(su:auth): conversation failed
Jan 23 17:04:55 localhost kernel: [ 274.346016] ata2.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x0
Jan 23 17:04:55 localhost kernel: [ 274.346022] ata2.00: irq_stat 0x40000008
Jan 23 17:04:55 localhost kernel: [ 274.346025] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 17:04:55 localhost kernel: [ 274.346031] ata2.00: cmd 60/70:08:e8:83:9a/00:00:1e:00:00/40 tag 1 ncq 57344 in
Jan 23 17:04:55 localhost kernel: [ 274.346031] res 41/40:70:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 17:04:55 localhost kernel: [ 274.346034] ata2.00: status: { DRDY ERR }
Jan 23 17:04:55 localhost kernel: [ 274.346035] ata2.00: error: { UNC }
Jan 23 17:04:55 localhost kernel: [ 274.348151] end_request: I/O error, dev sda, sector 513442820
Jan 23 17:04:55 localhost kernel: [ 274.348154] Buffer I/O error on device sda2, logical block 63796096
Jan 23 17:04:55 localhost kernel: [ 274.348159] Buffer I/O error on device sda2, logical block 63796097
Jan 23 17:04:55 localhost kernel: [ 274.348162] Buffer I/O error on device sda2, logical block 63796098
Jan 23 17:04:55 localhost kernel: [ 274.348164] Buffer I/O error on device sda2, logical block 63796099
Jan 23 17:04:55 localhost kernel: [ 274.348166] Buffer I/O error on device sda2, logical block 63796100
Jan 23 17:04:55 localhost kernel: [ 274.348168] Buffer I/O error on device sda2, logical block 63796101
Jan 23 17:04:55 localhost kernel: [ 274.348171] Buffer I/O error on device sda2, logical block 63796102
Jan 23 17:04:55 localhost kernel: [ 274.348173] Buffer I/O error on device sda2, logical block 63796103
Jan 23 17:04:55 localhost kernel: [ 274.348176] Buffer I/O error on device sda2, logical block 63796104
Jan 23 17:04:55 localhost kernel: [ 274.348178] Buffer I/O error on device sda2, logical block 63796105
Jan 23 17:04:59 localhost kernel: [ 278.257533] ata2.00: exception Emask 0x0 SAct 0x84 SErr 0x0 action 0x0
Jan 23 17:04:59 localhost kernel: [ 278.257539] ata2.00: irq_stat 0x40000008
Jan 23 17:04:59 localhost kernel: [ 278.257543] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 17:04:59 localhost kernel: [ 278.257549] ata2.00: cmd 60/08:38:00:84:9a/00:00:1e:00:00/40 tag 7 ncq 4096 in
Jan 23 17:04:59 localhost kernel: [ 278.257549] res 41/40:08:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 17:04:59 localhost kernel: [ 278.257551] ata2.00: status: { DRDY ERR }
Jan 23 17:04:59 localhost kernel: [ 278.257553] ata2.00: error: { UNC }
Jan 23 17:04:59 localhost kernel: [ 278.259233] end_request: I/O error, dev sda, sector 513442820
Jan 23 17:05:03 localhost kernel: [ 282.157929] ata2.00: exception Emask 0x0 SAct 0x4 SErr 0x0 action 0x0
Jan 23 17:05:03 localhost kernel: [ 282.157934] ata2.00: irq_stat 0x40000008
Jan 23 17:05:03 localhost kernel: [ 282.157938] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 17:05:03 localhost kernel: [ 282.157943] ata2.00: cmd 60/08:10:00:84:9a/00:00:1e:00:00/40 tag 2 ncq 4096 in
Jan 23 17:05:03 localhost kernel: [ 282.157943] res 41/40:08:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 17:05:03 localhost kernel: [ 282.157946] ata2.00: status: { DRDY ERR }
Jan 23 17:05:03 localhost kernel: [ 282.157947] ata2.00: error: { UNC }
Jan 23 17:05:03 localhost kernel: [ 282.159582] end_request: I/O error, dev sda, sector 513442820
Jan 23 17:05:03 localhost kernel: [ 282.159588] Buffer I/O error on device sda2, logical block 63796096
Jan 23 17:05:07 localhost kernel: [ 286.069482] ata2.00: exception Emask 0x0 SAct 0x40 SErr 0x0 action 0x0
Jan 23 17:05:07 localhost kernel: [ 286.069487] ata2.00: irq_stat 0x40000008
Jan 23 17:05:07 localhost kernel: [ 286.069491] ata2.00: failed command: READ FPDMA QUEUED
Jan 23 17:05:07 localhost kernel: [ 286.069496] ata2.00: cmd 60/08:30:00:84:9a/00:00:1e:00:00/40 tag 6 ncq 4096 in
Jan 23 17:05:07 localhost kernel: [ 286.069496] res 41/40:08:04:84:9a/00:00:1e:00:00/6e Emask 0x409 (media error) <F>
Jan 23 17:05:07 localhost kernel: [ 286.069499] ata2.00: status: { DRDY ERR }
Jan 23 17:05:07 localhost kernel: [ 286.069501] ata2.00: error: { UNC }
Jan 23 17:05:07 localhost kernel: [ 286.071955] end_request: I/O error, dev sda, sector 513442820
Jan 23 17:05:07 localhost kernel: [ 286.071959] Buffer I/O error on device sda2, logical block 63796096
Waiting for your help.
Thank you.
Oh god, if my HD dies I will kill myself.
I did the SMART tests, short and long (following the ArchWiki), and got the following (I will post the long result as soon as it ends):
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 7967 -
With more details:
smartctl -a /dev/sda
smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.6.9-1-ARCH] (local build)
Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Toshiba 2.5" HDD MK..55GSX
Device Model: TOSHIBA MK5055GSX
Serial Number: 897JC07RT
LU WWN Device Id: 5 000039 1f3c01f05
Firmware Version: FG001M
User Capacity: 500,107,862,016 bytes [500 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 2.6, 3.0 Gb/s
Local Time is: Wed Jan 23 13:46:24 2013 PYST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 249) Self-test routine in progress...
90% of test remaining.
Total time to complete Offline
data collection: ( 120) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 183) minutes.
SCT capabilities: (0x0039) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 1320
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 1807
5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 7
7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0032 081 081 000 Old_age Always - 7967
10 Spin_Retry_Count 0x0033 136 100 030 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 1789
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 97
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 33
193 Load_Cycle_Count 0x0032 082 082 000 Old_age Always - 189405
194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 39 (Min/Max 9/50)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 1
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 9
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
220 Disk_Shift 0x0002 100 100 000 Old_age Always - 87
222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6001
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0
224 Load_Friction 0x0022 100 100 000 Old_age Always - 0
226 Load-in_Time 0x0026 100 100 000 Old_age Always - 320
240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0
SMART Error Log Version: 1
ATA Error Count: 634 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 634 occurred at disk power-on lifetime: 7967 hours (331 days + 23 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
40 41 02 04 84 9a 6e Error: UNC at LBA = 0x0e9a8404 = 245007364
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
60 08 00 00 84 9a 40 08 03:31:25.735 READ FPDMA QUEUED
ea 00 00 00 00 00 a0 00 03:31:25.714 FLUSH CACHE EXT
61 18 08 07 82 b1 40 08 03:31:25.714 WRITE FPDMA QUEUED
61 08 00 0f 7d 31 40 08 03:31:25.714 WRITE FPDMA QUEUED
ea 00 00 00 00 00 a0 00 03:31:25.682 FLUSH CACHE EXT
Error 633 occurred at disk power-on lifetime: 7967 hours (331 days + 23 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
40 41 02 04 84 9a 6e Error: UNC at LBA = 0x0e9a8404 = 245007364
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
60 08 00 00 84 9a 40 08 03:31:21.832 READ FPDMA QUEUED
61 18 00 27 81 b1 40 08 03:31:21.451 WRITE FPDMA QUEUED
61 18 00 27 77 b1 40 08 03:31:20.425 WRITE FPDMA QUEUED
61 18 00 2f 8c b1 40 08 03:31:19.451 WRITE FPDMA QUEUED
61 18 00 ef 81 b1 40 08 03:31:18.445 WRITE FPDMA QUEUED
Error 632 occurred at disk power-on lifetime: 7967 hours (331 days + 23 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
40 41 0a 04 84 9a 6e Error: UNC at LBA = 0x0e9a8404 = 245007364
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
60 00 00 07 7f 09 40 08 03:16:43.581 READ FPDMA QUEUED
60 08 08 00 84 9a 40 08 03:16:43.559 READ FPDMA QUEUED
60 00 00 a7 5b 09 40 08 03:16:43.552 READ FPDMA QUEUED
ef 10 02 00 00 00 a0 00 03:16:43.551 SET FEATURES [Reserved for Serial ATA]
27 00 00 00 00 00 e0 00 03:16:43.551 READ NATIVE MAX ADDRESS EXT
Error 631 occurred at disk power-on lifetime: 7967 hours (331 days + 23 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
40 41 0a 04 84 9a 6e Error: UNC at LBA = 0x0e9a8404 = 245007364
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
60 00 00 a7 5b 09 40 08 03:16:39.713 READ FPDMA QUEUED
60 08 00 87 b5 25 40 08 03:16:39.705 READ FPDMA QUEUED
60 00 00 8f 60 05 40 08 03:16:39.693 READ FPDMA QUEUED
60 08 00 e7 d4 54 40 08 03:16:39.692 READ FPDMA QUEUED
60 08 08 00 84 9a 40 08 03:16:39.678 READ FPDMA QUEUED
Error 630 occurred at disk power-on lifetime: 7967 hours (331 days + 23 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
40 41 12 04 84 9a 6e Error: WP at LBA = 0x0e9a8404 = 245007364
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
61 08 08 0e 4c b6 40 08 03:16:35.710 WRITE FPDMA QUEUED
61 08 08 0e 4c b6 40 08 03:16:35.710 WRITE FPDMA QUEUED
61 08 08 0e 4c b6 40 08 03:16:35.709 WRITE FPDMA QUEUED
61 98 18 8f 41 31 40 08 03:16:35.709 WRITE FPDMA QUEUED
61 08 08 0e 4c b6 40 08 03:16:35.708 WRITE FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 7967 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
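One note on reading the two logs together: the kernel log reports an absolute 512-byte sector (513442820 on sda), while the SMART error log prints the failing LBA in hex (0x0e9a8404). A tiny helper makes the hex value comparable, and the commented hdparm lines are the commonly suggested (sector-destroying) remedy for a pending sector; they are my suggestion, not something from this thread, so double-check every number before running them against a real disk:

```shell
# Convert the hex LBA printed by smartctl's error log to decimal.
# printf understands the 0x prefix, so no external tool is needed.
lba_dec() { printf '%d\n' "$1"; }

lba_dec 0x0e9a8404   # prints 245007364, matching smartctl's own decimal

# To retire a pending sector, the usual (DESTRUCTIVE for that one sector)
# trick is to overwrite it so the drive reallocates it:
# hdparm --read-sector 513442820 /dev/sda     # confirm the read really fails
# hdparm --write-sector 513442820 --yes-i-know-what-i-am-doing /dev/sda
```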
Is there an equivalent to Solaris iostat -En in Linux? I haven't found any. Remember that I can't install anything: read-only FS.
Last edited by gromlok (2013-01-23 16:47:35) -
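Since the iostat -En question went unanswered here: there is no single Linux command with the exact same output, but the same per-device error information is available without installing anything, from sysfs (and smartctl, which is clearly already present given the output above). A rough sketch; the device name sda is just an example:

```shell
dev=sda
sysdir=/sys/block/$dev/device

# sysfs stores these counters as hex; print them in decimal.
cnt_dec() { printf '%d\n' "$(cat "$1")"; }

# Per-device request/error counters kept by the SCSI/ATA layer,
# roughly the "Soft/Hard/Transport Errors" part of iostat -En:
for f in iorequest_cnt iodone_cnt ioerr_cnt; do
    [ -r "$sysdir/$f" ] && printf '%s: %s\n' "$f" "$(cnt_dec "$sysdir/$f")"
done

# Identification, roughly iostat -En's vendor/product lines:
cat "$sysdir/vendor" "$sysdir/model" 2>/dev/null

# Media errors as counted by the drive itself:
# smartctl -l error /dev/$dev
```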
Solaris 10:unable to mount a solaris root file system
Hi All,
I am trying to install Solaris 10 x86 on a ProLiant DL385 server; it has a Smart Array 6i. I have downloaded the driver from the HP web site. On booting up installation CD 1 and adding the device driver, it sees the device but now says it can't mount the device. Any clues what I need to do?
Screen Output:
Unable to mount a Solaris root file system from the device
DISK: Target 0, Bios primary drive - device 0x80
on Smart Array 6i Controller on Board PCI bus 2, at Dev 4
Error message from mount::
/pci@0,0/pci1022,7450@7/pcie11,4091@4/cmdk@0,0:a: can't open - no vtoc
Any assistance would be appreciated.
Hi,
I read Message 591 (Aug 2003) and the problem is much the same. A brief description: I have an ASUS laptop with a 60GB HDD (HDD1) and a 100GB USB storage HDD (HDD2 from here on). I installed Solaris 10 x86 on HDD2 (partition c2t0d0s0). At the end of the installation I removed the DVD and, using the BIOS features, switched the boot to HDD2. All OK; I got the Sun blue screen and chose the active Solaris option, but at the beginning of the boot I received the following error message
Screen Output:
Unable to mount a Solaris root file system from the device
DISK: Target 0: IC25N060 ATMR04-0 on Board ....
Error message from mount::
/pci@0,0/pci-ide2,5/ide@1/cmdk@0,0:a: can't open
Any assistance would be appreciated.
Regards -
SAP ECC 6.0 file system Restore
Dear Friends,
Happy Holi.
I have restored a file system backup of our development server to another host running Oracle 10g and ECC 6.0.
After the restore, the database came up successfully.
But the listener is not running; when I tried to start the listener it gave the error below.
jkeccbc:oradvr 2> lsnrctl start
LSNRCTL for HPUX: Version 10.2.0.2.0 - Production on 19-MAR-2011 15:18:33
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Starting /oracle/DVR/102_64/bin/tnslsnr: please wait...
TNSLSNR for HPUX: Version 10.2.0.2.0 - Production
System parameter file is /oracle/DVR/102_64/network/admin/listener.ora
Log messages written to /oracle/DVR/102_64/network/log/listener.log
Error listening on: (ADDRESS=(PROTOCOL=IPC)(KEY=DVR.WORLD))
TNS-12557: TNS:protocol adapter not loadable
TNS-12560: TNS:protocol adapter error
TNS-00527: Protocol Adapter not loadable
Listener failed to start. See the error message(s) above...
Regards
Ganesh Datt Tiwari
Hi Mark,
Please find below
cat listener.ora
Filename......: listener.ora
Created.......: created by SAP AG, R/3 Rel. >= 6.10
Name..........:
Date..........:
@(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
ADMIN_RESTRICTIONS_LISTENER = on
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS =
      (PROTOCOL = IPC)
      (KEY = DVR.WORLD)
    )
    (ADDRESS =
      (PROTOCOL = IPC)
      (KEY = DVR)
    )
    (ADDRESS =
      (COMMUNITY = SAP.WORLD)
      (PROTOCOL = TCP)
      (HOST = jkeccbc)
      (PORT = 1527)
    )
  )
STARTUP_WAIT_TIME_LISTENER = 0
CONNECT_TIMEOUT_LISTENER = 10
TRACE_LEVEL_LISTENER = OFF
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DVR)
      (ORACLE_HOME = /oracle/DVR/102_64)
    )
  )
===========================================
cat tnsnames.ora
Filename......: tnsnames.ora
Created.......: created by SAP AG, R/3 Rel. >= 6.10
Name..........:
Date..........:
@(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
DVR.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS =
        (COMMUNITY = SAP.WORLD)
        (PROTOCOL = TCP)
        (HOST = jkeccbc)
        (PORT = 1527)
      )
    )
    (CONNECT_DATA =
      (SID = DVR)
      (GLOBAL_NAME = DVR.WORLD)
    )
  )
Regards
Ganesh Datt Tiwari -
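For what it's worth, TNS-12557/TNS-00527 after a file-system restore to a new host is usually an environment problem (wrong ORACLE_HOME or TNS_ADMIN, or a host name that no longer resolves) rather than a listener.ora syntax problem. A quick sanity check along these lines may help; the path follows the SAP layout quoted above, but treat the whole thing as a sketch, not a verified diagnostic:

```shell
ORACLE_HOME=/oracle/DVR/102_64
export ORACLE_HOME

# Extract the host named in listener.ora; it must resolve on *this* machine.
lsnr_host() {
    awk -F'= *' '/HOST/ { gsub(/[() ]/, "", $2); print $2; exit }' "$1"
}

# Typical usage (commented out, since the paths are environment-specific):
# h=$(lsnr_host "$ORACLE_HOME/network/admin/listener.ora")
# getent hosts "$h" || echo "WARNING: host '$h' from listener.ora does not resolve"
# ldd "$ORACLE_HOME/bin/tnslsnr" | grep 'not found'   # stale library paths?
```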
How increase external disk size used for an existing file system, Solaris10
Configuration:
Server: Sun T5220
S/O: Solaris 10 5/08 s10s_u5wos_10 SPARC
Storage: EMC AX4-5
EMC PowerPath: PowerPath Version 5.2
I have the following scenario:
In AX4-5 storage array, I created two LUNs into RAID Group 10, with Raid Type1/0:
LUN 101: 20Gb
LUN 102: 10Gb
Both LUNs were added to Storage Group (SG1) that includes two Servers (Server1 and Server2); both servers have Operating System Solaris 10 5/8 and Power Path.
The servers detect both LUNs across two paths. PowerPath created a virtual path (emcpower0a, emcpower1a) to access each LUN, respectively.
We have mounted two file systems: /home/test1 over emcpower0a -> LUN 101, and /home/test2 over emcpower1a -> LUN 102.
Filesystem size used avail capacity Mounted on
/dev/dsk/emcpower0a 20G 919M 19G 5% /home/test1
/dev/dsk/emcpower1a 10G 9G 1G 90% /home/test2
I want to increase the space in file system /home/test2, without losing the information I have stored and using the same LUN, LUN 102. To do this I started with the following steps:
1- Create new LUN, LUN 103 with 15Gb into RAID Group 10. Result: OK, I have available space in RAID Group 10.
2- Add LUN 103 to LUN 102, using concatenation option. Result: OK. This action creates a new metaLUN with the same characteristics of LUN 102, but with new space of 25Gb.
After doing this, I want to know how Solaris recognizes the new space. What do I need to do to increase the size of file system /home/test2 to 25 GB? Is that possible?
I reviewed the description of each disk using the format command, and the disks do not show any change.
Could anyone help me? If you need more detail, please do not hesitate to contact me.
Thanks in advance.
Robert, thanks a lot for your know-how. You helped me clarify some doubts. To complete your answer, I will add two more details, based on my experience.
After executing type -> auto configure and label, the disk was labeled with default partitions such as root, swap and usr, like this:
Volume name = < >
ascii name = <DGC-RAID10-0223 cyl 49150 alt 2 hd 32 sec 12>
pcyl = 49152
ncyl = 49150
acyl = 2
nhead = 32
nsect = 12
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 682 128.06MB (683/0/0) 262272
1 swap wu 683 - 1365 128.06MB (683/0/0) 262272
2 backup wu 0 - 49149 9.00GB (49150/0/0) 18873600
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 1366 - 49149 8.75GB (47784/0/0) 18349056
7 unassigned wm 0 0 (0/0/0) 0
This was not convenient, because all the stored information appeared in the first slice, and growfs does not work if a root or swap slice appears on the disk; for that reason it was necessary to recreate every slice manually with the partition command. The final result was this:
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 - 49149 9.00GB (49150/0/0) 18873600
1 unassigned wu 0 0 (0/0/0) 0
2 backup wu 0 - 49149 9.00GB (49150/0/0) 18873600
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
Now execute the label command again to save the new information.
Mount the file system and execute:
growfs -M mount_point raw_device
to expand the file system to 9 GB. -
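Condensing the whole procedure from this thread into one recipe (Solaris 10 + PowerPath; the device names are the ones used above, and the command sequence is a sketch of the steps described, not a tested script):

```shell
# cfgadm -al   (or devfsadm)        # so Solaris sees the expanded metaLUN
# format                            # select the disk behind emcpower1a
#   type -> 0 (Auto configure)      #   pick up the new geometry
#   partition                       #   rebuild slices: no root/swap tags,
#                                   #   since growfs refuses to grow past them
#   label                           #   write the new VTOC
# growfs -M /home/test2 /dev/rdsk/emcpower1a

# Sanity check: size implied by a label, from the geometry format prints
# (512-byte sectors assumed; MB here means MiB, as in format's output).
slice_mb() {  # slice_mb ncyl nhead nsect
    echo $(( $1 * $2 * $3 / 2048 ))
}
slice_mb 49150 32 12   # the 9 GB label quoted above (18873600 blocks)
```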
Mounting the Root File System into RAM
Hi,
I had been wondering, recently, how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
I have now succeeded, at least it seems, in performing such a task. There are still some issues.
For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
A custom kernel hook is used to (overall):
Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
Copy the non-volatile content into the ramdisk.
Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
To integrate this hook into a initramfs, a preset is needed.
I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
When shutting down, the ramdisk contents is synchronized back with the source root, by the means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
I chose to use unison to synchronize between the volatile and the non-volatile mediums. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), you do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
Some prerequisites exist (by default):
Packages: unison (and cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g or the NFS-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
Referencing paths:
The variables:
source=
temporary=
...should have the same value in all of these files:
"/etc/initcpio/hooks/ram"
"/root/Documents/rootfs/unmount-root-fs.sh"
"/root/.rsync/exclude.txt" -- Should correspond.
This is needed to sync the RAM disk back to the hard disk.
I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
Here are all the accepted and used parameters:
Parameter Allowed Values Default Value Considered Values Description
root Default (UUID=+,/dev/disk/by-*/*) None Any string The source root
rootfstype Default of "-t <types>" of "mount" "auto" Any string The FS type of the source root.
rootflags Default of "-o <options>" of "mount" None Any string Options when mounting the source root.
ram Any string None "1" If this hook should be run.
ramfstype Default of "-t <types>" of "mount" "auto" Any string The FS type of the RAM disk.
ramflags Default of "-o <options>" of "mount" "size=50%" Any string Options when mounting the RAM disk.
ramcleanup Any string None "0" If any left-overs should be cleaned.
ramcleanup_source Any string None "1" If the source root should be unmounted.
ram_transfer_tool cp,find,cpio,rsync,unison unison cp,find,cpio,rsync What tool to use to transfer the root into RAM.
ram_unison_fastcheck true,false,default,yes,no,auto "default" true,false,default,yes,no,auto Argument to unison's "fastcheck" parameter. Relevant if ram_transfer_tool=unison.
ramdisk_cache_use 0,1 None 0 If unison should use any available cache. Relevant if ram_transfer_tool=unison.
ramdisk_cache_update 0,1 None 0 If unison should copy the cache to the RAM disk. Relevant if ram_transfer_tool=unison.
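To make the table above concrete: inside an initcpio hook these parameters arrive as words on the kernel command line. A stand-in parser like the following (my sketch, not this hook's actual code) shows how ram=1 or ramflags=size=50% end up as shell variables the hook can test:

```shell
# Minimal kernel-cmdline parser in the style of initramfs init scripts:
# "key=value" words become variables, bare words become flags set to "y".
parse_cmdline() {
    for word in $1; do
        case $word in
            *=*) key=${word%%=*}; val=${word#*=}; eval "$key=\$val" ;;
            *)   eval "$word=y" ;;
        esac
    done
}

parse_cmdline "ram=1 ramfstype=ext4 ramflags=size=50% quiet"
# now: $ram is "1", $ramfstype is "ext4", $ramflags is "size=50%", $quiet is "y"
```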
This is the basic setup.
Optionally:
I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
Of course, use what you need from the provided files.
Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
find+cpio: 2:10s (2:12s on slower hardware)
unison: 3:10s - 4:00s
cp: 4 minutes (31 minutes on slower hardware)
rsync: 4:40s (55 minutes on slower hardware)
Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
These are the remaining issues:
find+cpio option does not create any destination files.
(On some older hardware) When booting up, the source disk is not always detected.
When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
Inconveniences:
Unison needs to perform an update detection at each startup.
initramfs's ash does not expand wildcard characters, so "cp" cannot be used with globs.
That's about what I can think of for now.
I will gladly try to answer any questions.
I don't consider myself a UNIX expert, so I would like to know your suggestions for improvement, especially from who consider themselves so.
Last edited by AGT (2014-05-20 23:21:45)
How did you use/test unison? In my case, unison is of course used in the cpio image, where there are no cache files: unison has not yet been run inside the initcpio image before boot time, which is exactly when it is used and when it creates its archives. ...a circular dependency. And files changed by the user would still need to be traversed to detect changes, so I think that even providing pre-made cache files would not guarantee they are valid at start up, for all installation configurations. -- I do think, though, that these cache files could be saved from the initcpio image to the root (disk and RAM) after they have been created, and reused by copying them back into the initcpio image at each start up. I think $HOME would need to be set.
Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U CPU at 1.9 GHz with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 CPU at 2 GHz with 2 cores/2 threads and 4 GB of RAM.
I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
Grub initially complains that there is no image, and shows a "Press any key to continue" message; if you continue, the kernel panics.
I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
Last edited by AGT (2014-05-20 16:49:35)