[SOLVED] Measuring defragmentation on ext4

Other than unmounting a filesystem and running fsck, is there a way to get the percentage of defragmentation on an ext4 filesystem?
I'm still running Arch kernel 2.6.28, so I'm not using e4defrag yet.
Thanks.
Last edited by dhave (2009-02-17 21:50:06)

http://bbs.archlinux.org/viewtopic.php?id=65647 :?
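For what it's worth, one way to get at this without e4defrag (a sketch, not from the thread; the device name is a placeholder, and the filesystem should be unmounted or checked read-only) is to let e2fsck print its summary, which includes a non-contiguous file percentage:
e2fsck -fn /dev/sdXN
# -f forces the check even if the filesystem is marked clean
# -n answers "no" to every prompt, so nothing is modified
# the summary line looks something like:
#   /dev/sdXN: 123456/980736 files (2.1% non-contiguous), ...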

Similar Messages

  • Performing OLAP_TABLE on solved measured created in AWM

    DB & AWM version = 10.1.0.4
    My Oracle OLAP skills version = 0.1
    Last week I started to evaluate Oracle OLAP and I've created two dimensions (DIM_BUS & DIM_TIME) as well as a cube (CUB_BUS_TIME) with two measures (MSR_NET_INJECTION & MSR_NET_INJECTION_AVG). The cube is quite large, about 37 million rows, and so far I've relied 100% on AWM since I don't have time right now to get down and dirty with command lines.
    I'm now trying to use OLAP_TABLE to query the cube, and I'm having a hard time seeing which expression I should use to get results from the pre-calculated data, rather than Oracle trying to pull all 37 million records (and, by the way, failing spectacularly in doing so, with the Oracle process grabbing more and more physical memory until the whole instance collapses like a house of cards). I've tried using something like this:
    SELECT *
    FROM TABLE(
    OLAP_TABLE('binge DURATION session',
    'DIMENSION laaa FROM dim_bus
    MEASURE looo FROM cube_bus_time_msr_net_injection
    ROW2CELL olap_calc'));
    It doesn't do the trick - just keeps running until the instance crashes. Any idea how the "msr_net_injection" measure (that's the name given in AWM!!) should be queried? "dim_bus" is quite a small measure, only about 300 records.
    Thanks

    OLAP_TABLE has a bug in 10.1.0.4.0 and 10.1.0.5.0 (fixed in 10.2) that forces its row buffer to be non-paged. Since you are trying to select 37 million rows, I suspect OLAP_TABLE is eating up all your memory and dying.
    Try using a WHERE clause on your query to limit the dimension values for which data is selected (to something like a single time and business). As far as only selecting your pre-calculated values, based on the objects I see described here, this again would have to be done using some kind of WHERE predicate. There are many techniques for handling sparse data in OLAP cubes which can make it easier to select only those cells with data. The OLAP Application Developer's Guide mentions some of these. Many require you to get "down and dirty" with the command line.
    To make the start of your evaluation easier (until you get the hang of things) I would encourage you to use a smaller number of dimension values yielding a smaller cube.
    Observations:
    1) Just to clarify, dim_bus is defined as a dimension (you call it a measure; I suspect this was simply a mis-speak). 300 values is not a lot. You must have many, many bus dimension values to produce 37 million rows.
    2) Your limit map (fourth argument in OLAP_TABLE call) only mentions one of your dimensions (bus). It should mention both (bus and time) so that you can query both and use both in a WHERE clause. Having unnamed dimensions also causes the OLAP server to make some default looping decisions which may not be optimal.
    Some other possible solutions to your dilemma:
    1) Try increasing your pga_aggregate_target parameter, making it very large. This may solve the memory crash problem, but OLAP_TABLE will still buffer the entire result set before returning rows. For 37 million rows this may take a long, long time.
    2) The best solution is to avoid table function overhead and buffers by using the SELECT MODEL wrapper optimization. This is described in the OLAP Application Developer's Guide, Chapter 7 (last time I looked). This will dramatically improve your resource usage and response time. It will avoid the non-paged bug I mentioned earlier and the buffering time. However, I would still add time to your limit map and use a predicate (WHERE clause) to reduce the number of rows being selected.
    Matt

  • [Solved] Grub Error 13, ext4 and 2.6.28.1

    Hi guys!!!
    I have formatted my laptop disk in new ext4 format, following wiki instructions:
    http://wiki.archlinux.org/index.php/Cre … _Partition
    and all worked fine.
    But after today's update (pacman -Syu), my Arch doesn't boot.
    Grub messages (latest grub version, normal grub, not grub2):
    Filesystem type is ext2fs, partition type 0x83
    kernel /boot/vmlinuz26 root=/dev/disk/by-uuid/.............
    Error 13: invalid or unsupported executable format
    Neither the Normal image nor the Fallback image boots.
    Any suggestion?
    P.S. Sorry for my English xD
    Check wiki solution:
    http://wiki.archlinux.org/index.php/Cre … B_Error_13
    Last edited by superchango (2009-01-23 02:19:19)

    from grub's web site:
    13 : Invalid or unsupported executable format
        This error is returned if the kernel image being loaded is not recognized as Multiboot or one of the supported native formats (Linux zImage or bzImage, FreeBSD, or NetBSD).
    I think your vmlinuz26 file is wrong in some way, or grub can't read ext4 correctly (I have done a fresh install with ext4, but I made a separate partition for /boot in ext2).
    superchango wrote:following wiki instructions:
    http://wiki.archlinux.org/index.php/Cre … _Partition
    Did you create it from scratch, or did you convert from ext3?
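    A common workaround consistent with the separate-/boot approach mentioned above (a sketch only; device names are illustrative, not from the thread): keep /boot on a small ext2 partition that legacy GRUB can always read, and leave the rest on ext4.
    mkfs.ext2 /dev/sda1    # small /boot partition, readable by legacy GRUB
    mkfs.ext4 /dev/sda2    # root stays on ext4
    # matching /etc/fstab entries:
    # /dev/sda1  /boot  ext2  defaults  0  2
    # /dev/sda2  /      ext4  defaults  0  1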

  • [SOLVED] gparted performance - creating ext4 filesystem takes forever

    17 hours have passed since I started creating a new partition (150 GB) with an ext4 filesystem on it using the newest version of gparted.
    Does it really take that long? Is that normal?
    I have no way of telling whether it is doing anything at the moment. All I see in the GUI is the, let's call it, "progress bar" swinging back and forth.
    In the details I can see it's issuing the mkfs.ext4 command, but it's nowhere to be found using the ps command.
    Only gparted and gpartedbin are running, but they have 0% CPU usage according to top.
    Last edited by deltharac (2013-10-16 19:01:01)

    NotFromBrooklyn wrote:Something must have failed. The largest partition I remember creating manually was 80 GB, and it took me less time than I need to finish a cup of coffee.
    Yep, it seemed strange to me that it's THAT slow; I just needed someone's confirmation, thanks. I killed the whole process and the partition is there with an ext4 filesystem prepared, but I guess it won't be a good idea to assume it has been set up properly. There's already 2.54 GB marked as used for some reason.
    I'll try to recreate the filesystem from some livecd environment.
    graysky wrote:...are you resizing a partition and then making a new one?
    Nah, no resizing, only one operation. I was just organizing my 2nd SATA II HDD to serve as a backup for my original system, from which I was using gparted to create new partitions on that disk.
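    If gparted ever stalls like this again, a rough alternative (the device name is a placeholder; double-check it with lsblk before formatting) is to run mkfs.ext4 directly in a terminal, where each stage of the format is printed as it completes:
    lsblk                          # confirm which partition is the target
    mkfs.ext4 -L backup /dev/sdb1  # prints progress: inode tables, journal, ...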

  • [solved] Measuring how much time a bash function takes to run?

    Hi there,
    a (hopefully) simple question for the bash gods out there :)
    Lets say i have a script looking like this:
    #!/bin/bash
    some_function() {
        # doing something
        :
    }
    some_function
    How can I measure the time (preferably in human-readable form: minutes or even hours) that the bash function takes for a run?
    I know about "time", but I want to do it directly inside the script...
    Google tells me nothing, apart from one site where I have to pay for a solution :/
    Thanks for any suggestions
    Jan
    Last edited by funkyou (2008-09-01 13:21:04)

    iphitus@laptop:~$ sh test.sh
    Hi
    Bye
    real 0m10.002s
    user 0m0.000s
    sys 0m0.003s
    iphitus@laptop:~$ cat test.sh
    somefunc(){
    echo "Hi"
    sleep 10
    echo "Bye"
    }
    time somefunc
    Time appears to work within bash scripts too
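    For a duration in minutes and seconds straight from the script (a minimal sketch, not from the thread), bash's built-in SECONDS counter works too:
    #!/bin/bash
    some_function() {
        sleep 3   # stand-in for real work
    }
    SECONDS=0
    some_function
    duration=$SECONDS
    printf 'Took %d min %d sec\n' $((duration / 60)) $((duration % 60))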

  • [SOLVED] Journal error on EXT4 root device

    I'm getting an error in my dmesg once every boot concerning the journal on my root device:
    dmesg wrote:[  304.383885] EXT4-fs (sda3): error count: 2
    [  304.383897] EXT4-fs (sda3): initial error at 1402036456: ext4_journal_check_start:56
    [  304.383904] EXT4-fs (sda3): last error at 1402036459: ext4_journal_check_start:56
    It doesn't show up immediately upon booting, but shortly after. I assume some kind of journal flush or similar is happening there. I have tried booting from a live USB stick and running fsck, but it comes up clean.
    The device is a Samsung 840 SSD, connected via SATA, with the defaults,noatime,discard mount options. SMART shows no errors, apart from ~800 CRC checksum errors, but that value hasn't changed in over 6 months since I changed the SATA cable, which I assume was defective.
    Is the journal on my root device corrupt?
    E: I tried fsck'ing again, after studying the man page. Re-running fsck and forcing it to check everything even if the filesystem seemed clean caused a bunch of inode errors to pop up. Studying the files in lost+found afterwards revealed a bunch of source code files that looked like Perl. It must have been a package upgrade that got corrupted during the file transfer from /var to / (they're on separate devices).
    Last edited by KozmoNaut (2014-06-16 22:14:58)
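    For reference, the forced check described in the edit above looks roughly like this when run from a live environment (the device matches the dmesg output; the filesystem must be unmounted):
    e2fsck -f /dev/sda3    # force a full check even if the fs is marked clean
    e2fsck -fy /dev/sda3   # same, but answer "yes" to all repair prompts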

    jjacky wrote:See https://bbs.archlinux.org/viewtopic.php?id=189537
    Thank you. It works. I edited my original post and added the solution.
    I'm curious whether there are any advantages when partitioning a LUKS device vs using LVM, if LVM features won't be used (partitions will have a fixed size). There does not seem to be any, so adding an extra layer of abstraction like LVM seemed unnecessary.
    Last edited by SteveSapolsky (2014-12-05 13:41:01)

  • Solve cube, solve single measures in a cibe

    We are using OWB 10gR2 and have an AW cube with two measures: one has the solve option YES, the other NO (in the Aggregator tab of the cube editor).
    Now we were trying the following: when loading the cube using a mapping with the cube operator, we used the cube operator option "Solve the cube" = YES the first time and NO the second time (cleaning the cube in between, of course). The first time ALL measures were solved; the second time NONE of them were solved. What should be the effect of specifying different solve options for measures in a cube? The values of this option seem to be ignored anyway. Is it not possible to solve one measure and not another in the same cube???
    By the way, in the beta releases the two option values were "on load" and "on demand", instead of the "YES" and "NO" we have in 10gR2. Comparing 10gR2 and the beta releases, has more changed than the labels? Are the intended semantics still "on load" and "on demand"?
    A lot of questions! Can anybody help on that topic? Thanks!

    With non-compressed cubes it is possible to solve one measure and not another. You will also need the latest database patch (10.2.0.2; bug 4550247 has details for the 10.1 patch) for it to work properly. In the production release of OWB this should be working; I think there were issues in the betas.
    The solve options on measures indicate which measures will be included in the primary solve. The solve indicator on the cube operator in the map, however, indicates whether this solve will be executed or not. So the map can just load data, or load and solve the data. There is a transformation function for executing solves, so solves can be scheduled independently from loading. It's also possible to solve measures independently from each other using this function (WB_OLAP_AW_PRECOMPUTE).
    Hope this helps.

  • Speed issues deleting files on a converted partitons (ext3-- ext4)

    @ loudtiger - I started this topic in reply to your questions posted here, since they are off-topic and I wanted to help keep the signal-to-noise ratio up for the original thread.
    loudtiger wrote:slightly unrelated, does anyone find that deleting a particularly large file takes a long time? i'm talking about 4-10gb files, on an ext3-> ext4 partition. say, wasn't there supposed to be a defrag tool for extents?
    Ext3 is SLOW to delete large files.  Ext4 is MUCH better but if you converted an existing ext3 partition, you won't benefit from the speed gains.
    graysky wrote:For the speed advantages to be realized, you have to format the partition in question to ext4, then copy the files back... that is my understanding. Otherwise, 'converted' files are still extent-less.
    http://bbs.archlinux.org/viewtopic.php?id=69529
    As to the online defragmenter for ext4, kernel 2.6.30 was rumored to have support for one, but I don't know if it'll make it in there or not. You can google for this and find more.
    Last edited by graysky (2009-04-28 21:39:51)

    You can use rsync for this if you want.
    Example:
    rsync -avxu --progress /home /media/backup/home
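    To check afterwards whether a copied file is actually extent-mapped (a rough check; the path is just an example), lsattr shows an 'e' flag for extent-based files and filefrag reports how many extents the file occupies:
    lsattr /home/user/bigfile.iso     # look for the 'e' (extents) attribute
    filefrag /home/user/bigfile.iso   # "1 extent found" means no fragmentation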

  • Autosolve during cube build

    Hello!
    Several days ago (the 14th of December 2008) we ran several cube builds. During these builds the OLAP engine performed the data load and the Auto Solve; it looked like this in XML_LOAD_LOG:
    20:19:45 Running Jobs: AWXML$_2260_2120. Waiting for Tasks to Finish...
    20:19:45 Started 20 Finished 19 out of 20 Tasks.
    20:18:25 Finished Auto Solve for Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT11 Partition.
    20:15:32 Started Auto Solve for Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT10 Partition.
    20:15:32 Finished Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT10 Partition. Processed 644523 Records. Rejected 1071 Records.
    20:14:03 Finished Auto Solve for Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT09 Partition.
    20:12:19 Started Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT10 Partition.
    20:12:15 Attached AW SALES.SALES3 in MULTI Mode.
    Now the OLAP engine refuses to perform the Auto Solve during the build, so the log looks like this:
    15:28:19 Finished Parallel Processing.
    15:28:18 Finished Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT10 Partition. Processed 15910 Records. Rejected 0 Records.
    15:28:12 Finished Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT11 Partition. Processed 12104 Records. Rejected 0 Records.
    15:28:09 Started Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT10 Partition.
    15:28:08 Attached AW SALES.SALES3 in MULTI Mode.
    15:28:08 Finished Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT09 Partition. Processed 15193 Records. Rejected 0 Records.
    15:28:07 Running Jobs: AWXML$_2801_2412, AWXML$_2801_2413, AWXML$_2801_2414.
    15:28:07 Started 14 Finished 11 out of 14 Tasks.
    15:28:03 Finished Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT07 Partition. Processed 21047 Records. Rejected 0 Records.
    15:28:02 Attached AW SALES.SALES3 in MULTI Mode.
    15:28:02 Started Load of Measures: CAHTCA, CATCCA, MAPCCA, MCPCCA, QTPVCA, QTUNCA from Cube SALES.CUBE. PRT11 Partition.
    We did not change anything.
    Cube settings are the same - Use Default Aggregation plan for Cube Aggregation
    The scripts are the same, with TrackStatus=true and RunSolve=true:
    declare
        xml_clob clob;
        xml_str varchar2(4000);
        isAW number;
    begin
        DBMS_LOB.CREATETEMPORARY(xml_clob,TRUE);
        dbms_lob.open(xml_clob, DBMS_LOB.LOB_READWRITE);
        dbms_lob.writeappend(xml_clob, 183, ' <BuildDatabase Id="Action6" AWName="SALES.SALES3" BuildType="EXECUTE" RunSolve="true" CleanMeasures="false" CleanAttrs="false" CleanDim="false" TrackStatus="true" MaxJobQueues="0">');
        dbms_lob.writeappend(xml_clob, 39, ' <BuildList XMLIDref="SALES.CUBE" />');
        dbms_lob.writeappend(xml_clob, 18, ' </BuildDatabase>');
        dbms_lob.close(xml_clob);
        xml_str := sys.interactionExecute(xml_clob);
        dbms_output.put_line(xml_str);
    end;
    Why does the OLAP engine refuse to solve measures during the build? Of course it is possible to run the solve separately, but why did it suddenly stop performing it?
    Thank you for your help.
    Regards,
    Kirill Boyko

    Hello!
    It seems VERY strange, but PRSALES performs normally. It does the Auto Solve.
    22-JAN-09 12.17.34     12:17:34 Finished Parallel Processing.
    22-JAN-09 12.17.34     12:17:34 Completed Build(Refresh) of SALES.SALES3 Analytic Workspace.
    22-JAN-09 12.17.33     12:17:33 Finished Auto Solve for Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT10 Partition.
    22-JAN-09 12.16.49     12:16:49 Finished Auto Solve for Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT11 Partition.
    22-JAN-09 12.14.07     12:14:07 Finished Auto Solve for Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT09 Partition.
    22-JAN-09 12.13.40     12:13:40 Finished Auto Solve for Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT07 Partition.
    22-JAN-09 12.12.30     12:12:30 Finished Load of Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT10 Partition. Processed 1696 Records. Rejected 0 Records.
    22-JAN-09 12.12.30     12:12:30 Started Auto Solve for Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT10 Partition.
    22-JAN-09 12.12.22     12:12:22 Started Load of Measures: CAHTPR, CATCPR, MAPCPR, MCPCPR, QTPVPR, QTUNPR from Cube PRSALES.CUBE. PRT10 Partition.
    22-JAN-09 12.12.22     12:12:22 Running Jobs: AWXML$_2935_2495, AWXML$_2935_2496, AWXML$_2935_2497, AWXML$_2935_2498.
    Regards,
    Kirill Boyko

  • Question about "fast fsck", ext4 and defragmentation status [SOLVED]

    I'm trying to use fsck to do a defacto defragmentation check of an ext4 partition. I'm running fsck from a live cd (SysRescue 1.15) to check one of my ext4 partitions. The ext4 partition is unmounted, of course.
    The check goes amazingly fast, but it doesn't give me any info about the percentage of non-contiguous inodes, which I understand to be the same as the percentage of fragmentation (true?). I'm thinking this is because of the new "fast fsck" feature of ext4, as detailed below.
    My question: can I force a "slow fsck" in order to get a complete check including the inode-contiguity info? Or is there another way to get at the defragmentation status using fsck?
    Thanks.
    FWIW, here's the info on "fast fsck" from the excellent http://kernelnewbies.org/Ext4 page:
    2.7. Fast fsck
    Fsck is a very slow operation, especially the first step: checking all the inodes in the file system. In Ext4, at the end of each group's inode table will be stored a list of unused inodes (with a checksum, for safety), so fsck will not check those inodes. The result is that total fsck time improves from 2 to 20 times, depending on the number of used inodes (http://kerneltrap.org/Linux/Improving_f … ds_in_Ext4). It must be noticed that it's fsck, and not Ext4, who will build the list of unused inodes. This means that you must run fsck to get the list of unused inodes built, and only the next fsck run will be faster (you need to pass a fsck in order to convert a Ext3 filesystem to Ext4 anyway). There's also a feature that takes part in this fsck speed up - "flexible block groups" - that also speeds up filesystem operations.
    Last edited by dhave (2009-02-17 22:09:49)

    Ranguvar wrote:
    Woot! http://fly.isti.cnr.it/cgi-bin/dwww/usr … z?type=man
    fsck.ext4 -E fragcheck /dev/foo
    Thanks, Ranguvar. I had read the man page for fsck.ext3, but I hadn't run across the page for fsck.ext4. The link was helpful.
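    For completeness, a rough invocation based on that man page (the device is a placeholder; the partition must be unmounted, and -n keeps the run read-only):
    fsck.ext4 -fn -E fragcheck /dev/sdXN
    # -f forces a full check even if the filesystem is marked clean,
    # -n answers "no" to everything, -E fragcheck reports discontiguous files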

  • Create DIM,Fact,Measure cube & OLAP catalog 4 SOLVED LEVEL-BASED Hierarchy

    Is there any complete example available which explains how to create a SOLVED LEVEL-BASED hierarchy and a catalog for it?
    I have the example mentioned in the 9204 reference guide chapter on CWM2_OLAP_PC_TRANSFORM, but I want a complete example with a script so I can work fast.
    Thnx in advance
    P

    The CWM2 validation APIs show that all my dims and cubes are valid, but when I try to create a presentation through JDev, it hangs after selecting the Parent-Child Measure.

  • [Solved] ext4 partition reported as NTFS

    I was testing an SSD disk before installing, and I plugged it into a Windows machine for benchmarking purposes.
    For performing such tests, I created 2 NTFS partitions.
    Afterwards, I plugged the disk into the SATA interface of the computer I would use it in.
    I formatted the two partitions that were on the disk:
    mkfs.ext4 /dev/sda1
    mkfs.ext4 /dev/sda2
    ...and got no errors...
    When I run fdisk -l I get:
    System: HPFS/NTFS/exFAT
    But if I run cfdisk, it's reported as ext4 - and the same happens when I open gparted...
    Why could this be?
    EDIT: Fixed by changing the partition type to 83 from cfdisk (after unmounting first).
    I didn't lose any data in the process....
    Last edited by Xi0N (2012-10-21 19:54:57)

    Glad to see you solved this yourself.  Please don't forget to mark the post as [Solved] and keep our forums organized (mostly).
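    For reference, a sketch of the fix described in the edit above (device and partition number are placeholders; unmount the partition first). The MBR partition type byte can be changed to 83 (Linux) with fdisk as well as cfdisk:
    umount /dev/sdc2
    fdisk /dev/sdc
    # then at the fdisk prompt:
    #   t   <- change a partition's type
    #   2   <- partition number
    #   83  <- type code for "Linux"
    #   w   <- write the table and exit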

  • [SOLVED][ext4]Tuning "bytes-per-inode ratio" ?

    Hi,
    I'd like to format two partitions (under LVM) with ext4: a / (20 GB) and a /home (890 GB). The usage pattern for the /home partition will be fairly standard (small documents, configuration files, a lot of media).
    The wiki mentions something about tuning the "bytes-per-inode ratio" for partitions of more than 750 GB (https://wiki.archlinux.org/index.php/Ex … filesystem).
    It does not, however, reference any links or documentation for it.
    The mke2fs man page isn't any more helpful toward offering pragmatic advice.
    My question is the following: should I bother tuning this option for /home (I'm assuming that the default is good enough for /, correct?) and if so, how (a range of ratios from "conservative" to "probably too much")?
    Thank you for reading.
    Last edited by Resistance (2015-05-10 14:48:19)

    1) Removing reserved blocks can result in an immediate gain of up to 5% space (although reducing it to 0 is kind of risky if you plan on actually filling the drive).
    2) The default for inode creation is 1 inode for every 16 KB of data. inodes occupy 256 bytes. That means for every 16 KB of data you lose 256 bytes, or about 1.5%. I've confirmed that to be the case using df and df -i on drives of various sizes and doing the math.
    The thread linked above showed 22 GB for a 1.5 TB drive, which is roughly 1.5%. So you can reduce your inodes and maybe squeeze out another percent or so, but TMK you can't undo it without reformatting. If you haven't removed reserved blocks yet, that's a more fruitful source of disk space.
    EDIT: Sorry, didn't see your [SOLVED] post until after I posted.  Sounds like you made the right call.
    Last edited by mwillems (2015-05-10 15:06:25)
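    As a rough illustration of the two knobs discussed above (device name and values are placeholders, not recommendations): the bytes-per-inode ratio is fixed at format time with -i, while the reserved-block percentage can be changed later with tune2fs -m.
    mkfs.ext4 -i 65536 -m 1 /dev/vg/home   # fewer inodes, 1% reserved blocks
    tune2fs -m 0 /dev/vg/home              # reserved blocks can be tuned afterwards
    df -i /home                            # check inode count/usage once mounted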

  • [solved] Ext4 goes read-only after nodealloc

    Hello.
    I've tried this many times, but never succeeded in using my ext4 partitions with the nodealloc option in fstab. It doesn't matter if I add this option after defaults or replace defaults with nodealloc only. After a reboot, all ext4 partitions (/ and /home) go read-only, so I can't do anything with Arch, not even change fstab back. There are no problems if I put other options in fstab (for example, relatime for ext3 file systems).
    Anyone have the same problem?
    Last edited by weakhead (2009-04-20 19:32:30)

    Whoa! That explains everything. I will try one more time.
    EDIT: Yeah, this is it. It's nodelalloc, not nodealloc...
    Thanks
    Last edited by weakhead (2009-04-20 19:32:13)
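    For reference, a minimal fstab sketch with the correctly spelled option (the UUIDs are placeholders); with the misspelled option the read-write (re)mount fails, which is why the filesystems ended up read-only:
    # /etc/fstab -- the option is "nodelalloc" (no *delayed* allocation), not "nodealloc"
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /      ext4  defaults,nodelalloc  0  1
    UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /home  ext4  defaults,nodelalloc  0  2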

  • [SOLVED] I cannot make mounted ext4 partition writeable.

    I want to make /dev/sdc2 (mounted on /mnt/SteamLinux) writeable, so I could install Steam games on it.
    /etc/fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sdb3
    UUID=7cfc361a-47e1-45b9-8429-054d31302402 / ext4 rw,relatime,data=ordered 0 1
    # /dev/sdb4
    UUID=04b8a615-8738-4d5c-b4de-16a0f92ade4d /home ext4 rw,relatime,data=ordered 0 2
    # /dev/sdc2
    UUID=1501a760-5c07-4279-a2c2-9695e08a6cdd /mnt/SteamLinux ext4 defaults 0 0
    # NTFS drives
    /dev/sda1 /mnt/Blue ntfs-3g defaults 0 0
    /dev/sdc1 /mnt/Black ntfs-3g defaults 0 0
    fsck /dev/sdc2 result:
    fsck from util-linux 2.22.2
    e2fsck 1.42.7 (21-Jan-2013)
    fsck.ext4: Permission denied while trying to open /dev/sdc2
    You must have r/w access to the filesystem or be root
    What should I do to make it writeable without root access?
    Last edited by White girl (2013-03-31 00:10:21)

    Mr.Elendig wrote:uhm, just run fsck as root? And for access as a normal user: chown and chmod
    Ran fsck as root, it went fine. Remounted /dev/sdc2, then ran
    sudo chmod -R +w /dev/sdc2
    I assumed it was a command to make this partition writable for a regular user. Steam still treated it as read-only.
    Got it working by using /mnt/SteamLinux instead of /dev/sdc2.
    The Steam issue is still not resolved, but it's irrelevant to the thread now... (Steam refuses to download anything to this partition; it reports 0 MB of available space)
    Last edited by White girl (2013-03-30 14:16:23)
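    A minimal sketch of the approach that ended up working (target the mount point, not the block device; the username is a placeholder):
    sudo chown -R youruser: /mnt/SteamLinux   # give your user ownership of the fs root
    # or, less strictly:
    sudo chmod -R u+w /mnt/SteamLinux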
