Newfs on large filesystem

I'm trying to build a filesystem on a 420GB disk. (Actually it's a hardware RAID 5 device consisting of 4 x 146G disks in a Storedge 3510).
When I run newfs to create the fs I get this...
# newfs -N -f 4096 /dev/dsk/c4t40d3s0
Warning: cylinder groups must have a multiple of 16 cylinders with the given
         parameters
Rounded cgsize up to 256
Warning: insufficient space in super block for
rotational layout tables with nsect 127, ntrack 127, and nrpos 8.
Omitting tables - file system performance may be impaired.
Warning: inode blocks/cyl group (1267) >= data blocks (1008) in last
    cylinder group. This implies 16128 sector(s) cannot be allocated.
/dev/rdsk/c4t40d3s0:    858578928 sectors in 53232 cylinders of 127 tracks, 127 sectors
        419228.0MB in 3327 cyl groups (16 c/g, 126.01MB/g, 15808 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 258224, 516416, 774608, 1032800, 1290992, 1549184, 1807376, 2065568, 2323760,
Initializing cylinder groups:
super-block backups for last 10 cylinder groups at:
856013296, 856271488, 856529680, 856787872, 857046064, 857304256, 857562448, 857820640, 858078832, 858337024,
So the creation of the fs worked, but it's saying that the filesystem performance may be impaired and that 16128 sectors can't be allocated.
Any ideas what I could do to eliminate these warnings?
John.

According to the Sun 3510 configuration manual, any logical disk
over 253GB needs to have explicit C/H/S values set due to the OS limit of 65535 cylinders.
If you set the options as described in the manual and reboot, you should be able to newfs the disk as desired.
See the relevant manual section at:
http://www.sun.com/products-n-solutions/hardware/docs/html/817-3711-10/ch08_configparam.html#_Toc521744099
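As a quick sanity check on whatever geometry you set, the cylinder count is the total sectors divided by (heads x sectors per track), and it must stay under 65535. A minimal sketch in a POSIX shell, assuming 255 heads and 63 sectors per track (illustrative values, not taken from the manual):

# 858578928 total sectors (from the newfs output above) / (255 heads * 63 sec/trk)
echo $(( 858578928 / (255 * 63) ))    # prints 53444 -- safely under 65535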

Similar Messages

  • How to easily navigate a large filesystem?

    I programmed something that will help navigate a large filesystem.
    First, the program crawls the whole filesystem, then you can simply type "j [query]" and then it will return the top 5 results matching the query.
    https://github.com/mallochine/chestnut
    Is anybody interested?

    tomk wrote:
    kasprosian wrote: An additional problem (at least in my case) is that mlocate is often installed globally, so it's difficult to get mlocate just for your own local account.
    Not really:
    locate -A ~ foo
    In other words, pass your home dir as one of the patterns, and tell locate to match all patterns. locate(1) for more details.
    Good work though, nothing beats scratching your own itch.
    Haha thanks
    On the other hand, though:
    [~]$ locate -A ~ nacl
    locate: warning: database `/usr/software/var/locatedb' is more than 8 days old
    [~]$ updatedb --localpaths="/home/ec2-user"
    /usr/software/bin/updatedb: line 223: /usr/software/var/locatedb.n: Read-only file system
    I guess most users on this forum use Linux on their own desktop, so privileges won't be an issue for the most part. Hmph.
    Ideally, a user should not even have to look up or know which flags to use. Ideally, it should be "Google like" -- just type in your query, and you get what you want.
    The problem might be that it's not that big of an upgrade, i.e. not a big deal.
    Could you look at https://bbs.archlinux.org/viewtopic.php?id=167567 ? this is a tangential offshoot to what I did here.
    Last edited by kasprosian (2013-08-02 14:22:21)
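    For the read-only locatedb problem above, one possible workaround, assuming GNU findutils (the paths are examples): build a private database under your own home directory and point locate at it.
    updatedb --localpaths="$HOME" --output="$HOME/.locatedb"
    locate -d "$HOME/.locatedb" -A "$HOME" nacl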

  • Restore root filesystem problem.

    Dear All,
    I take a backup of the root filesystem while it is mounted, then restart the system,
    boot to failsafe in single-user mode, and run newfs on the root filesystem.
    Then I cd to /a.
    I want to restore the backup of root onto /a,
    but it shows this error:
    #ufsrestore ivf /dev/dsk/c0d0s6
    Error: Verify volume and initialize maps.
    Media blocksize is 126
    Volume is not in DUMP format.
    Please provide solution....

    I have a similar problem and I still have no resolution; here it is:
    "I have a question on JET, and the configuration is VMware; can someone point me in the right direction please:
    nic
    Error Found on S10X_u5w0s_10_x86 vmware prodsau01 solaris server on prodmam3 hardware:
    Solaris 10 5/08 s10x_u5wos_10 X86 was not found on /dev/dsk/c1t0d0s0
    do you wish to have it mounted read-write on /a? (y,n,?)
    And the disk c1 slice is available but not bootable ?? "
    Any help to debug this issue will be appreciated. I have no support from Sun on VMware at this point.
    nic

  • Restore filesystem problem.

    Dear All,
    I take a backup of the root filesystem while it is mounted, then restart the system,
    boot to failsafe in single-user mode, and run newfs on the root filesystem.
    Then I cd to /a.
    I want to restore the backup of root onto /a,
    but it shows this error:
    #ufsrestore ivf /dev/dsk/c0d0s6
    Error: Verify volume and initialize maps.
    Media blocksize is 126
    Volume is not in DUMP format.
    Please provide solution....

    #ufsrestore ivf /dev/dsk/c0d0s6
    Is this disk slice where you did your backup, or is it your intended restore destination? This command line should have the location of your backup--such as a tape drive (/dev/rmt/0 is an example).
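    A minimal sketch of the intended sequence, assuming the dump was written to the default tape drive (device and slice names are examples, not taken from the post):
    newfs /dev/rdsk/c0d0s0            # recreate the root filesystem
    mount /dev/dsk/c0d0s0 /a          # mount it on /a
    cd /a
    ufsrestore rvf /dev/rmt/0         # full restore of the dump from tape into /a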

  • Format / newfs command hang after creating disk mirror

    Hi :
    We are using four 300G disks to create disk mirrors. c1t0d0 and c1t1d0 are mirrored, and c1t2d0 and c1t3d0 are planned to be set up as a mirror.
    One sample for the 3rd disk (c1t2d0 ) partition layout :
    Part      Tag    Flag     Cylinders         Size            Blocks
      0        usr    wm       0 - 46850      279.25GB    (46851/0/0) 585637500
      1 unassigned    wu       0                 0        (0/0/0)             0
      2     backup    wu       0 - 46872      279.38GB    (46873/0/0) 585912500
      3 unassigned    wu       0                 0        (0/0/0)             0
      4 unassigned    wu       0                 0        (0/0/0)             0
      5 unassigned    wu       0                 0        (0/0/0)             0
      6 unassigned    wu       0                 0        (0/0/0)             0
      7 unassigned    wm   46851 - 46872      134.28MB    (22/0/0)       275000
    After mirroring c1t0d0 and c1t1d0, we wanted to use format to see the current disk partitioning, and newfs to set up a filesystem on c1t2d0, but both commands hang.
    Example (we have to use Ctrl+C to stop format because the command is hung):
    format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
    /pci@0/pci@0/pci@2/scsi@0/sd@0,0
    1. c1t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
    /pci@0/pci@0/pci@2/scsi@0/sd@1,0
    2. c1t2d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
    /pci@0/pci@0/pci@2/scsi@0/sd@2,0
    3. c1t3d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
    /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Specify disk (enter its number): 2
    selecting c1t2d0
    [disk formatted]
    ^C
    Could you give some clues ? Thanks !

    Updated information:
    I used the command "truss -o /tmp/out -d -D -E -fl newfs /dev/rdsk/c1t2d0s0" to trace system calls while running newfs on the disk. A sample of the output file follows. According to the OpenSolaris mkfs source code, after mkfs reads mnttab it should invoke creat64 to create the device:
    4285/1:          8.7587     0.0004     0.0003     open("/etc/mnttab", O_RDONLY)               = 3
    4285/1:          8.7592     0.0005     0.0002     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7594     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7595     0.0001     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7597     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7599     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7601     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7603     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7605     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7606     0.0001     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7608     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7610     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7612     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7614     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7616     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7617     0.0001     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 0
    4285/1:          8.7619     0.0002     0.0000     ioctl(3, MNTIOC_GETMNTENT, 0xFFBFDF44)          = 1
    4285/1:          8.7621     0.0002     0.0000     llseek(3, 0, SEEK_CUR)                    = 0
    4285/1:          8.7623     0.0002     0.0000     close(3)                         = 0
    4285/1:          8.7625     0.0002     0.0000     uadmin(16, 4, 0)                    = 1
    4285/1:          8.7626     0.0001     0.0000     uadmin(16, 2, 161256)                    = 1
    It seems mkfs never invokes anything like creat64("/dev/md/rdsk/d20", 0666) to create the device. Why?
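    Not a diagnosis, but before running newfs it may be worth checking whether something else already claims the slice (device names follow the post):
    grep c1t2d0 /etc/mnttab            # is any slice of the disk mounted?
    swap -l                            # is a slice in use as swap?
    metastat -p 2>/dev/null            # is it already an SVM metadevice?
    fuser /dev/rdsk/c1t2d0s0           # which processes hold the raw device?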

  • Java.io.File and links

    Currently, I'm working on a project that does file recursion through large filesystems. I've noticed that when I try to traverse links (shortcuts under windows NT) they simply show up as plain files that are not followed. For the program that I'm writing, this behaviour is good; I don't want to follow links. However, I don't know if this is the intended behaviour for the Java VM. Will this behaviour change in the future? If it does, will it default to following links, or to presenting them as plain files (as it does currently)?
    TIA for any help that you can provide.

    Thanks, it's started to work now... at least it gives output now. But now the problem is that it doesn't remove the HTML tags! It simply returns the whole source code and prints it to the console!

  • Lion(?) appears to have destroyed my drive

    Ok, so last Friday (6 days ago), I installed Lion on my home Mac Pro.  The install went off without a hitch, and everything seemed to be working pretty well. Then today I woke my computer from sleep, and began working on a motion graphics piece in C4D.   I went to edit a texture, and that is when all **** broke loose. First, I got a dialogue that said the resource file for this texture couldn't be found... odd I thought, though probably a permissions issue.  So I thought I would access the directory and check the permissions.  I invoked the command-tab app switcher, which then froze.  I was able to hide C4D, but all Finder operations were in beach ball purgatory.  After waiting for 10 minutes or so, I had to do a hard shut down.
    My next 2 attempts at booting up were met with kernel panics... luckily I had a cloned Snow Leopard drive which I was able to boot into by invoking the option key at startup (I was presented with 2 options.. my SL clone, and the Lion recovery partition).  Once in SL, I opened up Disk Utility.  It could see the drive, but not mount it.  I had an error stating that I had an invalid B-tree node size.  I attempted to repair, and was told after some time that Disk Utility could not repair the drive, and I should pull all the files off that I can, and then reformat the drive.  Great.. I would love to pull the files off, but since it won't mount, this isn't exactly possible... thanks for nothing, Disk Utility.
    So then, I pulled out the big guns.. first Drive Genius 3, which again could see the drive, but do nothing with it, then Diskwarrior 4.3, again useless, and finally Data Rescue 3, which failed on both a Quick Scan, and a Deep Scan. My next step will be a target mode inspection from my laptop, failing that I will attempt Data Rescue's clone function... though I will have to buy a drive to clone too.  Hope appears to be dimming that I will get back the information I had there which I have accrued since my last back-up (shockingly, this is about 300GB worth of data, it's been a busy week).
    Anyway, the drive is not making any clicking sounds, or showing any other signs of hardware failure, and is only about 7-8 months old.  It appears to me, then, that there was an extremely deep-level file corruption that just hosed the directory.  I do hope that my case is just an anomaly, and not symptomatic of a larger filesystem issue.
    If any of you have any additional suggestions for how I can resurrect this drive, I'd appreciate it.
    Thanks,
    Will

    OK, well... I did a restart, invoking the 'option' command, and the Lion Recovery partition did not show up, so I went ahead and booted back into SL, wherein I was given a 'The disk you inserted was not readable by this computer' with a choice of ejecting, ignoring, or initializing.  I chose Ignore, and the drive now just shows up in disk utility as 'media' with no opportunity to interact with it.
    I had the presence of mind to make a Lion DVD before I installed, so I booted with that, and it too was unable to interact with the drive.
    Given the degraded state of this drive, I have no reason to suspect that entering target disk mode and mounting to my MBP would bear any fruit... looks like this sucker is beyond hope.  Yay...
    I will not be re-installing Lion until at least .2 comes out at this point.. I can't afford a wasted day like this again.

  • Export error when creating more than 2 gb file

    Good day,
    There is a problem with the export utility: while exporting a file larger than 2 GB, an error occurred.
    00002- Error in writing to export file
    My OS is SCO UNIX 6 and the Oracle database version is 7.1.3.
    Kindly look into this issue.
    Thanks & Regards.

    SCO OpenServer 6 now provides users with 1 TeraByte file support. A TeraByte is equal to approximately 1 trillion bytes. This new feature allows SCO OpenServer 6 users to create files that are over the previous 2 gigabyte file limit in older versions of SCO OpenServer. This new feature provides greater flexibility in file size and application usage.
    To showcase this new feature, we will create a file that is larger than 2+ Gigabytes. In order to accomplish this task, we will need to use SCO OpenServer 6 large filesystem-aware commands. These commands can be found in /u95/bin.
    To Create a 2+ Gigabyte File:
    In the KDE desktop environment, click on the UNIX icon. The UNIX terminal window will appear.
    In the terminal window, type
    /u95/bin/dd if=/dev/urandom of=bigfile bs=100k count=40000
    Press Enter. A 4+ Gigabyte file will be created.
    NOTE: This exercise will require approximately 5 minutes to complete. This test also requires approximately 5 Gigabytes of available disk space on your machine.
    You have now successfully created a 2+ Gigabyte file in SCO OpenServer 6. This completes the detailed instructions for the product walkthrough. Feel free to investigate other features within SCO OpenServer 6.
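    As a quick check afterwards (assuming /u95/bin also provides a large-file-aware ls, per the note above that such commands live in /u95/bin):
    /u95/bin/ls -l bigfile    # should report roughly 4,096,000,000 bytes (100k x 40000)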

  • Small photo studio needs config help

    We are a small photo studio and we are about to purchase an xSERVE with 4 500GB drives and add drives to the system as time goes on. I need some advice on the best way to set it up and config it.
    We will be using the xSERVE RAID attached via fibre to a new G5 Tower which will be connected to a gigabit switch. The switch has 3 computers connected @ gigabit. The xSERVE RAID will store all photography jobs currently in post-production, and after post is complete the jobs will move off this system and be archived using another system to save space on the RAID.
    3 users (computers) will need to access the RAID to edit the RAW files (15MB/each) jobs and work on photoshop files for retouching. We generate a lot of information and can produce as much as 50 GB / day of shooting. For these jobs we can shoot as many as 7 days in a row, so that would be 350 GB just for the RAW files. We then might retouch 150 files from that job or more depending on the client.
    So my main questions would be how to best set up the RAID and different components? I think RAID 5 would be a good solution. But what other setup/config options should I be considering?
    I know this is not an easy answer and there are multiple options. But if you could be as kind to give some different options/scenarios, I would greatly appreciate it.
    I think its neat that the mac community supports these forums and they have been extremely helpful.
    Thank you mac people.
      Mac OS X (10.4.7)   all computers are running OSX 10.4.7

    I like RAID5 for its ability to tolerate losing a drive without losing data. One thing that you have to account for is that the price you pay for surviving a drive failure is 25% of your disk space. In other words, once you take these four drives and make them into a RAID5, you can expect to have ~1.5TB available. So you may want to add a 5th drive. Personally, as cheap as drives are, I'd put the full 7 in.
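    (As a rule of thumb, RAID 5 usable capacity is about (N - 1) x drive size: with 500GB drives, five drives give ~2TB usable and seven give ~3TB.)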
    You really don't have enough client machines to bother with a lot of the esoteric stuff. A simple RAID5 gives you durability and enough speed that the network will be the speed bottleneck.
    As you get into these larger filesystems, backups and disaster recovery become much more challenging because of the time it takes to handle massive amounts of data.
    Roger

  • SDXC & DROID X

    does anyone know if the droid X can handle SDXC cards, I ask because of the below:
    In the 3.0 specification, the electronic interface of SDHC and SDXC cards is the same. This means that SDHC hosts which have drivers which recognize the newly used capability bits, and have operating system software which understands the exFAT filesystem, are compatible with SDXC cards. The decision to label cards with a capacity greater than 32GB as SDXC and to use a different filesystem is due solely to the limitations in creating larger filesystems in certain versions of Microsoft Windows. Other operating system kernels, such as Linux, make no distinction between SDHC and SDXC cards, as long as the card contains a compatible filesystem.
    http://en.wikipedia.org/wiki/Secure_Digital
      I really would like to have more than 32 gig of storage, if i had 64 gb i could stop carrying a second device with me and seeing as there are sdxc cards at 64 and 128 gb, it would be a great solution without changing devices. I just do not want to drop a hundred bucks without having a better idea

    The microSDXC would be the same physical size as the microSDHC; their size specifications are exactly the same according to the standards board that controls them. SDHC was originally going to support capacities greater than 32GB (up to 2TB as well), and could have, but a decision was made to limit that to SDXC. I'm not looking for 2TB, but 64GB should easily be addressable. I feel the issue is support for exFAT, for which I cannot find any mention of compatibility or incompatibility with Android.
    Sorry for not specifying the micro earlier; I thought it would be assumed since I did not label the SDHC micro either.

  • Unable to create filesystem (mkfs.ext4) on large 2TB GPT virtual disk using Linux VM.

    I am unable to create a file system on a large (> 2TB disk) virtual disk for a Linux VM.  I can create the disk, attach it to the VM, partition it with "parted", but I cannot run mkfs.ext4.  Details below.
    Hyper-V 2012 Core (w/ all Windows/Microsoft updates as of 4/19).
    CentOS 6.4 VM w/ 4 virtual processors, 4GB RAM, and 3 dynamic drives: 
    /dev/sda  100GB IDE dynamic vhdx
    /dev/sdb  75GB IDE dynamic vhdx
    /dev/sdc  10TB SCSI dynamic vhdx
    Using parted, created 500GB partition on the 10TB drive (/dev/sdc1). 
    (parted) select /dev/sdc
    Using /dev/sdc
    (parted) print
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdc: 11.0TB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Number  Start   End    Size   File system  Name                Flags
     1      1049kB  500GB  500GB               production_archive
    then run: mkfs.ext4 /dev/sdc1
    repeating error on console from mkfs.ext4:
    INFO: task mkfs.ext4:2581 blocked for more than 120 seconds
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Runaway errors in /var/log/messages until my /var filesystem filled up - 25G worth:
    -rw-------. 1 root root 25085329408 Apr 19 23:15 messages
    Apr 19 17:39:28 nfs2 kernel: sd 4:0:0:0: [sdc] Sense Key : No Sense [current]
    Apr 19 17:39:28 nfs2 kernel: sd 4:0:0:0: [sdc] Add. Sense: No additional sense information
    Apr 19 17:39:28 nfs2 kernel: hv_storvsc vmbus_0_13: cmd 0x93 scsi status 0x2 srb status 0x6
    Same problem happens when running "mkfs.ext4 -E lazy_itable_init=1 /dev/sdc1"

    Hi,
    Thank you for your post.
    I am trying to involve someone familiar with this topic to further look at this issue.
    Lawrence
    TechNet Community Support
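    For what it's worth: SCSI opcode 0x93 in the hv_storvsc line above is WRITE SAME(16), which mkfs.ext4 can issue while discarding/zeroing block ranges. A hedged workaround, assuming your e2fsprogs build supports the extended option, is to skip the discard pass:
    mkfs.ext4 -E nodiscard /dev/sdc1    # sketch only; not a confirmed fix for the hv_storvsc hang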

  • Need advice migrating from AIX 7 filesystems to Exadata Linux ASM - Large DBMS

    We are using 11.2.0.3 and 11.2.0.4 databases on AIX 7.1, using AIX filesystems. We have some 2TB databases and some much smaller: about 50 production and 200 non-production databases. We are migrating to Exadata 4 with Linux. What is your advice on the migration method that should be used? We may be able to take some outage time to do this.

    I echo the data pump export/import recommendations. I've used data pump several times to migrate databases to Exadata - including an environment with a few DBs on AIX Power PC to Exadata last year. If you can take downtime, it is the simplest, most flexible and least risky method - and if you put a little thought and extra effort into it, it can still be very performant. On Exadata it's good to set up the environment according to Oracle's published best practices - which usually means some configuration changes from your source. Data Pump allows you to set this up first and have it ready to go - then do the migration into a properly configured database. You can also put the source DB into read-only while the migration takes place if that helps the downtime requirements.
    Some suggestions to maximize performance and limit downtime:
    Consider using DBFS file system on the Exadata, and then mount it using NFS to your source DB servers, for the data pump file location. This may take a little longer on the export, but avoids having to do a separate copy of the files over the network afterward and can make up the time. Once on Exadata, importing off the local DBFS can really perform well.
    Use parallelism with data pump to speed up the export and import. The degree will need to be determined based on your CPU capacity, but parallelism will speed up the migration dramatically.
    If you're licensed for compression - use the compression with Data pump to minimize the file size.
    Precreate all your tablespaces first, and possibly even the schemas - this goes back to setting things up according to Exadata best practices. You can potentially use HCC and other things on the Exadata tablespaces if you so choose. You can always use the data pump mapping if you want to change a few things about the tablespace names and such from the source.
    If you're really trying to maximize the performance and minimize downtime, you can spend some time pulling out the DDL for your indexes and constraints from the source - and have them scripted. Then export only the data, not the indexes and constraints, and after the data is imported use your DDL scripts, with high degrees of parallelism, to create the indexes and constraints afterward. Don't forget to alter the index objects to remove the parallelism afterward so as not to leave a bunch of high-parallel indexes in place. This method can usually perform much faster than letting data pump do this.
    Test well, and look for objects that don't migrate correctly or well with data pump and potentially use SQL scripts to bring them over manually.
    Look for opportunities with some objects, for example meta data or DDL that doesn't change, to pre-create on Exadata before taking the downtime and starting the migration.
    HTH,
    Kasey
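    A minimal command-line sketch of the parallel, compressed Data Pump flow described above (directory object, connect strings, file names, and parallel degree are illustrative, not from the post):
    expdp system@SRC full=y directory=DP_DIR dumpfile=mydb_%U.dmp logfile=mydb_exp.log parallel=8 compression=all exclude=index exclude=constraint
    impdp system@EXA full=y directory=DP_DIR dumpfile=mydb_%U.dmp logfile=mydb_imp.log parallel=8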

  • Filesystem under Solaris 8

    I get this every time I put a filesystem on my Sun
    lakshmi root /: newfs -f 4096 /dev/rdsk/c3t1d0s6
    Cylinder groups must have a multiple of 16 cylinders with the given
    parameters
    Rounded cgsize up to 256
    Warning: insufficient space in super block for
    rotational layout tables with nsect 127, ntrack 127, and nrpos 8.
    Omitting tables - file system performance may be impaired.
    /dev/rdsk/c3t1d0s6: 997159296 sectors in 61824 cylinders of 127 tracks, 127 sectors
    486894.2MB in 3864 cyl groups (16 c/g, 126.01MB/g, 15808 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    Any help would be great.

    How large can a filesystem be under Solaris 8?
    It depends on the file system. UFS is limited to 1TByte. SAM-QFS is virtually unlimited. Other file systems may be limited by the size of the media (CD-ROM/HSFS etc.)
    -- richard

  • How to create one ISO file larger than 2G

    hi,
    When I create a 3GB ISO (larger than 2GB) on a Solaris 10 machine, I get the error below:
    mkisofs: File too large. cannot fwrite 32768*1
    Does anyone have a suggestion for this problem? How can I create ISO files larger than 2GB?
    See the detail logs information:
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    mkisofs -o my.iso -A "Software DVD" -D -L -P "Study " -p CBC/XLE -r -T -V R50 -v iso
    mkisofs: The option '-L' is reserved by POSIX.1-2001.
    mkisofs: The option '-L' means 'follow all symbolic links'.
    mkisofs: Mkisofs-2.02 will introduce POSIX semantics for '-L'.
    mkisofs: Use -allow-leading-dots in future to get old mkisofs behavior.
    mkisofs: The option '-P' is reserved by POSIX.1-2001.
    mkisofs: The option '-P' means 'do not follow symbolic links'.
    mkisofs: Mkisofs-2.02 will introduce POSIX semantics for '-P'.
    mkisofs: Use -publisher in future to get old mkisofs behavior.
    Warning: creating filesystem that does not conform to ISO-9660.
    mkisofs 2.01 (sparc-sun-solaris2.10)
    Scanning iso
    Scanning iso/rac_stage3
    Using RAC_S000 for /rac_stage2 (rac_stage1)
    Using RAC_S001 for /rac_stage1 (rac_stage3)
    Using INSTA000.1;1 for ..
    Writing: The File(s) Start Block 48
    0.42% done, estimate finish Thu Feb 1 16:12:35 2007
    0.84% done, estimate finish Thu Feb 1 16:12:36 2007
    1.26% done, estimate finish Thu Feb 1 16:12:36 2007
    1.69% done, estimate finish Thu Feb 1 16:12:36 2007
    2.11% done, estimate finish Thu Feb 1 16:12:36 2007

    87.65% done, estimate finish Thu Feb 1 16:14:54 2007
    88.07% done, estimate finish Thu Feb 1 16:14:54 2007
    mkisofs: File too large. cannot fwrite 32768*1
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Thanks!

    Does "copy protection" matter for the problem that I have?
    My computer always gets stuck while I'm trying to make a DVD or image.
    I made a few disks with menus in the past and had no problems at all.
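    Not a confirmed diagnosis, but a 2GB write ceiling on Solaris often comes from the shell's file-size limit or from a UFS filesystem mounted without largefiles; a few hedged checks:
    ulimit -f                     # per-process file size limit ("unlimited" is what you want)
    mount -v | grep largefiles    # a nolargefiles UFS mount caps files at 2GB
    df -k .                       # confirm which filesystem my.iso is being written to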

  • How do I delete a large file without my computer crashing?

    I am playing around with HD video and have a very large QuickTime file (12.5GB) that froze my computer while rendering.  I need to delete it.  (CRASH: the screen freezes and a dark line slowly comes from top to bottom; once the entire screen is slightly darker, a panel in the middle pops up and tells me to turn off the computer by holding the power button.)
    After a lot of trial and error, it seems the problem is that the computer can't handle the massive deletion.  Unlike a lot of other similar issues on this forum, this is a single file and not hundreds, so I can't delete portions at a time and be done with it.  I have run through all my ideas and need help on how to delete this file from my computer. I have come up with two possible ways:
    1 - Split this video into smaller files.  I don't know how to do this: I have SPLIT-CONCAT, but that duplicates (then splits) and does not actually split the original.  I tried "replacing" the video with a same-named smaller file and it crashed the computer.
    2 - Change preferences on this computer somehow to allow this massive deletion.  I tried a force-empty through Terminal and it did the same crash.  I don't know if there is a preference (disk manager) or otherwise to get rid of this file.
    I am on a 2006 Intel iMac running 10.7.4

    Sun May 13 21:33:15 2012
    panic(cpu 0 caller 0xffffff80003224be): "jnl: transaction too big (1831424 >= 1834496 bytes, bufsize 4096, tr 0xffffff800b327f48 bp 0xffffff807a5b20e0)\n"@/SourceCache/xnu/xnu-1699.26.8/bsd/vfs/vfs_journal.c:2623
    Backtrace (CPU 0), Frame : Return Address
    0xffffff807ee632b0 : 0xffffff8000220792
    0xffffff807ee63330 : 0xffffff80003224be
    0xffffff807ee63370 : 0xffffff80004d56a3
    0xffffff807ee63390 : 0xffffff800050dc0d
    0xffffff807ee63460 : 0xffffff800050926c
    0xffffff807ee63540 : 0xffffff80005101da
    0xffffff807ee637a0 : 0xffffff8000510294
    0xffffff807ee63860 : 0xffffff8000510d6c
    0xffffff807ee63930 : 0xffffff80004e9d74
    0xffffff807ee639e0 : 0xffffff80004e9fe0
    0xffffff807ee63a40 : 0xffffff80004de46c
    0xffffff807ee63ad0 : 0xffffff80004dea22
    0xffffff807ee63b10 : 0xffffff8000319b82
    0xffffff807ee63b50 : 0xffffff80002ffb53
    0xffffff807ee63b80 : 0xffffff80002ffc3c
    0xffffff807ee63ba0 : 0xffffff800030bc67
    0xffffff807ee63d90 : 0xffffff800030bd20
    0xffffff807ee63f50 : 0xffffff80005cd61b
    0xffffff807ee63fb0 : 0xffffff80002daa13
    BSD process name corresponding to current thread: Finder
    Mac OS version:
    11E53
    Kernel version:
    Darwin Kernel Version 11.4.0: Mon Apr  9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64
    Kernel UUID: A8ED611D-FB0F-3729-8392-E7A32C5E7D74
    System model name: iMac8,1 (Mac-F227BEC8)
    System uptime in nanoseconds: 2062336604249
    last loaded kext at 46331332223: com.apple.filesystems.msdosfs          1.7.1 (addr 0xffffff7f8137a000, size 57344)
    loaded kexts:
    com.seagate.driver.PowSecLeafDriver_10_5          5.1.1
    com.Logitech.Control Center.HID Driver          3.4.0
    com.seagate.driver.PowSecDriverCore          5.1.1
    com.apple.filesystems.msdosfs          1.7.1
    com.apple.driver.AppleHWSensor          1.9.5d0
    com.apple.driver.AppleTyMCEDriver          1.0.2d2
    com.apple.driver.AudioAUUC          1.59
    com.apple.driver.AppleHDAHardwareConfigDriver          2.2.0f3
    com.apple.driver.AppleHDA          2.2.0f3
    com.apple.driver.AppleUpstreamUserClient          3.5.9
    com.apple.driver.AppleMCCSControl          1.0.26
    com.apple.GeForce          7.1.8
    com.apple.filesystems.autofs          3.0
    com.apple.driver.AppleSMCPDRC          5.0.0d0
    com.apple.iokit.IOUserEthernet          1.0.0d1
    com.apple.iokit.IOBluetoothSerialManager          4.0.5f11
    com.apple.Dont_Steal_Mac_OS_X          7.0.0
    com.apple.driver.AudioIPCDriver          1.2.2
    com.apple.driver.ACPI_SMC_PlatformPlugin          5.0.0d0
    com.apple.driver.AppleMuxControl          3.0.16
    com.apple.driver.AppleBacklight          170.1.9
    com.apple.driver.AppleLPC          1.5.8
    com.apple.driver.BroadcomUSBBluetoothHCIController          4.0.5f11
    com.apple.driver.AppleIRController          312
    com.apple.driver.AppleFireWireStorage          3.0.1
    com.apple.driver.initioFWBridge          3.0.1
    com.apple.driver.IOFireWireSerialBusProtocolSansPhysicalUnit          3.0.1
    com.apple.driver.LSI_FW_500          3.0.1
    com.apple.driver.Oxford_Semi          3.0.1
    com.apple.driver.StorageLynx          3.0.1
    com.apple.AppleFSCompression.AppleFSCompressionTypeDataless          1.0.0d1
    com.apple.AppleFSCompression.AppleFSCompressionTypeZlib          1.0.0d1
    com.apple.BootCache          33
    com.apple.iokit.SCSITaskUserClient          3.2.0
    com.apple.driver.XsanFilter          404
    com.apple.iokit.IOAHCIBlockStorage          2.0.3
    com.apple.driver.AppleFWOHCI          4.8.9
    com.apple.iokit.AppleYukon2          3.2.2b1
    com.apple.driver.AppleUSBHub          4.5.0
    com.apple.driver.AirPortBrcm43224          501.36.15
    com.apple.driver.AppleEFINVRAM          1.5.0
    com.apple.driver.AppleAHCIPort          2.3.0
    com.apple.driver.AppleIntelPIIXATA          2.5.1
    com.apple.driver.AppleUSBEHCI          4.5.8
    com.apple.driver.AppleUSBUHCI          4.4.5
    com.apple.driver.AppleHPET          1.6
    com.apple.driver.AppleACPIButtons          1.5
    com.apple.driver.AppleRTC          1.5
    com.apple.driver.AppleSMBIOS          1.8
    com.apple.driver.AppleACPIEC          1.5
    com.apple.driver.AppleAPIC          1.5
    com.apple.driver.AppleIntelCPUPowerManagementClient          193.0.0
    com.apple.nke.applicationfirewall          3.2.30
    com.apple.security.quarantine          1.3
    com.apple.driver.AppleIntelCPUPowerManagement          193.0.0
    com.apple.driver.DspFuncLib          2.2.0f3
    com.apple.nvidia.nv50hal          7.1.8
    com.apple.NVDAResman          7.1.8
    com.apple.kext.triggers          1.0
    com.apple.iokit.IOFireWireIP          2.2.4
    com.apple.driver.AppleHDAController          2.2.0f3
    com.apple.iokit.IOHDAFamily          2.2.0f3
    com.apple.iokit.IOSurface          80.0.2
    com.apple.iokit.IOSerialFamily          10.0.5
    com.apple.iokit.IOAudioFamily          1.8.6fc17
    com.apple.kext.OSvKernDSPLib          1.3
    com.apple.driver.ApplePolicyControl          3.0.16
    com.apple.driver.AppleSMC          3.1.3d8
    com.apple.driver.IOPlatformPluginLegacy          5.0.0d0
    com.apple.driver.AppleGraphicsControl          3.0.16
    com.apple.driver.AppleBacklightExpert          1.0.3
    com.apple.iokit.IONDRVSupport          2.3.2
    com.apple.iokit.IOGraphicsFamily          2.3.2
    com.apple.driver.AppleSMBusPCI          1.0.10d0
    com.apple.driver.IOPlatformPluginFamily          5.1.0d17
    com.apple.driver.AppleFileSystemDriver          13
    com.apple.driver.AppleUSBHIDKeyboard          160.7
    com.apple.driver.AppleHIDKeyboard          160.7
    com.apple.driver.AppleUSBBluetoothHCIController          4.0.5f11
    com.apple.iokit.IOBluetoothFamily          4.0.5f11
    com.apple.iokit.IOUSBMassStorageClass          3.0.1
    com.apple.iokit.IOFireWireSerialBusProtocolTransport          2.1.0
    com.apple.iokit.IOUSBHIDDriver          4.4.5
    com.apple.driver.AppleUSBMergeNub          4.5.3
    com.apple.driver.AppleUSBComposite          4.5.8
    com.apple.iokit.IOSCSIMultimediaCommandsDevice          3.2.0
    com.apple.iokit.IOBDStorageFamily          1.6
    com.apple.iokit.IODVDStorageFamily          1.7
    com.apple.iokit.IOCDStorageFamily          1.7
    com.apple.iokit.IOATAPIProtocolTransport          3.0.0
    com.apple.iokit.IOFireWireSBP2          4.2.0
    com.apple.iokit.IOSCSIBlockCommandsDevice          3.2.0
    com.apple.iokit.IOSCSIArchitectureModelFamily          3.2.0
    com.apple.iokit.IOFireWireFamily          4.4.5
    com.apple.iokit.IOUSBUserClient          4.5.8
    com.apple.iokit.IO80211Family          420.3
    com.apple.iokit.IONetworkingFamily          2.1
    com.apple.iokit.IOAHCIFamily          2.0.8
    com.apple.iokit.IOATAFamily          2.5.1
    com.apple.iokit.IOUSBFamily          4.5.8
    com.apple.driver.AppleEFIRuntime          1.5.0
    com.apple.iokit.IOHIDFamily          1.7.1
    com.apple.iokit.IOSMBusFamily          1.1
    com.apple.security.sandbox          177.5
    com.apple.kext.AppleMatch          1.0.0d1
    com.apple.security.TMSafetyNet          7
    com.apple.driver.DiskImages          331.6
    com.apple.iokit.IOStorageFamily          1.7.1
    com.apple.driver.AppleKeyStore          28.18
    com.apple.driver.AppleACPIPlatform          1.5
    com.apple.iokit.IOPCIFamily          2.6.8
    com.apple.iokit.IOACPIFamily          1.4
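    The panic string ("jnl: transaction too big") points at the HFS+ journal choking on the huge delete. One hedged workaround, assuming the volume will still mount (back up what you can first; the file path is hypothetical, and this may need to be run while booted from another volume):
    sudo diskutil disableJournal /         # turn journaling off on the affected volume
    rm ~/Movies/huge-render.mov            # hypothetical path to the 12.5GB file
    sudo diskutil enableJournal /          # turn journaling back on afterwards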
