Solaris Files......

Hello,
Can someone explain to me what the following files are used for in Solaris:
/etc/name_to_major
/etc/name_to_sysnum
/etc/path_to_inst
/etc/driver_aliases
/etc/driver_classes
Thanks & Happy Holiday !
Zak

These are used or referenced when device drivers are
loaded. For details, you should start with the Writing
Device Drivers book. http://docs.sun.com/app/docs/doc/816-4854
-- richard
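
If you just want a feel for what's in them, they are all plain text and grep-friendly. A minimal sketch, using the common "sd" disk driver as an example (any driver name from your own /etc/name_to_major will do):

grep -w sd /etc/name_to_major      # driver name -> major device number
grep -w sd /etc/driver_aliases     # driver name -> hardware aliases it can bind to
grep 'sd@' /etc/path_to_inst       # physical device path -> instance number -> driver name
grep -w scsi /etc/driver_classes   # driver -> class, used to group related (nexus) drivers
grep -w open /etc/name_to_sysnum   # this one maps system call names to numbers, not drivers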

Similar Messages

  • Solaris File System/Device Management Information

    Hi All,
    I am writing a paper about the Solaris operating system and require information about the file system and device management of Solaris 8.
    File Management:
    I am looking for information or links to sites that would help me find more about the Solaris file system architecture, how it uses drive space, and some of the features of the Solaris file system.
    Device Management:
    I am also looking for information or website links that cover how Solaris talks to devices (an overview) and other device management architecture information.
    any help is appreciated!

    Don't use cp, because it follows soft links and copies their targets as regular files. It's even worse when you have cyclic soft links. Use tar or cpio and you should be OK.
    That is, if you have a soft link to a 10 GB file, a cp will result in 20 GB of data.
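    A minimal sketch of the tar/cpio approach (the source and destination paths are placeholders); both copy symlinks as symlinks instead of duplicating their targets:
    cd /export/source
    tar cf - . | ( cd /export/destination && tar xpf - )
    # or, with cpio:
    find . -depth -print | cpio -pdm /export/destination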

  • Solaris file system space

    Hi All,
    While trying to use the df -k command on my Solaris box, I get the output shown below.
    Filesystem 1024-blocks Used Available Capacity Mounted on
    rpool/ROOT/solaris-161 191987712 6004395 140577816 5% /
    /devices 0 0 0 0% /devices
    /dev 0 0 0 0% /dev
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 4184236 496 4183740 1% /system/volatile
    objfs 0 0 0 0% /system/object
    sharefs 0 0 0 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 146582211 6004395 140577816 5% /lib/libc.so.1
    fd 0 0 0 0% /dev/fd
    swap 4183784 60 4183724 1% /tmp
    rpool/export 191987712 35 140577816 1% /export
    rpool/export/home 191987712 32 140577816 1% /export/home
    rpool/export/home/123 191987712 13108813 140577816 9% /export/home/123
    rpool/export/repo 191987712 11187204 140577816 8% /export/repo
    rpool/export/repo2010_11 191987712 31 140577816 1% /export/repo2010_11
    rpool 191987712 5238974 140577816 4% /rpool
    /export/home/123 153686630 13108813 140577816 9% /home/12
    My question here is: why does the /usr/lib/libc/libc_hwcap1.so.1 file system show the same size as the / (root) filesystem, and what is the significance of the /usr/lib/libc/libc_hwcap1.so.1 file system?
    Thanks in advance for your help.

    You must have a lot of small files on the file system.
    There are a couple of ways to deal with it; the simplest is to increase the size of the filesystem.
    Alternatively, if you can, create a new filesystem with a higher inode count so you can utilize the space and still have enough inodes. Check out the mkfs_ufs man page and the nbpi=n option.
    my 2 bits
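    As a rough illustration of that suggestion (this applies to UFS filesystems; the device and mount point are placeholders, not taken from the df output above):
    df -o i /export/data                 # shows whether inodes or blocks are what's running out
    newfs -i 2048 /dev/rdsk/c0t1d0s6     # smaller nbpi => more inodes; see mkfs_ufs(1M) for defaults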

  • Protect Solaris file system

    Can we protect the file system (folder-level protection) on a Solaris box using Access Manager?

    patrickez wrote:
    > If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem?
    Solaris 10 doesn't yet support ZFS for a root filesystem, but it is working in some OpenSolaris distributions.
    You could use Sun Volume Manager to create a mirror for your root filesystem.
    > The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too?
    No. They're separate filesystems.
    > Or can I independently rebuild the root filesystem and just remount the zfs filesystem?
    Yes. (Actually, you can import the ZFS pool you created.)
    > Should I install Solaris 10 on a mirrored set of drives?
    If you have one, that would work as well.
    > Can the root filesystem be zfs too?
    Not currently in Solaris 10. The initial root support in OpenSolaris will require the root pool to be only a single disk or mirrors. No striping, no raidz.
    Darren
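    For concreteness, a rough sketch of both pieces mentioned above (all device names are placeholders, and the full SVM root-mirror procedure, including boot blocks, is in the Solaris Volume Manager docs):
    # data pool on the extra drives
    zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
    # mirror the UFS root with Solaris Volume Manager
    metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
    metainit -f d11 1 1 c0t0d0s0
    metainit d12 1 1 c0t1d0s0
    metainit d10 -m d11
    metaroot d10          # updates /etc/vfstab and /etc/system for the new root metadevice
    # after rebooting:
    metattach d10 d12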

  • Solaris File System / Virtual File System Documentation

    can anybody help me in finding Solaris virtual file system documentation/books?
    thanks in advance,
    -mayur

    AFAIK, the VFS layer is not an official (and documented) interface
    and may change from Solaris release to Solaris release
    (perhaps even with a new kernel patch).
    Otherwise, you can probably get the Solaris 8 Foundation Source
    and use it as the definitive reference documentation ;-)

  • Solaris file system performance

    I ran postmark on Solaris (UFS) and on ext3 to compare their performance, and found that Solaris is three times slower than ext3. Is there anything wrong? I used "mount" to display all mount points and can see that logging is enabled for the disk I'm testing. I tried disabling logging with "mount -o nologging" and the performance is even worse, so I think logging is working. But why is the performance still so bad? My computer has 2 GB of RAM and the disk I'm using is a SATA disk. A white paper from Sun says Solaris is much faster than ext3, and they used postmark too. Is there some other parameter I should be tuning?

    postmark is all about metadata operation performance, which UFS has always been fairly bad at, since Solaris is paranoid about flushing metadata and syncing the disks religiously to avoid filesystem corruption in case of a sudden reboot.
    Logging is not primarily a performance optimisation. Its primary function is to avoid having to fsck the disk after an unclean shutdown, so it's not surprising that logging doesn't help postmark all that much.
    You can try mounting the filesystem noatime, which should help a bit.
    You can also try ZFS instead of UFS, which supposedly has excellent postmark scores.
    You should also be aware that postmark is a very contrived benchmark which only adequately represents the performance of a small subset of programs, specifically those which manipulate large numbers of small files, like a mail server.
  • Solaris file descriptor question

    Hi,
    We have an application on Solaris 2.6 and
    the shell in which the server runs has a
    file descriptor limit of 1024. What does
    this mean? Does this mean that every process
    from the shell will have 1024 fds? What
    is the maximum # of fds that a solaris 2.6
    system can provide?
    When I run "sysdef", I get:
    ffffffff:fffffffd file descriptors
    How do I interpret this line?
    Is this 64K - some value?
    If system limit is 64K and if each
    shell has 1024, how are the fds allocated
    to the shells?
    What I mean is:
    say I have 3 shells each with descriptor
    limit of 1024, then is the distribution
    something like 1024-2047 for shell 1,
    2048 - 3071 for shell 2 (i.e. 3072) and
    3072 - 4095 for shell 3?
    Appreciate any explanation of this anyone
    can offer.
    thanks,
    mshyam

    Hi There,
    About File Descriptors and Their Limitations:
    All versions of Solaris (including Solaris 7 64-bit) have a default "soft" limit of 64 and a default "hard" limit of 1024.
    Processes may need to open many files or sockets as file descriptors. Standard I/O (stdio) library functions have an effective limit of 256 file descriptors, because fopen() stores the descriptor in a char field and will fail if it cannot get a descriptor between 0 and 255. The open() system call stores it in an int, which removes this limitation. However, if open() has already used descriptors 0 to 255 without any being closed, fopen() will
    not be able to open anything, since all the low-numbered descriptors have been used up. Applications that need many file descriptors for a large number of sockets or other raw files should be made to use descriptors numbered above 256. This leaves the low-numbered ones free for system functions, such as name services, which depend upon stdio routines.
    (See p. 368, "Performance and Tuning - Java and the Internet".)
    There are limitations on the number of file descriptors
    available to the current shell and its descendants (see the ulimit man page). The maximum number of file descriptors that can be safely used by the shell and Solaris processes is 1024.
    This limitation has been lifted for Solaris 7 64-bit, where it can be 64K (65536).
    Therefore the recommended maximum values to be added to /etc/system are:
    set rlim_fd_cur=1024
    set rlim_fd_max=1024
    To use the limit command with csh:
    % limit descriptors 1024
    To use the ulimit command with Bourne or ksh:
    $ ulimit -n 1024
    However, some third-party applications need the max raised. A possible recommendation would be to increase rlim_fd_max, but not the default (rlim_fd_cur). Then rlim_fd_cur can be raised on a per-process basis if needed, but the higher setting
    for rlim_fd_max doesn't affect all processes.
    I hope this helps your understanding of the systemwide file descriptor limit in conjunction with the shell and per-process file descriptor limits.
    ....jagruti
    Developer Technical Support
    Sun Microsystems Inc.
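    A quick way to see which limits actually apply (the PID is just an example; the proc tools live in /usr/proc/bin on Solaris 2.6):
    ulimit -Sn     # soft (current) fd limit of this ksh
    ulimit -Hn     # hard fd limit of this ksh
    pfiles 1234    # which descriptors a given process actually has open
    plimit 1234    # per-process resource limits, including "nofiles" (on releases that ship plimit)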

  • Installation on Solaris - file permissioning issues

    Hi all
    After a clean silent installation we found a lot of files/directories with 777 file modes under the installation directory, and they pose unacceptable security risks, e.g.
    -rwxrwxrwx 1 root root 1823 Oct 12 02:54 /export/opt/jstudio_ent8/ide/bin/runide.sh
    drwxrwxrwx 2 root root 1024 Feb 14 11:45 /export/opt/jstudio_ent8/ide/enterprise1/jakarta-tomcat-5.5.7/bin
    -rwxrwxrwx 1 root root 7325 Jan 20 2005 /export/opt/jstudio_ent8/ide/enterprise1/jakarta-tomcat-5.5.7/bin/catalina.bat
    Is that something that should be fixed?
    thanks
    dickson

    Yes, the problem exists and is being addressed in the next release of JSE. For now I can only suggest that you chmod -R the installation directory.
    Sorry for this inconvenience.
    The next release of JSE is due in the May timeframe and will contain some major enhancements, such as the underlying NetBeans 5.0.
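    A hedged sketch of that cleanup (the path is the one from the post above; the target modes 755/644 are a common choice, not an official recommendation):
    find /export/opt/jstudio_ent8 -type d -exec chmod 755 {} \;
    find /export/opt/jstudio_ent8 -type f -exec chmod 644 {} \;
    find /export/opt/jstudio_ent8 -type f \( -name '*.sh' -o -name '*.bin' \) -exec chmod 755 {} \;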

  • File Name error when creating file in Solaris C locale

    Background Info:
    1. In Java, we can use new File(filename) to create a new file. The filename there is a string denoting the name of the new file.
    2. As mentioned in bug 4409965: "The 'C' locale is a 7-bit ASCII locale; 8-bit characters entered are not read in properly, probably because in that locale the input byte stream is expected to contain only 7-bit characters; anything else is 'garbage'."
    My problem is:
    when I tried to construct a filename containing non-ASCII characters in the Solaris C locale, Java works quite differently from C.
    To be specific, the following C code works quite well in the C locale:
    char *temstring = "\xd6\xd0";
    FILE *fp = fopen(temstring, "w+");
    fprintf(fp, "test");
    fclose(fp);
    Although the created file can't be viewed in the C locale, the filename shows up fine in other supported locales, which means the filename doesn't get modified.
    While in Java, the following code (using java.io.File, FileOutputStream, PrintStream and IOException)
    File f = new File("\u00d6\u00d0");   // intended to be the same two non-ASCII bytes as in the C example
    try {
        f.createNewFile();
        PrintStream fos = new PrintStream(new FileOutputStream(f));
        fos.println("test");
        fos.close();
    } catch (IOException e) {
        System.err.println(e.toString());
    }
    will generate the file in the C locale with ???? as a garbage file name. Even in the proper locale, the filename can't be read correctly.
    I suspect the JVM on Solaris (file unixfilesystem.c) has modified the input filename and made it unreadable.
    I just wonder if there is some workaround for this problem, i.e. whether I can generate the filename correctly from Java just as the C code does: create the file with exactly the name I give, instead of modifying the filename.
    Thanks

    Hello:
    Any luck with this problem?
    I am facing the same problem and was wondering if you had found a solution....
    Thanks
    Kiran

  • File system for SAP ECC, EP , BI and CRM installation on Solaris/DB2

    Hello,
    We are going to implement SAP ECC 6.0 with EP 7, BI and CRM on Solaris operating system with IBM DB2 database.
    All these applications are going to be installed on single server, as being a basis person, I know this is not all recommended.
    But due to client's requirement and keeping cost factor in mind I need to install all these application on single box.
    Now here I need your help: as a basis person, I know the required Solaris file systems for SAP ECC 6.0, but I don't have any idea about the other applications like EP, CRM and BI.
    If anyone is able to help me with the required Solaris file systems, it would be a great help.
    Please let me know if there is any query.
    Thanks.

    > All these applications are going to be installed on single server, as being a basis person, I know this is not all recommended.
    > But due to client's requirement and keeping cost factor in mind I need to install all these application on single box.
    Why not use Solaris zones/containers? This decreases the administrative effort tremendously, since you deal with what look like "single machines" but they all run together on one box.
    > Now here I need your help. as I basis person, I know the required Solaris file system for SAP ECC 6.0 but not having any Idea about other application like EP/ CRM and BI .
    So you'd need to read the installation guides.
    Markus
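    A bare-bones sketch of creating one such zone (the zone name, path, interface and address are placeholders; the SAP-specific filesystem and kernel settings come from the notes and installation guides):
    zonecfg -z sapecc 'create; set zonepath=/zones/sapecc; set autoboot=true; add net; set physical=bge0; set address=192.168.1.10; end'
    zoneadm -z sapecc install
    zoneadm -z sapecc boot
    zlogin -C sapecc    # console login for the first-boot system configuration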

  • A few more Solaris questions

    I'm sure this isn't the best place to post this, but I currently have a file server running Solaris 11 with a raid-z2 pool, and I'm building a new VMware server out of some parts I've managed to get some good deals on, might I add.
    I'm building on a Supermicro X8DT3-F board that has an LSI controller on board, with a pair of Xeon 5570s and 48 GB of RAM. The processors and RAM actually came from a Sun blade that was tossed out for recycling; sadly the box had never been opened, but I got these for pennies on the dollar, so I'm happy.
    I'd like to move the current Solaris file server into a VM on the new ESXi 5.1 host and pass through the LSI controller.
    I've been reading, and reading, and reading, and I find the more I read, the more questions I have and the less clear some of the answers are getting.
    First, from what I gather, Solaris 11 was only added to the supported guest list in ESXi with the recent 5.1 version, as I understand it? So all should be fine here? Can someone confirm?
    Second, I've read about issues with LSI controllers under Solaris 11. Is this something that has been addressed in 11.1?
    Third, I'm trying to find the best method to convert the physical system to a virtual one under ESXi.
    The first thing I plan on doing is backing up my data from the pool, though it will be scattered across a few systems. Then I plan on exporting the pool to move the disks physically to the new controller in the VM. The question I have here is: will the share flags and permissions be retained when I import the pool, or do I have to redo all that?
    But then, what's the best method to move it to a VM? The one document I see come up the most is moving a physical Solaris system into a zone on another system. Can it be transferred to the global zone? Could I do a new install in a VM, move the existing install to the global zone, import my pool and call it a day?
    Or do I dd the OS disk, convert the image and drop it into VMware, get the hardware working, and then import the pool?
    Or is there a better way? Has anyone got any online docs in mind that may help specifically with this migration? Everything I'm finding is scattered; maybe I'm not looking for the right things, but I could use some pointers if anyone has suggestions.
    I suppose I should note that I have it integrated with Active Directory; this is why I'm worried about permissions being retained when I import the pool.
    I just want this to go as quickly and smoothly as possible, with as little headache as possible. It's my home setup, so realistically it takes the time it takes, as long as things go smoothly.

    I know this may not be the answer you are looking for, but I think you are making it more difficult than it needs to be.
    One other option is to leave your Solaris storage server on the bare metal of this new beast of a machine you are piecing together. Then use VirtualBox 4.2.6, which is supported quite well on Solaris, to run whatever virtual machines you were intending ESXi to be used for.
    This way you have the fastest possible storage setup without the issues of hardware passthrough, and the fast storage now benefits the VMs running on it. Not to mention the other neat options, such as running lzjb compression for the VMs, either using zvols for the VMs or just VirtualBox vdi files sitting on a compressed ZFS filesystem.
    As far as the LSI 1068E controller goes, their website only shows drivers for Solaris 10. So unless Solaris 11 has the drivers built in, you may not be able to use that controller. Believe me, I feel your pain on this one; I have the d#$$!est time finding good SAS HBAs for Solaris 11.1, and the few I did find had questionable drivers. Areca 1320 cards seem to work well, as do Adaptec 64xx and 68xx raid cards. LSI has a new line that supposedly works with Solaris 11.
    My advice is to try a bare-metal install of Solaris 11.1 on the new machine and see if you can recognize drives on the LSI controller; if not, then use the 6 onboard SATA ports if that is enough for the drives. Otherwise purchase an Areca 1320, which is only about 230 bucks for the 8-port version. http://www.newegg.com/Product/Product.aspx?Item=N82E16816151116R
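    On the pool move itself: the ZFS-level share settings (sharenfs/sharesmb) and the file ACLs are stored in the pool, so they should come across with the import; only host-side pieces such as idmap/AD configuration have to be redone. A rough sketch (pool and dataset names are placeholders):
    zpool export tank                      # on the old box, before pulling the disks
    zpool import tank                      # in the new VM, once it can see the controller
    zfs get sharenfs,sharesmb tank/data    # confirm the share properties survived
    ls -V /tank/data                       # spot-check that the ACLs are intact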

  • File Systems are not shown in CCMS Monitoring

    Hi Gurus,
    We have installed all SAP components in Solaris 10 zones including Solution Manager.
    Solaris file systems are not shown in CCMS monitoring under FileSystem. It looks like SAPOSCOL is not sending the OS data to CCMS. It does not show file systems in OS06 either.
    The problem exists only on systems running in Solaris 10 zones, and NOT on systems running on individual servers [without Solaris zones].
    Thanks,
    Pj

    Hi,
    You need to take some special considerations into account while installing SOLMAN in Solaris 10 zones. Check these notes:
    Note 870652 - Installation of SAP in a Solaris 10 zone
    Note 724713 - parameter settings for Solaris 10
    Hope this will solve your problem.
    --Ragu

  • Solaris Content Crawler

    All,
    Is there any Crawler for crawling files from Solaris File System, similar to Windows Crawler? I know we can develop one, but just wanted to check if there is one available already.
    Thanks,
    Bharat

    Hi Bharat,
    There is no such crawler for Solaris out-of-the-box. You are right, we can write custom crawlers using IDK.
    The workaround would be to share the folders through Samba, map them as a Windows drive, and then run the NT File crawler against the mapped drive.
    I think SES has the ability to index files on *NIX file systems.
    BTW: Sun is introducing ZFS in newer versions of Solaris; take this into account if you decide to do that.

  • Solaris 10 x86: detect new hard drive

    hi.
    I'm on Solaris 10 x86, and I added a hard drive today. I can see it in the BIOS, but nothing appears in my Solaris files: vfstab, mnttab, etc.
    nevertheless, it's in the /var/adm/messages:
    genunix: [ID 846691 kern.info] model IBM-DTLA-305020
    I tried touch /reconfigure; reboot, and also reboot -- -r, but still nothing.
    How could I make solaris detect my new hdd please?

    That's ok now.
    I saw my hdd with the format command. After:
    - partition option in the format command
    - created new fs (with newfs)
    - modified /etc/vfstab
    - mounted /<new fs>
    bye.
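    For the record, the same thing can usually be done without a reboot; a rough outline with placeholder device and mount point names:
    devfsadm -c disk                          # rescan and create device nodes for the new disk
    format                                    # label and partition it
    newfs /dev/rdsk/c1t1d0s0                  # build a UFS filesystem on the chosen slice
    echo '/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /data ufs 2 yes -' >> /etc/vfstab
    mkdir -p /data && mount /data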

  • Solaris 10 Thread/LWP Scheduling and I/O

    Hi,
    I am using Solaris 10 on an N440 (a 4-CPU machine) and Java 1.5.
    In my code there is a server socket listener thread.
    On a new connection, a worker thread is assigned to it.
    This worker thread simply reads data from the connection socket and puts it on a queue. Another thread, known as FileWriterThread, which encapsulates this queue, reads from the queue and writes to its own file. For each connection there is a dedicated file.
    (Connection 1 Thread) ---> puts on queue in (FileWriterThread_1) ---> writes to its own File 1
    (Connection 2 Thread) ---> puts on queue in (FileWriterThread_2) ---> writes to its own File 2
    (Connection 3 Thread) ---> puts on queue in (FileWriterThread_3) ---> writes to its own File 3
    (Connection 4 Thread) ---> puts on queue in (FileWriterThread_4) ---> writes to its own File 4
    There could be at most 8 such connections.
    Now the debate is whether to have only one FileWriterThread for all connection threads/queues. Because writes to the Solaris file system are not interleaved, even if all 4 FileWriterThreads (1,2,3,4) get scheduled, one on each CPU, the file system writes one block at a time, so three threads out of 4 will be preempted and will incur a context switch.
    Whereas if there is only one FileWriterThread, it can fetch data from the queues in a round-robin fashion and will never context-switch until its time slice is over. So having one thread should give better performance than having 4 threads each writing to its own file.
    TIA,
    Indresh MAlik

    Try 'iostat -En' to see if you find anything unusual about the disks. It could be a disk failure.
    -Rai
