Solaris mount point

Can I read a Solaris mount point's size through Oracle?

I believe it can be done. You can create a package that performs OS function calls to achieve this.
hare krishna
Alok
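
As a hedged sketch of what such a package's OS callout might run: an external procedure or Java stored procedure would typically shell out to df and parse the fields. The exact field layout is an assumption here (df output differs slightly between Solaris and other systems), and `/` stands in for the real mount point of interest.

```shell
# Hypothetical command an Oracle OS callout could run to read a mount
# point's size in kilobytes; substitute the mount point you care about.
df -k / | awk 'NR==2 {print "total_kb=" $2, "used_kb=" $3, "avail_kb=" $4}'
```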

Similar Messages

  • 9i mount points Solaris

    I have spent the past two nights working on my first Oracle install, on Solaris 5.9: a simple install, one disk in one Sun Blade 100. This install will be used to learn Oracle hands-on in preparation for Oracle certification. One of my questions involves the mount points. I understand /u01 is a customary Solaris mount point, i.e. a directory. My question is where do I make the directory(s) /u01, /u02, etc.? Absolute pathnames would be most helpful. From my reading on the net it seems I might just need one mount point. Please advise me regarding the mount points. Again, this is a non-production install that will see little use, so performance is not a concern. Thank you in advance for your help, and please keep in mind this is my first install; verbose responses welcome.
    [email protected]
    Here is the df -k output.
    ->df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/dsk/c0t0d0s0 134335 90144 30758 75% /
    /dev/dsk/c0t0d0s6 1052334 728317 260877 74% /usr
    /proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    fd 0 0 0 0% /dev/fd
    /dev/dsk/c0t0d0s3 57063 48815
    (Long postings are being truncated to ~1 kB at this time.)

    You need at least two mount points (ideally four) to do the default install. Under /u01 would be your Oracle home, and the rest would be for the datafiles. Normally these would be in separate slices, and preferably on a different disk from the OS, so if you only have one disk you may need to re-install Solaris to allow room for Oracle.
    valeyard
    OCP-DBA, SCSA
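
    To make the reply above concrete, here is a sketch of creating the conventional OFA-style mount-point directories. The paths are customary rather than mandatory, and on the real system you would run this as root and chown the trees to oracle:oinstall (omitted here); BASE is only a sandbox prefix for trying the commands safely.

```shell
# Create /u01../u04 with the usual app/oracle subtree.
# BASE defaults to a temp dir so this can be exercised anywhere;
# set BASE=/ (as root) for a real layout.
BASE=${BASE:-$(mktemp -d)}
for mp in u01 u02 u03 u04; do
  mkdir -p "$BASE/$mp/app/oracle"
done
ls "$BASE"
```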

  • Creating new mount points on pre installed Solaris 2.7

    Creating new mount points on pre installed Solaris 2.7

    Hi,
    Thanks for suggestion.
    I have checked note 1521371, and as per the note I have to provide the locations given below.
    For PI1 (i.e. PI Development)...............
    Oracle Base Location : /oracle/PI1
    Oracle Home Location : /oracle/PI1/112_64
    Inventory Directory : /oracle/PI1
    For PI2 (i.e. PI Production)...............
    Oracle Base Location : /oracle/PI2
    Oracle Home Location : /oracle/PI2/112_64
    Inventory Directory : /oracle/PI2
    Correct me if I am wrong..
    Could you please tell me, do we need to set the Oracle environment variables, i.e. ORACLE_BASE, ORACLE_HOME and ORACLE_STAGE, before starting the Oracle database installation?
    Kindly suggest,
    Thanks and regards,
    Amit....

  • Does Solaris support non-ASCII mount points?

    Hi,
    Does Solaris 10 support non-ASCII mount points?
    Kind regards,
    Daniel

    # fstab generated by gen_fstab
    #<file system>   <dir>         <type>      <options>    <dump> <pass>
    none            /dev/pts      devpts      defaults        0     0
    none            /dev/shm      tmpfs       defaults        0     0
    UUID=496E-7B5E /media/STORAGE vfat    defaults,user,users,rw,exec,uid=777,gid=777   0       0
    /dev/sr0     /mnt/sr0_cd  auto     user,noauto,exec,unhide 0     0
    # This would do for a floppy
    #/dev/fd0        /mnt/floppy    vfat,ext2 rw,user,noauto    0     0
    #    +   mkdir /mnt/floppy
    # E.g. for USB storage:
    #/dev/sdb1        /mnt/usb      auto      rw,user,noauto   0     0
    #    +   mkdir /mnt/usb

  • Expdp fails to create .dmp files in NFS mount point in solaris 10,Oracle10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    given read,write grants to public as well as specific user

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the details below. I am able to touch files on the mount; while exporting, the log file is also created, but the export fails with the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie and does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions
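
    The SVR4 error 122 ("Operation not supported on transport endpoint") usually means the NFS mount does not support an operation Data Pump needs, often file locking; touch succeeding proves only basic create, not locking. A commonly suggested workaround is sketched below: remount without noac and with the llock option so locking is handled locally. The exact options depend on the NFS server, so treat them as assumptions to verify rather than a definitive fix.

```shell
# Sketch only: remount the export (server path and base options are from
# the original post) with local locking and without noac.
umount /backup_db
mount -F nfs -o hard,rw,rsize=32768,wsize=32768,proto=tcp,vers=3,llock \
    172.20.2.204:/exthdd /backup_db
```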

  • Unable to do expdp on NFS mount point in solaris Oracle db 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd

    Hi Peter,
    Thanks for your reply. Please find the details below. I am able to touch files on the mount; while exporting, the log file is also created, but the export fails with the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>

  • Unable to open and save across mount point

    Why do these forums appear so neglected?
    Here is my ongoing question/problem:
    General input/output error while accessing /home/MyDir/mountedDir/SomeDire/TheFileName
    I experience a "feature" wherein I cannot open and save documents across an NFS mount point from a Linux client.
    This is an error rcv'd on Linux, GNOME 2.8.*, 2.6.10-gentoo-r6
    The mount is accomplished via an entry in the fstab as shown:
    server-hostname:/export/mydir /home/mydir/mountdir nfs tcp,user,rw,rsize=32768 0 0
    server-hostname is a Solaris OS.

    Sounds like you are missing some of the required plugins - possibly an updater failed, or someone moved/deleted the wrong directory.
    Yes, you'll need to reinstall to restore the missing plugins.

  • PI 7.1 Upgrade Mount Point

    Hi.
    I'm upgrading a 7.0 PI system to 7.1, and all was going well until the system asked about the mount points.
    I have tried everything: the CDs, the downloaded product.
    Nothing seems to recognise the 'SAP Kernel DVD Unicode' folders/CDs.
    It's able to find the MID.xml file OK, but doesn't seem to recognise the mount point as a whole.
    Is there anything I can check to make sure that I have the right version?
    Andy

    Olivier,
       The directory structure for SOLARIS Kernel is different:
    510332451 NW 7.1 UC-Kernel 7.10 Solaris on SPARC 64bit Upgrade
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_UV_SOLARIS_SPARC  LABELIDX.ASC
    510332452 NW 7.1 UC-Kernel 7.10 Solaris on SPARC 64bit
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_UI_SOLARIS_SPARC  LABELIDX.ASC
    510332455 NW 7.1 Kernel 7.10 Solaris on SPARC 64bit Upgrade
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_NV_SOLARIS_SPARC  LABELIDX.ASC
    510332458 NW 7.1 Kernel 7.10 Solaris on SPARC 64bit
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_NI_SOLARIS_SPARC  LABELIDX.ASC
    Given the above files, what do you think the main directory /sapcd/XI_UPGRADE_PI71/UCK should look like?
    Thanks for your help.
    Thanks
    S.

  • Messaging Server and Calendar Server Mount points for SAN

    Hi! Jay,
    We are planning to configure "JES 05Q4" Messaging and Calendar Servers on 2 v490 Servers running Solaris 9.0, Sun Cluster, Sun Volume Manager and UFS. The Servers will be connected to the SAN (EMC Symmetrix) for storage.
    I have the following questions:
    1. What are the SAN mount points to be setup for Messaging Server?
    I was planning to have the following on SAN:
    - /opt/SUNWmsgsr
    - /var/opt/SUNWmsgsr
    - Sun Cluster (Global Devices)
    Are there any other mount points that needs to be on SAN for Messaging to be configured on Sun Cluster?
    2. What are the SAN mount points to be setup for Calendar Server?
    I was planning to have the following on SAN:
    - /opt/SUNWics5
    - /var/opt/SUNWics5
    - /etc/opt/SUNWics5
    3. What are the SAN mount points to be setup for Web Server (v 6.0) for Delegated Admin 1.2?
    - /opt/ES60 (Planned location for Web Server)
    Delegated Admin will be installed under /opt/ES60/ida12
    Directory server will be on its own cluster. Are there any other storage needs to be considered?
    Also, Is there a good document that walks through step-by-step on how to install Messaging, Calendar and Web Server on 2 node Sun Cluster.
    The installation document doesn't do a good job, or at least I am seeing a lot of gaps.
    Thanks

    Hi,
    There are basically two choices:
    a) Have local binaries on the cluster nodes (e.g. 2 nodes), which means there will be two sets of binaries, one on each node in your case.
    When you configure the software, you will have to point the data directory to a cluster filesystem, which need not necessarily be global, but it must be mountable on both nodes.
    The advantage of this method is that during patching and similar system maintenance activities the downtime is minimal.
    The disadvantage is that you have to maintain two sets of binaries, i.e. patch twice.
    The suggested filesystems could be, e.g.:
    /opt for local binaries
    /SJE/SUNWmsgr for data (used during the configure option)
    This will mean installing the binaries twice.
    b) Have a single copy of the binaries on a clustered filesystem.
    This was the norm in the iMS 5.2 era, and Sun would recommend it, though I have seen type a) for iMS 5.2 as well.
    This means there should be no configuration files on a local filesystem; everything related to iPlanet goes on the clustered filesystem.
    I have not come across type b) post SUN ONE, i.e. 6.x; it seems 6.x has to keep some files on the local filesystem anyway, so b) is either not possible or needs some special configuration.
    So maybe you should try a).
    The sequence would be, after the cluster framework is ready:
    1) Install the binaries on both sides
    2) Install the agent on one side
    3) Switch the filesystem resource to any node
    4) Configure the software with the clustered FS
    5) Switch the filesystem resource to the other node and use the config of the first node.
    Cheers--

  • Nfs mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem, and they appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group along with the oracle user, because it is the oracle user that writes to iFS, not the user creating the files.
    I'm trying to create several files in a single directory via java.io.File, but only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mount point works OK, i.e. the JUnit tests pass against the native filesystem but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK, so why not via a Unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was to not create the file via java.io.File.createNewFile() before adding content via an output stream; if the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS (at the operating-system level, rather than adding a folder-path relationship via the Java API)?
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the line below causes: java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }
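
    On the follow-up about 'ln -fs' and 'ln -f': whether links work depends on the iFS NFS server, but it is easy to probe from the shell. A small sketch follows, shown against a temp directory; on the real system you would cd into /unixfolder from the post instead.

```shell
# Probe symlink and hard-link creation on a filesystem. If either ln
# fails, that link type is not supported on the mount being tested.
d=$(mktemp -d)
cd "$d"
touch target.txt
ln -s target.txt soft.txt && echo "symlink ok"
ln target.txt hard.txt && echo "hardlink ok"
```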

  • ZFS mount point - problem

    Hi,
    We are using ZFS to take snapshots on our Solaris 10 servers, and I have a problem when using the ZFS mount options.
    The Solaris server we use is:
    SunOS emch-mp89-sunfire 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V440
    System = SunOS
    Steps:
    1. I created a ZFS pool named lmspool.
    2. Then I created the filesystem lmsfs.
    3. Now I want to set the mountpoint for this ZFS filesystem (lmsfs) to the /opt/database directory (which contains some .sh files).
    4. Then I need to take a snapshot of the lmsfs filesystem.
    5. To set the mountpoint, I tried two ways:
    1. zfs set mountpoint=/opt/database lmspool/lmsfs
    This returns the message "cannot mount '/opt/database/': directory is not empty;
    property may be set but unable to remount filesystem".
    If I run the same command a second time, the mountpoint is set properly, and I then took a snapshot of the ZFS filesystem (lmsfs). After making some modifications in the database directory (deleting some files), I rolled back the snapshot, but the original database directory was not recovered. :-(
    2. As a second way, I used the "legacy" option for mounting:
    # zfs set mountpoint=legacy lmspool/lmsfs
    # mount -F zfs lmspool/lmsfs /opt/database
    After running this command, I cannot see the files of the database directory inside /opt, so I cannot modify anything inside the /opt/database directory.
    Could someone please suggest a solution to this problem, or another way to take a ZFS snapshot of a mount point whose data originally lives on a UFS filesystem?
    Thanks,
    Muthukrishnan G

    You'll have to explain the problem clearer. What exactly is the problem? What is "the original database directory"? The thing with the .sh files? Why are you trying to mount onto something with files in it in the first place?
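
    For what it's worth, the usual pattern (a sketch, using the pool and filesystem names from the post) is to mount the new filesystem on an empty directory, copy the data into it, and only then snapshot. A rollback can only restore data that actually lives in the ZFS filesystem, which is why rolling back never recovered files that stayed in the original UFS /opt/database. /opt/database.orig below is a hypothetical name for the moved-aside UFS copy.

```shell
# Sketch: stage the data INTO the ZFS filesystem before snapshotting.
mv /opt/database /opt/database.orig
zfs set mountpoint=/opt/database lmspool/lmsfs  # mounts cleanly: dir now empty
cp -rp /opt/database.orig/. /opt/database/      # data now lives in ZFS
zfs snapshot lmspool/lmsfs@baseline
# ...make changes, then rollback restores everything under /opt/database:
zfs rollback lmspool/lmsfs@baseline
```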

  • Checking the space for /archlog mount point script

    I have the shell script below, which checks the /archlog mount point space on the cappire (Solaris 10) server. When space usage is above 80% it should e-mail. When I tested this script it worked as expected.
    #!/usr/bin/ksh
    export MAIL_LIST="[email protected]"
    export ARCH_STATUS=`df -k /archlog | awk '{ print $5 }' | grep -v Use%`
    echo $ARCH_STATUS
    if [[ $ARCH_STATUS > 80% ]]
    then echo "archive destination is $ARCH_STATUS full please contact DBA"
    echo "archive destination /archlog is $ARCH_STATUS full on Cappire." | mailx -s "archive destination on cappire is $ARCH_STATUS full" $MAIL_LIST
    else
    exit 1
    fi
    exit
    When I scheduled it as a cron job it gives a different result. Right now /archlog is at 6%, so it should exit without e-mailing anything. But I am getting the e-mail below from the cappire server, which is strange.
    subject:archive destination on cappire is capacity
    below is the e-mail content.
    6% full
    Content-Length: 62
    archive destination /archlog is capacity 6% full on Cappire.
    Please help me resolve this issue: why am I getting the above e-mail? With this logic I should not get any e-mail at all.
    Is there an issue with cron? Please let me know.

    user01 wrote:
    I have the below shell script which is checking /archlog mount point space on cappire(solaris 10) server. When the space usage is above 80% it should e-mail. When i tested this script it is working as expected.
    #!/usr/bin/ksh
    export MAIL_LIST="[email protected]"
    export ARCH_STATUS=`df -k /archlog | awk '{ print $5 }' | grep -v Use%`
    echo $ARCH_STATUS
    if [[ $ARCH_STATUS > 80% ]]
    then echo "archive destination is $ARCH_STATUS full please contact DBA"
    echo "archive destination /archlog is $ARCH_STATUS full on Cappire." | mailx -s "archive destination on cappire is $ARCH_STATUS full" $MAIL_LIST
    else
    exit 1
    fi
    exit
    When i scheduled a cron job it is giving different result. Right now /archlog is 6%, it should exit without e-mailing anything. But, i am getting the below e-mail from cappire server which is strange.
    subject:archive destination on cappire is capacity
    below is the e-mail content.
    6% full
    Content-Length: 62
    archive destination /archlog is capacity 6% full on Cappire.
    Please help me in resolving this issue - why i am getting the above e-mail, i should not get any e-mail with the logic.
    Is there any issue with the cron. Please let me know.
    Not a problem with cron, but possibly an issue with the fact that you are doing a string comparison on something that you are thinking of as a number.
    Also, when I'm piping a bunch of stuff together and get unexpected results, I find it useful to break it down at a command line to confirm that each step is returning what I expect.
    df -k /archlog
    df -k /archlog | awk '{ print $5 }'
    df -k /archlog | awk '{ print $5 }' | grep -v Use%
    A common mistake is to forget that jobs submitted from cron don't source the owning user's .profile. You need to make sure the script takes care of setting its environment, but that doesn't look to be the issue for this particular problem.
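
    Building on that reply, here is a sketch of the numeric comparison. The `[[ $ARCH_STATUS > 80% ]]` test in the original compares lexically, so "6%" can sort above "80%"; stripping the '%' and using -gt fixes it. /archlog and the 80 threshold come from the post; the fallback to / and the plain echo instead of mailx are only so the parsing can be exercised on any machine.

```shell
# Take the Use% field from the last line of df output, strip '%', and
# compare it as an integer. The real script would pipe the message to
# mailx -s "..." "$MAIL_LIST" instead of echoing it.
ARCH_DIR=/archlog
[ -d "$ARCH_DIR" ] || ARCH_DIR=/
PCT=$(df -k "$ARCH_DIR" | awk 'NR>1 {v=$5} END {sub(/%/, "", v); print v}')
if [ "${PCT:-0}" -gt 80 ]; then
  echo "archive destination $ARCH_DIR is ${PCT}% full on Cappire."
fi
```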

  • Recover /var mount-point into new boot environment

    I installed Solaris 10 u7 with a ZFS pool containing the / mount point (root).
    One issue to mention: for no compulsory reason, I asked the Solaris installer to place the /var mount point on a different filesystem (an option of the installer which I did not fully understand).
    I used Live Upgrade to migrate Solaris 10 u7 to u8. First:
    lucreate -c CBE -n NBE
    I assume the NBE boot environment was created in the ZFS pool (rpool).
    I did not take care of /var at this time.
    I ran luupgrade to apply the new Update 8 to NBE, the target boot environment.
    Then I ran luactivate NBE and "init 6" to reboot.
    At boot time Solaris does start NBE (Solaris 10 u8), but it cannot mount /var and hence forces the console into maintenance mode.
    What is the strategy for using the /var filesystem from CBE (the old boot environment)?
    Can I mount the old CBE's /var filesystem?
    Or should I mount a spare filesystem as NBE's /var and somehow copy it over?
    Please list a series of steps (commands), thanks.

    I reinstalled Solaris 10 Update 7, this time with / (root) and /var on the same dataset (filesystem).
    This time lucreate, "luupgrade to Update 8", and luactivate ended OK
    (no problems mounting /var after the "init 6" reboot).

  • Custom mount point in fstab

    Hello,
    I am setting up a NAS. I have configured a partition scheme that seems suitable for me. I have a problem though: I cannot seem to mount a partition to a custom mount point.
    The system is up to date, I just installed it last night. Here is my fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    tmpfs /tmp tmpfs nodev,nosuid 0 0
    UUID=4e373785-4fc9-4b97-aadc-adcbc523bbac /boot ext2 defaults 0 1
    UUID=50c83b59-b938-4945-9578-39ab7c40d93b / ext4 defaults 0 1
    UUID=8e87d854-a344-4a3c-950f-9831c0811ea2 /data ext4 defaults 0 1
    UUID=972cce0c-3c35-4d8a-a0d6-719794bc2766 /tm ext4 defaults 0 1
    UUID=d6f940d8-ce03-45c3-bfa9-b7dd83cebbb9 /var reiserfs defaults 0 1
    UUID=f01dc9e8-a182-4b89-8043-f6f73690ad6c swap swap defaults 0 0
    I want /tm for Time Machine and /data for Samba sharing. If I change /data to /home and /tm to /tmp, everything mounts fine. Is there some reason why I shouldn't be able to mount a partition at a custom mount point? I did this two weeks ago on another box that I set up as a test...
    Also, if I run fdisk -l, it tells me that some of my partitions do not start on physical sector boundaries. Is that cause for alarm? I created the partitions with cfdisk and nothing seemed odd when I was doing it...??
    [root@nas ~]# fdisk -l
    Disk /dev/sda: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sda1 * 63 192779 96358+ 83 Linux
    Partition 1 does not start on physical sector boundary.
    /dev/sda2 192780 1465149167 732478194 5 Extended
    Partition 2 does not start on physical sector boundary.
    /dev/sda5 192843 5076539 2441848+ 83 Linux
    Partition 5 does not start on physical sector boundary.
    /dev/sda6 5076603 6056504 489951 82 Linux swap / Solaris
    Partition 6 does not start on physical sector boundary.
    /dev/sda7 6056568 15824024 4883728+ 83 Linux
    /dev/sda8 15824088 1090042379 537109146 83 Linux
    /dev/sda9 1090042443 1465149167 187553362+ 83 Linux
    Partition 9 does not start on physical sector boundary.
    Any insight?

    bnb2235 wrote:
    Also, if I run fdisk -l, it tells me that some of my partitions do not start on physical sector boundaries. Is that cause for alarm? I created the partitions with cfdisk and nothing seemed odd when I was doing it...??
    [root@nas ~]# fdisk -l
    Disk /dev/sda: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sda1 * 63 192779 96358+ 83 Linux
    Partition 1 does not start on physical sector boundary.
    /dev/sda2 192780 1465149167 732478194 5 Extended
    Partition 2 does not start on physical sector boundary.
    /dev/sda5 192843 5076539 2441848+ 83 Linux
    Partition 5 does not start on physical sector boundary.
    /dev/sda6 5076603 6056504 489951 82 Linux swap / Solaris
    Partition 6 does not start on physical sector boundary.
    /dev/sda7 6056568 15824024 4883728+ 83 Linux
    /dev/sda8 15824088 1090042379 537109146 83 Linux
    /dev/sda9 1090042443 1465149167 187553362+ 83 Linux
    Partition 9 does not start on physical sector boundary.
    AFAIK, if you don't align them it will cause performance issues. Check this site: http://johannes-bauer.com/linux/wdc/?menuid=3
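
    On the mounting question itself, a quick way to validate new fstab entries without rebooting is sketched below (run as root; note the mount-point directories must exist before mount -a can use them, which is a common reason custom mount points "don't work"):

```shell
# Sketch: create the mount points for the two new entries, mount
# everything fstab lists that isn't already mounted, then verify.
mkdir -p /data /tm
mount -a
mount | grep -E ' /(data|tm) '
```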

  • Mount Point layout

    Hi all.
    I am about to install Ops Center on a server. Based on the requirements, I know that
    swap = 6GB
    /var = 100 GB ( Local database )
    Other than these, what would be the best mount point sizes for /opt, /tmp, etc.?
    I have 4 hard disks of 74 GB each, around 250 GB in total!
    Regards

    In the Oracle® Enterprise Manager Ops Center Installation Guide for Oracle Solaris Operating System you can read:
    Alternate Boot Environment (ABE) is not supported
    for Oracle Solaris 10 zones or for the Enterprise Controller or Proxy Controller systems.
    Reading this really makes me wonder what they are saying.
    "or for the Enterprise Controller". What is it that is not supported,
    running the enterprise controller in an active boot environment on a Solaris 10 host?
    And how about Solaris 11? As usual the OpsCenter documentation is all but clear ...
    Will post up whatever info I get.
    Please do, I look forward to reading what they say.
