File System on PI 7.0

Dears,
We are going to implement PI 7.0 on an AIX 6.1 machine.
For its installation I need to provide details of the file systems that need to be created.
I checked the installation guide, but the details are given only in very brief form.
The installation guide says to create /sapmnt, /usr/sap, /oracle.
So please confirm whether any others are required.
Shivam
Edited by: Shivam Mittal on Aug 12, 2009 2:39 PM

Hi Mittal,
You don't need to create these file systems manually; SAP will create them automatically. First copy the software to one location, then install the J2SDK which you get on the DVD, and then add the environment variables. Then you need to install the database. Go to the Oracle folder path where you copied the software and run sapserver.cmd. It then opens a command prompt and asks for the SID of the database. You can give any three-character SID, and then your database will be installed. After this you can install SAP. I am sorry, I am going a bit off track from your question.
The answer would be no, because SAP takes care of it automatically. The steps I have given are for PI 7.0 on Oracle and on the Windows OS. It should be similar for AIX as well, Mittal.
Regards,
---Satish
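
As a side note on the original question: if you do create the mount points from the installation guide up front on AIX, a minimal sketch could look like the one below. The JFS2 type, the rootvg volume group and the sizes are purely illustrative assumptions; take the real sizes from the PI 7.0 installation and sizing guides.

    # Minimal sketch, assuming JFS2 file systems in rootvg; sizes are illustrative only.
    crfs -v jfs2 -g rootvg -m /sapmnt  -a size=5G  -A yes
    crfs -v jfs2 -g rootvg -m /usr/sap -a size=10G -A yes
    crfs -v jfs2 -g rootvg -m /oracle  -a size=40G -A yes
    mount /sapmnt; mount /usr/sap; mount /oracle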

Similar Messages

  • How to fix file system error 56635 in windows 8.1

    Hello, please help me to fix this problem. I can't install anything on my laptop.
    File system error 56635 shows up when I install my Kaspersky 2015.

    Hello Jay,
    The current forum is for developers. I'd suggest asking non-programming questions on the
    Office 2013 and Office 365 ProPlus - IT Pro General Discussions  forum instead.

  • ISE 1.2 VM file system recommendation

    Hi,
    According to http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html#wp1056074, table 4-1 discusses the storage requirements for ISE 1.2 in a VM environment. It recommends VMFS.
    What are the implications of using NFS instead? Is this just a recommendation or an actual requirement?
    At the moment, we use a NetApp array which serves NFS for all vApps. It will be difficult to justify the creation of an additional FC HBA just for this one vApp. Please explain.
    TIA,
    Byung

    If you refer to
    http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html
    It says :
    Storage
    •File System—VMFS
    We recommend that you use VMFS for storage. Other storage protocols are not tested and might result in some file system errors.
    •Internal Storage—SCSI/SAS
    •External Storage—iSCSI/SAN
    We do not recommend the use of NFS storage.

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    An SAP Going Live Verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them:
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block:
    File name                                      Blocks read   Avg. read time (ms)   Total read time per datafile (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10   67534         23                    1553282
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually we have BW loading that generates a "Checkpoint not complete" message every night.
    I've read in SAP Note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of SAP Note 322896.
    23 ms does seem a little high to me - for example, we see roughly 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile header
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
    But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is met (for example FAST_START_MTTR_TARGET).
    Regards
    Stefan
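
    As a footnote to Stefan's explanation: a quick way to see how often log switches (and therefore checkpoints) actually happen, and how large the current online redo logs are, is to query V$LOG_HISTORY and V$LOG. Below is a minimal sketch, assuming sqlplus is on the PATH and "/ as sysdba" OS authentication works on the database host; the per-hour grouping is just illustrative.
      # Minimal sketch: log switches per hour, current redo log sizes, and the MTTR target.
      # Many switches per hour usually means the online redo logs are too small for the workload.
      echo "select to_char(first_time,'YYYY-MM-DD HH24') as hour, count(*) as switches from v\$log_history group by to_char(first_time,'YYYY-MM-DD HH24') order by 1;" | sqlplus -s "/ as sysdba"
      # Current online redo log sizes (bytes) and status:
      echo "select group#, bytes, status from v\$log order by group#;" | sqlplus -s "/ as sysdba"
      # Recovery-time target in seconds (0 means not set):
      echo "show parameter fast_start_mttr_target" | sqlplus -s "/ as sysdba"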

  • Error message "Live file system repair is not supported."

    System won't boot. Directed to Disk Utility to repair but get error message "Live file system repair is not supported."  Appreciate all help.
    Thanks.
    John

    I recently ran into a similar issue with my Time Machine backup disk. After about 6 days of no backups - I had swapped the disk for my photo library for a media project; I reattached the Time Machine disk and attempted a backup.
    Time Machine could not backup to the disk. Running Disk Utility and attempting to Repair the disk ended up returning the "Live file system repair is not supported" message.
    After much experimentation with disk analysis software, I came to the realization that the issue might be that the USB disk dock wasn't connected directly to the MacBook Pro - it was daisy-chained through a USB hub.
    Connecting the USB disk dock directly to the MBP and running Disk Utility appears to have resolved the issue. DU ran for about 6 hours and successfully repaired the disk. Consequently, I have been able to use that Time Machine disk for subsequent backups.

  • How to get access to the local file system when running with Web Start

    I'm trying to create a JavaFX app that reads and writes image files to the local file system. Unfortunately, when I run it using the JNLP file that NetBeans generates, I get access permission errors when I try to create an Image object from a .png file.
    Is there any way to make this work in Netbeans? I assume I need to sign the jar or something? I tried turning "Enable Web Start" on in the application settings, and "self-sign by generated key", but that made it so the app wouldn't launch at all using the JNLP file.

    Same as usual with any other Web Start app: sign the app or modify the policies of the local JRE. Better to sign the app with a temp certificate.
    As for the 2nd error (signed app does not launch), I have no idea, as I haven't tried using JWS with FX 2.0 yet. Try activating the console and logging in Java's control panel options (on W7, JWS logs are in c:\users\<userid>\appdata\LocalLow\Sun\Java\Deployment\log) and see if anything appears there.
    Anyway, JWS errors are notoriously not easy to figure out and the whole technology in itself is temperamental. Find the tool named JaNeLA on the web; it will help you analyze syntax errors in your JNLP (though it is not aware of the new syntax introduced for FX 2.0 and may produce lots of errors on those), and head to the JWS forum (Java Web Start & JNLP); Andrew Thompson, who dwells over there, is the author of JaNeLA.
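
    In case it helps, the manual self-signing route outside NetBeans is just the usual keytool/jarsigner pair. A minimal sketch follows; the keystore name, alias, passwords and jar name are placeholder assumptions, and the certificate is only good for testing.
      # Minimal sketch: create a temporary self-signed certificate and sign the application jar.
      keytool -genkeypair -keystore myfx.jks -alias myfxapp \
              -dname "CN=Test, OU=Dev, O=Example" -keyalg RSA -validity 365 \
              -storepass changeit -keypass changeit
      jarsigner -keystore myfx.jks -storepass changeit MyFXApp.jar myfxapp
      # The JNLP must also request <security><all-permissions/></security>,
      # otherwise the signed app still runs in the sandbox without file system access.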

  • Zone install file system failed?

    On the global zone, my /opt file system is like this:
    /dev/dsk/c1t1d0s3 70547482 28931156 40910852 42% /opt
    I am trying to add it to NMSZone1 with this config:
    fs:
    dir: /opt
    special: /dev/dsk/c1t1d0s3
    raw: /dev/rdsk/c1t1d0s3
    type: ufs
    options: [nodevices,logging]
    But failed like this:
    bash-2.05b# zoneadm -z NMSZone1 boot
    zoneadm: zone 'NMSZone1': fsck of '/dev/rdsk/c1t1d0s3' failed with exit status 33; run fsck manually
    zoneadm: zone 'NMSZone1': unable to get zoneid: Invalid argument
    zoneadm: zone 'NMSZone1': unable to destroy zone
    zoneadm: zone 'NMSZone1': call to zoneadmd failed
    Please help me. Thanks.

    It appears that the c1t1d0s3 device is already in use as /opt in the global zone. Is that indeed the case? If so, you need to unmount it from there (and remove or comment out its entry in the global zone's /etc/vfstab file) and then try booting the zone again.
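
    For completeness, a minimal sketch of that sequence on the global zone (device, mount point and zone name taken from the post above):
      # On the global zone: free the device so the non-global zone can fsck and mount it.
      umount /opt                        # unmount it from the global zone
      vi /etc/vfstab                     # comment out (or remove) the c1t1d0s3 /opt entry
      fsck -F ufs /dev/rdsk/c1t1d0s3     # optional: clean the file system first
      zoneadm -z NMSZone1 boot           # then boot the zone again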

  • File system verification failed when making a partition

    I can't make a partition; file system verification failed.

    Hello,
    In Disk Utility, try Repair Disk first; you need another boot drive or the Install Disc if you are trying this on your boot drive.
    "Try Disk Utility
    1. Insert the Mac OS X Install disc, then restart the computer while holding the C key.
    2. When your computer finishes starting up from the disc, choose Disk Utility from the Installer menu at top of the screen. (In Mac OS X 10.4 or later, you must select your language first.)
    *Important: Do not click Continue in the first screen of the Installer. If you do, you must restart from the disc again to access Disk Utility.*
    3. Click the First Aid tab.
    4. Select your Mac OS X volume.
    5. Click Repair Disk, (not Repair Permissions). Disk Utility checks and repairs the disk."
    http://docs.info.apple.com/article.html?artnum=106214
    Then try a Safe Boot, (holding Shift key down at bootup), run Disk Utility in Applications>Utilities, then highlight your drive, click on Repair Permissions, reboot when it completes.
    (Safe Boot may stay on the gray screen for a long time; let it go, it's trying to repair the hard drive.)
    If perchance you can't find your install Disc, at least try it from the Safe Boot part onward.
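
    If the Install Disc really can't be found, another route that works on that generation of Mac OS X is single-user mode, which repairs the startup volume with the command-line fsck instead of Disk Utility; a minimal sketch:
      # Boot while holding Cmd-S to reach single-user mode, then at the prompt:
      /sbin/fsck -fy     # repair the startup volume; repeat until it reports no problems found
      reboot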

  • How do I use a NAS file system attached to my router to store iTunes purchases?

    We have four Windows devices networked in our house. They all run iTunes with the same Apple ID, so when any one of them has iTunes running, we can see that computer on our Apple TV. Two run Windows XP, one runs Vista Business, and the newest runs Windows 7. Upgrading all to the same Windows software is out of the question. Our NAS is hung on an "off warranty" Linksys E3000 router which communicates via USB with a 1 TB NTFS-formatted Western Digital hard drive. Our plan when we set up this little network, not cheaply, in our 1939 balloon-construction tank of a house, was to build our library of digital images, music, and video primarily on that device. It supports a variant of Windows streaming support, but poorly. The best streaming support for our environment seems to be our Ethernet-connected Apple TV, which is hung off the big TV in our family room and has access to the surround-sound "home theater" in that room. We've incrementally built up collections of photos, digitized music, and most recently educational materials, podcasts, etc. which threaten storage limits on a couple of default C: drives on these Windows systems. iTunes, before the latest upgrades, gave us some feedback on the file system connections it established without going to the properties of the individual files, but the latest one has yet to be figured out. MP3s and photos behave fairly well, including the recently available connection between Adobe software and iTunes which magically appeared, allowing JPEG files indexed by Adobe Photoshop Elements running on the two fastest computers to show up under control on the Apple TV attached 56" Samsung screen!
    Problems arose when we started trying to set things up so that purchases downloaded from the iTunes Store ended up directly on the NAS, and when things downloaded to a specific iTunes library on one of the Windows boxes caused storage "issues" on that box. A bigger problem looming in the immediate future is the "housecleaning" effort which is part of my set of "new year's resolutions." How do I get control of all of my collections and merge them on the NAS without duplicate files, or, when for example we have .AAC and .MP3 versions, with only the required "best" option for the specific piece of music becoming a candidate for streaming?
    I envision this consolidation effort as a "once in a lifetime" effort. I'm 70, my wife is 68 and not as "technical" as I am, so documented procedures will be required.
    I plan to keep this thread updated with progress and questions as this project proceeds. Links to well-documented "how to" experiences, etc. may be appreciated by those who follow it. I plan to post progress reports and detailed issues going forward. Please help?

    Step 1 - by trial and error...
    So far, I have been able to create physical files containing MP3 and JPG on the NAS using the Windows XP systems to copy from shared locations on the Vista and Win7 boxes. This process has been aided by the use of a 600 GB SATA 2 capable hard drive enclosure. I first attach it to Win 7 or Win Vista and reboot to see the local drive spaces formatted on the portable device. Then I copy files from the user's private directories to the public drive space. When the portable drive is wired to an XP box, I can use Windows to move the files from the portable device to the NAS without any of the more advanced file attributes being copied to the NAS. Once the files are on the NAS, I can add the new folder(s) to iTunes on any of the computers and voila, the data becomes shareable via iTunes. So far, this works for anything that I have completely purchased, or for MP3s I made from the AAC files created when I purchased albums via iTunes.
    I have three huge boxes full of vinyl records I've accumulated. The ones that I've successfully digitized, via a turntable attached to the sound card on one of my computers and third-party software, have found their way to the NAS after being imported into iTunes and using it to bring down available album artwork. In general I've been reasonably well pleased with the sound quality of digital MP3 files created this way, but the software I've been using sometimes has serious problems automatically separating individual songs from the album tracks, and re-converting "one at a time" isn't very efficient.

  • Solaris 10  - After installation read only file system

    Dear All,
    I have installed Solaris 10 on my x86 system without any problem. The installation completed successfully. I followed the installation instructions in the following link.
    http://docs.sun.com/app/docs/doc/817-0544/6mgbagb19?a=view
    I am facing a different problem, though. I am not able to create even a single file or sub-directory in any of the existing directories. It always says "READ ONLY file system", cannot create file / directory.
    Please help me how to resolve this issue.
    Thanks in advance.
    Regards,
    Srinivas G

    What do you get for 'svcs -xv' output?
    Darren

  • Crystal Reports XI String [255] limit with the File System Data driver...

    I was trying to create a Crystal Reports XI report to return the security permissions of files and folders. I have been able to successfully connect and return data using the File System Data driver as the Data Source; however, the String limit on the ACL NT Security field is 255 characters. The full string of data to be returned can be much longer than the 255 limit, and I cannot find how to manipulate that parameter.
    I am currently on Crystal XI and Crystal XI R2 and have applied the latest service packs, but I still see the issue. My Crystal Reports Database DLL for File System data (crdb_FileSystem.dll) is at Product Version 11.5.10.1263.
    Is it possible to change string limits when using the File System Data driver as the Data Source? If so, how can that be accomplished? If not, is there another method to retrieve the information with the Windows file system data being the Data Source? Meaning, could I reach my end-game objective of reporting on the Windows ACLs with Crystal through another method?

    Hello,
    This is a known issue. In early versions you could not create folder structures longer than 255 characters. With the updates to the various OSes this is now possible, but CR did not allocate the corresponding space.
    It's been tracked as an enhancement - ADAPT01174519 - but is set for a future release.
    There are likely other ways of getting the info and then putting it into an Excel file format and using that as the data source.
    I did a Google search and found this option: http://www.tomshardware.com/forum/16772-45-display-explorer-folders-tree-structure-export-excel
    There are tools out there to do this kind of thing....
    Thank you
    Don
    Note: the msls.exe referenced there appears to be a trojan (http://www.greatis.com/appdata/d/m/msls.exe.htm), so don't install it.
    Edited by: Don Williams on Mar 19, 2010 8:45 AM

  • Print dialog options in case sensitive file system

    Since changing the file system under Lion and Mountain Lion from Mac OS Extended (Journaled) to Mac OS Extended (Case-sensitive, Journaled), certain features in print dialogs have disappeared.
    In particular, the option to print notes with slides in Microsoft PowerPoint is gone. Also, when choosing to print only 1 (or more, but not all) of multiple pages in Microsoft Word, the printer will nevertheless print all pages.
    This problem occurs on printers of different brands, i.e. HP, Lexmark, Brother.
    I was able to determine this problem by reproducing the issue on a cleanly installed Macbook Pro with OS X 10.8 formatted as Mac OS Extended (Case-sensitive, Journaled) vs a cleanly installed Macbook Pro with OS X 10.8 formatted as Mac OS Extended (Journaled), not Case-sensitive.
    Has anyone else had the same problem and maybe a solution?

    I just fixed this on my Mac. It is a bug in Microsoft Office... the Printer Dialog Extension (PDE) for PowerPoint is located in a directory named "Plugins", but PowerPoint is looking for it in "PlugIns". This obviously does not work on a case-sensitive file system.
    Here are the steps to fix the issue:
    http://apple.stackexchange.com/a/119974/69562
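
    In case that link ever goes away: the gist of the fix is simply making the alternate-case name resolve. A minimal sketch, where the PowerPoint.app path is a placeholder assumption that you should adjust to your own Office install:
      # The PDE folder is named "Plugins" but PowerPoint asks for "PlugIns" on a case-sensitive volume.
      PPT_APP="/Applications/Microsoft Office 2011/Microsoft PowerPoint.app"   # placeholder path
      cd "$PPT_APP/Contents"
      sudo ln -s Plugins PlugIns    # make the alternate-case name resolve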

  • File systems available on Windows Server 2012 R2?

    What are the supported file systems in Windows Server 2012 R2? I mean the complete list. I know you can create, read, and write on FAT32, NTFS, and ReFS. What about non-Microsoft file systems, like EXT4 or HFS+? If I create a VM with a Linux OS, will I be able to access the virtual hard disk natively from WS 2012 R2, or will I need a third-party tool, like the one from Paragon? If I have a drive formatted in EXT4 or HFS+, will I be able to access it from Windows without any third-party tool? By access it, I mean both read and write. I know that on the client OS, Windows 8.1, this is not possible natively, which is why I am asking here; I guess it is quite possible that the server OS has built-in support for accessing those file systems. If Hyper-V has been optimised to run not just Windows VMs but also Linux VMs, it would make sense to me for file systems like those from Linux or OS X to be available through a built-in feature. I have tried to mount the VHD from a Linux VM I created in Hyper-V; Windows Explorer could not read the hard drive.

    I installed Paragon ExtFS free. With it loaded, I tried to mount an ext4-formatted VHD (created on a Linux Hyper-V VM) in Windows Explorer; it failed, and Paragon ExtFS crashed. I uninstalled Paragon ExtFS. The free version is not supported on WS 2012 R2 by Paragon; if Windows has no built-in support for ext4, this means this free software has not messed anything up in the OS, I guess.
    Don't mess with third-party kernel-mode file systems, as it's basically begging for trouble: a crash inside them will BSOD the whole system, and third-party file systems are typically buggy... because a) FS development for Windows is VERY complex and b) there are very few external adopters, so not that many people actually test them. What you can do, however:
    1) Spawn an OS with a supported FS inside a VM and configure loopback connectivity (even over SMB) with your host. So you'll read and write your volume inside the VM and copy content to / from the host.
    (I personally use this approach in the reversed direction: my primary OS is Mac OS X, but I read/write NTFS-formatted disks from inside a Windows 7 VM I run on VMware Fusion.)
    2) Use a user-mode file system explorer (see the sample links below; I'm NOT affiliated with those companies). So you'll copy content from the volume as if it were some sort of shell extension.
    Crashes in 1) and 2) would not affect your overall OS stability.
    HFS Explorer for Windows
    http://www.heise.de/download/hfsexplorer.html
    Ext2Read
    http://sourceforge.net/projects/ext2read/
    (both are user-land applications for HFS(+) and EXT2/3/4 accordingly)
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
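
    As an illustration of option 1, here is a minimal sketch of the guest side on a Linux VM; the device name, mount point and share name are assumptions, and Samba configuration details vary by distribution:
      # Inside the Linux VM: mount the ext4 disk and export it over SMB to the Windows host.
      sudo mkdir -p /mnt/extdata
      sudo mount -t ext4 /dev/sdb1 /mnt/extdata      # /dev/sdb1 is a placeholder device name
      # Minimal share section to append to /etc/samba/smb.conf (then restart smbd):
      #   [extdata]
      #   path = /mnt/extdata
      #   read only = no
      # On the Windows Server host, map it as usual:  net use X: \\<vm-ip>\extdata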

  • Store \ Retrieve files from file system

    Hi to all!
    I would like to implement a solution for storing files uploaded via the APEX user interface on the server's file system. I would also like these files to be retrievable by APEX users. I designed the following solution:
    For upload:
    1. Through file browse item user chooses file to be uploaded
    2. File goes to custom table (as BLOB)
    -- so far i would use apex Upload\Download files tutorial
    3. File(BLOB) would then have to be written to file system to some directory and file id would have to be written to some db table which holds pointers to files on file system
    4. delete file(blob) from custom table (from step 2)
    For download:
    1. user chooses link from some report region(based on table giving file pointers to files residing on file system)
    2. file identified with chosen file pointer is then inserted into blob column of some custom table in db
    3. from the custom table, with a download procedure, the file is finally presented to the user
    4. delete file(blob) from custom table (from step 2)
    Using the APEX tutorial for Upload\Download files, it is straightforward to get the files from a db table or into a db table using BLOBs. But I have not seen any example of using BFILEs or migrating files from the db to the file system and vice versa.
    So some Q arise:
    a) How can I implement step 3 under For upload section above
    b) How can I implement step 2 under For download section above
    c) Is there any way to directly upload file to file system via apex user interface or to directly download file from file system via some report region link column?
    Please help!!!
    Regards Marinero
    Message was edited by:
    marinero

    marinero,
    Here is a procedure that will copy an uploaded file to the file system:
      Procedure BLOB_TO_FILE(p_file_name In Varchar2) Is
        l_out_file    UTL_FILE.file_type;
        l_buffer      Raw(32767);
        l_amount      Binary_Integer := 32767;
        l_pos         Integer := 1;
        l_blob_len    Integer;
        p_data        Blob;
        file_name     Varchar2(256);
      Begin
        For rec In (Select ID
                      From HTMLDB_APPLICATION_FILES
                     Where Name = p_file_name)
        Loop
            Select BLOB_CONTENT, filename Into p_data, file_name From HTMLDB_APPLICATION_FILES Where ID = rec.ID;
            l_blob_len := DBMS_LOB.getlength(p_data);
            -- Reset the read position and chunk size for each file processed
            l_pos      := 1;
            l_amount   := 32767;
            l_out_file := UTL_FILE.fopen('UPDOWNFILES_DIR', file_name, 'wb', 32767);
            -- Copy the BLOB to the OS file in 32K chunks (<= so the final chunk is not skipped)
            While l_pos <= l_blob_len
            Loop
              DBMS_LOB.Read(p_data, l_amount, l_pos, l_buffer);
              If l_buffer Is Not Null Then
                UTL_FILE.put_raw(l_out_file, l_buffer, True);
              End If;
              l_pos := l_pos + l_amount;
            End Loop;
            UTL_FILE.fclose(l_out_file);
        End Loop;
      Exception
        When Others Then
          -- Close the OS file on error; note the error itself is not re-raised here
          If UTL_FILE.is_open(l_out_file) Then
            UTL_FILE.fclose(l_out_file);
          End If;
      end;
    And here is a procedure that will download a file directly from the file system:
      Procedure download_my_file(p_file In Number) As
        v_length    Number;
        v_file_name Varchar2(2000);
        Lob_loc     Bfile;
      Begin
        -- Look up the stored file name for the requested file id
        Select file_name
          Into v_file_name
          From UpDownFiles F
         Where File_id = p_file;
        -- Point a BFILE locator at the file in the directory object
        Lob_loc  := bfilename('UPDOWNFILES_DIR', v_file_name);
        v_length := dbms_lob.getlength(Lob_loc);
        -- Send download headers, then stream the file to the browser
        owa_util.mime_header('application/octet', False);
        htp.p('Content-length: ' || v_length);
        htp.p('Content-Disposition: attachment; filename="' || SUBSTR(v_file_name, INSTR(v_file_name, '/') + 1) || '"');
        owa_util.http_header_close;
        wpg_docload.download_file(Lob_loc);
      End download_my_file;
    I could put a sample application on apex.oracle.com, but it wouldn't be able to access the file system on that server.
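
    One prerequisite both procedures share is the UPDOWNFILES_DIR directory object. A minimal sketch of creating it, assuming a SYSDBA connection; the OS path and the grantee schema are placeholder assumptions:
      # Minimal sketch: create the directory object the procedures above rely on.
      echo "create or replace directory UPDOWNFILES_DIR as '/u01/updownfiles';" | sqlplus -s "/ as sysdba"
      echo "grant read, write on directory UPDOWNFILES_DIR to MY_APEX_SCHEMA;" | sqlplus -s "/ as sysdba"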

  • Open library from AFP share - Unsupported File System - Aperture 3

    Testing Aperture 3 - Exported a project as a library to an AFP (10.5) share.
    I then tried to reopen the library and got an error Unsupported File System ?
    Is there possibly a network file system that will support Aperture 3 libraries ?

    Temp solution in this thread - tested it and it works
    http://discussions.apple.com/thread.jspa?threadID=2330306&tstart=30
    I have found a workaround for this issue. Do the following:
    First, upgrade the library to version 3 using Aperture on the computer where you have the disk containing the library; if this is not possible, copy the library to the computer running Aperture and do the update there, then copy the updated library back.
    Then navigate to the network disk where you have the updated library and double-click it: a dialog box will come out saying something like <There was an error opening the database for the library "/Volumes/Media/Pictures/Aperture Library.aplibrary". The library could not be opened because the file system of the library's volume is unsupported> Click OK.
    A new dialog box will appear asking "Which library do you want Aperture to use?" along with a list of all libraries; among them, the library situated on the network disk. Select it and click "Choose".
    Voila, after a delay depending on the speed of your network and remote disk, the library will open.
    Note that this "show" will repeat each time you start up Aperture. Anyway, it's better than nothing. I hope that Apple will correct this soon.
    And yes, I am using the Trial Version; I will buy a license only after Apple solves this issue!

  • How to delete file systems from a Live Upgrade environment

    How to delete non-critical file systems from a Live Upgrade boot environment?
    Here is the situation.
    I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
    Current layout
    Disk 0: 16 GB:
    /dev/dsk/c0t0d0s0 1.9G /
    /dev/dsk/c0t0d0s1 692M /usr/openwin
    /dev/dsk/c0t0d0s3 7.7G /var
    /dev/dsk/c0t0d0s4 3.9G swap
    /dev/dsk/c0t0d0s5 2.5G /tmp
    Disk 1: 16 GB:
    /dev/dsk/c0t1d0s0 7.7G /usr
    /dev/dsk/c0t1d0s1 1.8G /opt
    /dev/dsk/c0t1d0s3 3.2G /data1
    /dev/dsk/c0t1d0s4 3.9G /data2
    Disk 2: 33 GB:
    /dev/dsk/c0t2d0s0 33G /data3
    The data file systems are not in use right now, and I was thinking of
    partitioning the data3 into 2 or 3 file systems and then creating
    a new BE.
    However, the system already has a BE (named s10) and that BE lists
    all of the filesystems, incl the data ones.
    # lufslist -n 's10'
    boot environment name: s10
    This boot environment is currently active.
    This boot environment will be active on next system boot.
    Filesystem fstype device size Mounted on Mount Options
    /dev/dsk/c0t0d0s4 swap 4201703424 - -
    /dev/dsk/c0t0d0s0 ufs 2098059264 / -
    /dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
    /dev/dsk/c0t0d0s3 ufs 8390375424 /var -
    /dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
    /dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
    /dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
    /dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
    /dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
    /dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
    I browsed the Solaris 10 Installation Guide and the man pages
    for the lu commands, but can not find how to remove the data
    file systems from the BE.
    How do I do a live upgrade on this system?
    Thanks for your help.

    Thanks for the tips.
    I commented out the entries in /etc/vfstab, also had to remove the files /etc/lutab and /etc/lu/ICF.1
    and then could create the Boot Environment from scratch.
    I was also able to create another boot environment and copy into it, but now I'm facing a different problem: an error when trying to upgrade.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    s10                        yes      yes    yes       no     -        
    s10u6                      yes      no     no        yes    -
    Now, I have the Solaris 10 Update 6 DVD image on another machine
    which shares out the directory. I mounted it on this machine,
    did a lofiadm and mounted that at /cdrom.
    # ls -CF /cdrom /cdrom/boot /cdrom/platform
    /cdrom:
    Copyright                     boot/
    JDS-THIRDPARTYLICENSEREADME   installer*
    License/                      platform/
    Solaris_10/
    /cdrom/boot:
    hsfs.bootblock   sparc.miniroot
    /cdrom/platform:
    sun4u/   sun4us/  sun4v/
    Now I did luupgrade and I get this error:
    # luupgrade -u -n s10u6 -s /cdrom    
    ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
    ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
    I find it strange that this sparc machine is complaining about x86.miniroot.
    BTW, the machine on which the DVD image is happens to be x86 running Sol 10.
    I thought that wouldn't matter, as it is just NFS sharing a directory which has a DVD image.
    What am I doing wrong?
    Thanks.
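
    Coming back to the original question of keeping the data file systems out of the new BE: as far as I understand Live Upgrade, only the file systems you name with -m are copied into the boot environment, while other (shareable) file systems such as the data slices stay shared. A minimal sketch of such a lucreate call; the target slices on the third disk are purely illustrative assumptions:
      # Minimal sketch -- the slice layout on c0t2d0 is an illustrative assumption.
      lucreate -n s10u6 \
        -m /:/dev/dsk/c0t2d0s0:ufs \
        -m /usr:/dev/dsk/c0t2d0s1:ufs \
        -m /var:/dev/dsk/c0t2d0s3:ufs \
        -m /usr/openwin:/dev/dsk/c0t2d0s4:ufs
      # /data1, /data2 and /data3 are not listed, so they are not copied into the new BE.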
