Oracle 10gR2 on Solaris 10 - Disk space question

I am going to install Oracle 10gR2 on Solaris 10 (x86) and need some advice on disk space allocation. Altogether I have 10 GB allocated for both Solaris and Oracle in VMware. My current allocation is this:
/ ------------> 6177 MB
swap ---------> 1024 MB
/tmp ---------> 400 MB
/export/home -> 2570 MB
Is this enough to install a basic Oracle database? I am going to use it only for learning purposes.

It is OK to start, but I suggest you read the:
Oracle® Database Installation Guide
10g Release 2 (10.2) for Solaris Operating System (x86)
Part Number B15697-01
for further requirements and a complete step-by-step guide.
~ Madrid.
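For a quick sanity check against those numbers, something like the following can be run before launching the installer. The ~2.9 GB figure is a rough assumption pieced together from the 10gR2 software plus starter-database footprint; the install guide above is the authoritative source.

```shell
#!/bin/sh
# Rough pre-install space check (a sketch; the REQUIRED_MB figure is an
# assumption - consult the install guide for the real requirements).
TARGET=${1:-/}               # filesystem that will hold ORACLE_HOME
REQUIRED_MB=2900
# df -k reports 1 KB blocks; column 4 is "avail" on Solaris and Linux alike.
AVAIL_MB=$(( $(df -k "$TARGET" | awk 'NR==2 {print $4}') / 1024 ))
echo "available on $TARGET: ${AVAIL_MB} MB (want roughly ${REQUIRED_MB} MB)"
```

Remember that /tmp needs roughly 400 MB free on its own during the install, separate from the ORACLE_HOME filesystem.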

Similar Messages

  • Why can't Oracle Linux 5 recover disk space after deleting files?

    Folks,
    Hello. When I created a virtual machine running Oracle Linux 5 to install EBS R12, I allocated 300 GB to the VM. I downloaded the EBS R12 source files (45.7 GB) and unzipped them (46 GB), so the total size of the stage area directory (EBS_R12) is 91.7 GB.
    Because some files were corrupted and the install could not proceed, I moved the EBS_R12 folder to the trash but could not empty the trash. I moved EBS_R12 from the trash back to the directory, then as the root user used "rm -rf" to delete EBS_R12 completely.
    I shut down Oracle Linux 5 and restarted it, but the 91.7 GB of disk space was not recovered.
    I downloaded some other files into Oracle Linux and then deleted them, but that disk space could not be recovered either.
    It seems the OEL5 virtual disk can only expand and cannot be reduced. The VM disk space is less than 200 GB now, which is not enough to install EBS R12.
    Can anyone tell me how to get the 91.7 GB (and then some) of disk space back?

    Folks,
    Hello. Thanks a lot for replying.
    The host OS is Windows 7; the guest OS is Oracle Linux 5 on top of VMware Player 3. The VM for Oracle Linux 5 was created on an external USB drive, not on the local hard disk.
    There is a directory /tmp/VMwareDnd/376c7cae/EBS_R12 in the Oracle Linux 5 file system. I copied and pasted the EBS_R12 folder from Windows 7 into Oracle Linux 5; all of the zip files and unzipped files are in the EBS_R12 folder.
    After EBS_R12 was deleted, the external USB drive (F:) did not regain the 91.7 GB of disk space.
    My questions are:
    Where do I run the "boot> linux rescue" command?
    Does this command work correctly on an external USB drive?
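    For what it's worth, two separate things are usually going on in a case like this. Inside the guest, a deleted file only returns its blocks once every process that holds it open closes it. And on the host side, a growable VMware virtual disk never shrinks on its own: even after the guest frees the space, the .vmdk on the F: drive stays at its grown size until it is compacted (e.g. via the shrink function in VMware Tools). A sketch of the guest-side checks (paths are generic):

```shell
#!/bin/sh
# Is the space really free at the filesystem level inside the guest?
FREE_KB=$(df -k /tmp | awk 'NR==2 {print $4}')
echo "free in /tmp: ${FREE_KB} KB"
# A deleted file keeps its blocks until every process that has it open
# closes it; list open-but-unlinked files (link count 0) if lsof exists.
command -v lsof >/dev/null && lsof +L1 2>/dev/null
true
```

    If df inside the guest shows the space as free but Windows still shows F: as full, the remaining step is host-side vmdk compaction, not anything run from "linux rescue".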

  • CF8 on Solaris 10 - disk space error

    Trying to install CF8 on a Solaris 10 box (with zones).  The binary installer gives the following error:
    : root:  /var/tmp> ./Coldfusion-8-sol.bin
    Preparing to install...
    WARNING: /tmp does not have enough disk space!             
             Attempting to use / for install base and tmp dir.
    expr: non-numeric argument
    WARNING! The amount of / disk space required to perform
    this installation is greater than what is available.  Please
    free up at least 0 kilobytes in / and attempt this
    installation again.  You may also set the IATEMPDIR environment
    variable to a directory on a disk partition with enough free
    disk space.  To set the variable enter one of the following
    commands at the UNIX command line prompt before running this
    installer again:
    - for Bourne shell (sh), ksh, bash and zsh:
         $ IATEMPDIR=/your/free/space/directory
         $ export IATEMPDIR
    - for C shell (csh) and tcsh:
         $ setenv IATEMPDIR /your/free/space/directory
    : root:  /var/tmp> IATEMPDIR=/var/tmp
    : root:  /var/tmp> echo $IATEMPDIR
    /var/tmp
    : root:  /var/tmp> ls -l
    total 756849
    -rwxr-xr-x   1 root     root     387179372 Nov  5 14:53 Coldfusion-8-sol.bin
    : root:  /var/tmp> df -k
    Filesystem            kbytes    used   avail capacity  Mounted on
    /                          0 2762463 467923523     1%    /
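    One thing visible in the transcript: IATEMPDIR is assigned in the shell but never exported, so the installer (a child process) will not see it; the echo works only because it runs in the same shell. The 0 in df's kbytes column for / is also probably what trips the installer's expr arithmetic. A minimal sketch of the fix, assuming /var/tmp has enough room:

```shell
#!/bin/sh
# Export the variable so the installer (a child process) inherits it;
# assigning it without export only changes the current shell.
IATEMPDIR=/var/tmp
export IATEMPDIR
# Confirm a child process really sees it before re-running the .bin:
sh -c 'echo "child sees IATEMPDIR=$IATEMPDIR"'
```

    After exporting, re-run ./Coldfusion-8-sol.bin from the same shell.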

  • Oracle 10gR2 on Solaris 10 - Zoned Machine getting ORA-27102 Errors

    I am getting memory issues on Solaris 10. I have configured the database with only 2 GB of SGA but am still getting ORA-27102, and when we look at the top command it seems not all of the memory is available. The machine is divided into two zones; it has 32 GB of memory and 8 processors. I have set the shared memory maximum to 8 GB.
    Any help in this regard would be appreciated. Does anything special need to be done in zones?
    Thanks,
    Atul
    And these are the details of prctl $$
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    process.max-port-events
            privileged      65.5K       -   deny                                 -
            system          2.15G     max   deny                                 -
    process.max-msg-messages
            privileged      8.19K       -   deny                                 -
            system          4.29G     max   deny                                 -
    process.max-msg-qbytes
            privileged      64.0KB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    process.max-sem-ops
            privileged        512       -   deny                                 -
            system          2.15G     max   deny                                 -
    process.max-sem-nsems
            privileged        512       -   deny                                 -
            system          32.8K     max   deny                                 -
    process.max-address-space
            privileged      16.0EB    max   deny                                 -
            system          16.0EB    max   deny                                 -
    process.max-file-descriptor
            basic             256       -   deny                             12615
            privileged      65.5K       -   deny                                 -
            system          2.15G     max   deny                                 -
    process.max-core-size
            privileged      8.00EB    max   deny                                 -
            system          8.00EB    max   deny                                 -
    process.max-stack-size
            basic           8.00MB      -   deny                             12615
            privileged      8.00EB      -   deny                                 -
            system          8.00EB    max   deny                                 -
    process.max-data-size
            privileged      16.0EB    max   deny                                 -
            system          16.0EB    max   deny                                 -
    process.max-file-size
            privileged      8.00EB    max   deny,signal=XFSZ                     -
            system          8.00EB    max   deny                                 -
    process.max-cpu-time
            privileged      18.4Es    inf   signal=XCPU                          -
            system          18.4Es    inf   none                                 -
    task.max-cpu-time
            system          18.4Es    inf   none                                 -
    task.max-lwps
            system          2.15G     max   deny                                 -
    project.max-contracts
            privileged      10.0K       -   deny                                 -
            system          2.15G     max   deny                                 -
    project.max-device-locked-memory
            privileged      1.96GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    project.max-port-ids
            privileged      8.19K       -   deny                                 -
            system          65.5K     max   deny                                 -
    project.max-shm-memory
            privileged      7.40GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    project.max-shm-ids
            privileged        128       -   deny                                 -
            system          16.8M     max   deny                                 -
    project.max-msg-ids
            privileged        128       -   deny                                 -
            system          16.8M     max   deny                                 -
    project.max-sem-ids
            privileged        128       -   deny                                 -
            system          16.8M     max   deny                                 -
    project.max-crypto-memory
            privileged      7.84GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    project.max-tasks
            system          2.15G     max   deny                                 -
    project.max-lwps
            system          2.15G     max   deny                                 -
    project.cpu-shares
            privileged          1       -   none                                 -
            system          65.5K     max   none                                 -
    zone.max-lwps
            system          2.15G     max   deny                                 -
    zone.cpu-shares
            privileged          1       -   none                                 -
            system          65.5K     max   none                                 -

    There are no trace files to look at: the alert log has no relevant entries, and bdump and udump contain no trace files.
    I was more interested to know whether somebody has installed Oracle 10g on Solaris 10 inside a zone or container. What parameter settings were done on the Unix side - was it only shared memory that was defined? Has anybody come across memory issues after installing Oracle 10g on a Solaris 10 machine that has been divided into two or more zones?
    Thanks,
    Atul
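    In case it helps, on Solaris 10 the System V shared-memory limit is a per-project resource control rather than an /etc/system tunable, and the prctl output above shows project.max-shm-memory capped at 7.40 GB for the privileged level. A sketch of raising it inside the zone for a hypothetical user.oracle project (the project name and the 8G value are assumptions for illustration):

```
# Run as root inside the zone. Create a project for the oracle user if
# one does not exist, then give it an 8 GB shared-memory cap:
projadd -U oracle user.oracle
projmod -s -K "project.max-shm-memory=(privileged,8G,deny)" user.oracle
# Verify what a fresh oracle login will actually get:
prctl -n project.max-shm-memory -i project user.oracle
```

    The instance must be restarted from a session that actually belongs to that project for the new cap to take effect.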

  • Error installing Oracle 10gR2 on Solaris Express Developer Edition

    Hi
    I'm trying to install Oracle Db 10gR2 on Solaris Express Developer Edition 1/08 for x86 but it fails with the following link error:
    INFO: /export/home/u01/app/oracle/oracle/product/10.2.0/db_1/bin/genclntsh
    INFO: ld: fatal:
    INFO: symbol `ntcontab' in file /export/home/u01/app/oracle/oracle/product/10.2.0/db_1/lib/libn10.a(ntcontab.o): section [3] .data: size 0x60: symbol (address 0, size 0x70) lies outside of containing section
    INFO: ld:
    INFO: fatal:
    INFO: File processing errors. No output written to
    INFO: /export/home/u01/app/oracle/oracle/product/10.2.0/db_1/lib/libclntsh.so.10.1
    INFO:
    INFO: genclntsh: Failed to link libclntsh.so.10.1
    INFO: *** Error code 1
    INFO: make: Fatal error: Command failed for target `client_sharedlib'
    Any ideas would be appreciated.
    Thanks

    If your version of Solaris is not supported (check the support matrix), you are on your own. This is likely the case here.

  • Oracle taking 3 times more disk space than MySQL for BLOBs?

    Hello there,
    We have been migrating one big (about 8 TB) MySQL table into Oracle 11gR2. The table contains biometric data in 18 BLOB columns - it is a kind of storage table for all biometric information in BLOB form. The average row length is about 140 KB. We made a rough estimate that it might take about 12 TB of disk space after migrating into Oracle. We planned to put this table in a tablespace with a 32K block size. The table contains about 85 million records.
    After we started loading the data, we discovered that about 20 million records had already consumed more than 7 TB of disk space! At this rate, we might end up consuming 30 TB after loading all 85 million records. The only thing I am considering at the moment is reducing the PCTFREE parameter from 10% to 2% - the records will be updated very rarely. That would save about 2 TB of the 30 TB, I guess.
    Off the top of your head, do you have any quick thoughts about the possible reasons for such abnormal disk consumption - it is taking about 3 times more disk than the MySQL database. What would be the best way to debug the problem - where should we look?
    FYI, all the storage parameters of the table were left at their defaults; we only specified the tablespace name when creating the table. We have been using SQL*Loader in DIRECT mode to load the data.
    Thanks for your cooperation!
    Regards.

    Hi,
    We are using the old BLOB - and not compressed yet.
    The machine is not remotely accessible, I will get the DDL later on.
    FYI, we did not specify any storage options for the table except the tablespace name. Earlier, I looked at dba_lobs, dba_segments and dba_extents and found the following:
    * All the BLOB columns have an initial_extent of 163840 [from dba_segments]
    * Block allocation increases gradually, not uniformly (extent_id 1 -> 2 blocks, ... extent_id 14 -> 32 blocks, ... extent_id 77 -> 256 blocks, ... extent_id 938 -> 2048 blocks) [from dba_extents]
    * CHUNK is 32768 for all LOB segments
    * RETENTION is 900
    * No value for PCTVERSION
    Thanks for your time!
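    One hypothesis worth checking against those numbers: out-of-line LOB data is allocated in whole CHUNK units, so with CHUNK 32768 every stored LOB occupies a multiple of 32 KB - and with 18 BLOB columns averaging ~140 KB per row, many individual LOBs may be far smaller than one chunk, inflating the allocation several-fold. A sketch of comparing actual LOB data length with allocated segment space (the connection method, tablespace name, and table/column names are assumptions for illustration):

```
sqlplus -s / as sysdba <<'SQL'
-- Space allocated per segment type in the 32K tablespace:
SELECT segment_type, ROUND(SUM(bytes)/1024/1024/1024, 1) AS alloc_gb
FROM   dba_segments
WHERE  tablespace_name = 'BIOMETRIC_32K'   -- assumption: your tablespace name
GROUP  BY segment_type;
-- Actual data length held in one of the LOB columns:
SELECT ROUND(SUM(DBMS_LOB.GETLENGTH(FINGER1))/1024/1024/1024, 1) AS data_gb
FROM   biometric_table;                    -- assumption: table/column names
SQL
```

    A large gap between data_gb and alloc_gb for the LOB segments would point at chunk-rounding overhead rather than PCTFREE.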

  • ORA-27102: out of memory. Failed to install Oracle 10gR2 on Solaris 10

    Hi, I want to install Oracle on my Solaris machine. I have 2.5 GB of RAM and more than 5 GB of swap. But the ORA-27102: out of memory error occurred during the Oracle Database Configuration Assistant step (copying database files, creating and starting the Oracle instance).
    The only warnings were the ones below:
    Checking kernel parameters
    Checking for BIT_SIZE=64; found BIT_SIZE=64. Passed
    Checking for shmsys:shminfo_shmmax=4294967295; found no entry. Failed <<<<
    Checking for shmsys:shminfo_shmmni=100; found no entry. Failed <<<<
    Checking for semsys:seminfo_semmni=100; found no entry. Failed <<<<
    Checking for semsys:seminfo_semmsl=256; found no entry. Failed <<<<
    Check complete. The overall result of this check is: Failed <<<<
    Problem: The kernel parameters do not meet the minimum requirements (see above).
    What's the reason and how may I resolve it?
    Thanks!

    I set some kernel parameters, such as set shmsys:shminfo_shmmax=4294967295 and so on, restarted the computer, and ran dbca in advanced mode. There is a stage that asks for the size of the flash recovery area, whose default is 2048 MB. WOW - out of memory on my PC, of course.
    So I think it is this parameter value during installation that brings on the ORA-27102 problem. However, during the first installation there was no prompt for me to enter the parameter - or I didn't see one.

  • Disk Space Question

    Hey all,
    Just had a simple Q. I just recently ran out of disk space, and I have no idea how. I've even gotten the message "Disk Too Slow." I was just curious if there is a program, or even something already on the mini, that can put everything in order of size, or somewhere I can see what's taking up so much room. And yes, I've already checked the common music, movies and pictures folders, and they don't add up to the 80. And if programs are the case, then that's where I would like to see what's largest. Who knows... maybe I gotta do it the hard way (Get Info... Get Info... Get Info). Well, I'd appreciate any assistance... Thanks!

    Hey Drew,
    Without really knowing the situation, I'd first ask if you have an external HD to copy your internal drive to. Then I'd ask: how did you fill up 80 gigs? I know, I know, it can be done easily enough, but let's get real - are you downloading the whole internet? Don't! It will be there tomorrow; just bookmark it.
    Then I'd ask if you are running the normal cron scripts (daily, weekly, monthly), whether you have deleted the language files of programs that you don't use, and whether you keep every version of every document that you create.
    You know that certain programs use your HD as virtual RAM to temporarily store what you are creating, and the system also uses a large chunk of the disk for temporary (virtual) memory? OS X is notorious for this. But this still doesn't explain your case.
    I have over 5 days of music on my internal HD, as well as a huge collection of Photoshop images that I use for full-page magazine adverts, and hundreds of e-books - philosophical, biblical, theological, ancient, and more. AND I still have close to 50 gigs left. Do you copy all your DVDs?
    It doesn't matter to me; you have to choose what is important to you and what you want on your HD.
    In other words, you have to give us better information for me, or someone else here, to help with your dilemma.
    Disk Too Slow message
    What program were you running when this appeared? The internal HD of most of the original Minis is not the best, but I have never seen this message.
    Sorry I'm of no help at the moment, but we'll get there - if not by me, then by someone with more knowledge, and with more facts from you.

  • Rudimentary disk space question

    Howdy,
    My HD is full, but I've got an external hard drive with 500 free gigs attached. Every time I try to save or render my files in FCP 5, it gives me an "Out of disk space" error. I've gone into the System Settings and cleared my hard drive as a scratch disk (leaving only my external hard drive), but I still can't render or save without getting this same error. How do I tell FCP 5 to treat this external hard drive as my new destination for everything I want to save?
    Thanks in advance.

    Did you do the obvious and empty the Trash?
    Where are your Render Files set to go to?
    You can set the scratch disk destination in FCP by going to the menu bar: Final Cut Pro > System Settings.

  • MacBook Pro Disk Space Question and Time Machine

    Hi all,
    I know this is probably a "noob" question, but I have been backing up my laptop (2009 Mac OS X 10.6.8) onto an external hard drive set up for Time Machine, and now my MacBook Pro only has 1.15 GB available of free space after the many years of usage and saving things onto it (such as images, movies, etc.). I had a question: Are these files backed up onto Time Machine and so can I delete them from my laptop without losing them? I am concerned about losing them and am not sure how Time Machine really works; if I delete these files from my laptop in order to free up space, then backup my computer onto Time Machine, will they be deleted from the Time Machine drive also?? I need to free up a lot of space on my laptop but also don't want to lose any pictures and videos from over the years.
    Thanks in advance.

    My question is, how do I effectively "copy over" files onto my new external hard drive? Is it as easy as connecting the hard drive via usb
    Easy as drag and drop, yes indeed. You could almost do it with your eyes closed.
    The entire user account? No - just grab all the files you created, saved, or worked on, ALL VITAL DATA you "don't dare lose," and drag it over; make folders on the HD showing where things are, etc.
    There are of course a thousand ways to organize folders and data on an external HD; pick what suits you.
    Keep it simple.

  • IPad Disk Space Question

    I have the 64GB 3G version of the iPad. I use it primarily for apps; currently about 3GB worth. I also have about 3GB worth of music on it, no photos and no videos.
    I've noticed that as I download new apps, and updates to my already installed apps, my "Other" category (The orange section) increases.
    I've found that even just updating already installed apps, my "Other" category increases, and not by a lot, depending on the app, it could be as little as 1MB and as much as 10MB. Currently, I'm at about 410MB of "Other".
    Is this normal and why does this happen? Curiosity is striking me.
    Also, just another small, off-the-wall question... is there a wrong way to close an app? In other words, when I'm finished in an app, is pressing the "Home" button at any time okay? I wasn't sure if closing out in the middle of a game or something is bad for it, much like it could be for a PC program.
    My iPad and I are new to each other, so thank you in advance for your help.
    Justin

    I'm sort of confused here... I've gone ahead and removed two apps and installed two new apps by way of syncing through iTunes just now...
    The apps I installed were smaller in size than the two apps I removed.
    I downloaded my two new apps, hit the black X's on the two apps I wanted to remove, synced it all up, and my "Other" category STILL increased by about 3MB.
    Now, this may be totally normal; or it may not be. I'm trying to better understand if I'm doing this all correctly, and if, indeed, my "Other" category should still be increasing, even though the amount of data I have on here is actually about 2MB less than what I had before... Thanks everyone!

  • Oracle 10gR2 installation problem on Solaris (sparc-64 bit)

    Hi All,
    I am unable to install Oracle 10gR2 on Solaris. I cannot understand what is wrong - could anybody help me with that?
    Regards
    Harpreet Singh

    I also have a similar problem: when I insert the key in my car, sometimes it does not start. I cannot understand what is wrong - could you help me?
    Otherwise: it's been raining lately and our crystal balls are somewhat foggy, therefore we cannot guess what your problem is.

  • Step 6 error: insufficient disk space in Oracle Linux

    Hi, I am at step 6 of installing Oracle on Oracle Linux and am getting "insufficient disk space on this volume for the selected Oracle home."
    I have these directories:
    oracle_base = /u01/app/oracle/product
    software location = /u01/app/oracle/product/11.2.0/db_1
    database file location = /u01/app/oracle/product/oradata
    How can I check the space? Because I believe I have enough space.
    Edited by: Tshifhiwa on 2012/01/27 2:40 PM
    I typed free -m:
                 total       used       free     shared    buffers     cached
    Mem:          1988       1844        143          0         36       1365
    -/+ buffers/cache:        443       1545
    Swap:         3999          0       3999
    Edited by: Tshifhiwa on 2012/01/27 2:40 PM
    Edited by: Tshifhiwa on 2012/01/27 3:44 PM
    Edited by: Tshifhiwa on 2012/01/27 4:05 PM

    Tshifhiwa wrote:
    hi am in step 6 installing my oracle oracle in oracle Linux am getting insufficient disk space on this volume for the selected oracle
    i have this dir oracle_base = /u01/app/oracle/product
    sofware location /u01/app/oracle/product/11.2.0/db_1
    database file location /u01/app/oracle/product/oradata
    how can i check space becuase is have enough space
    Edited by: Tshifhiwa on 2012/01/27 2:40 PM
    i type free -m
    total used free shared buffers cached
    Mem: 1988 1844 143 0 36 1365
    -/+ buffers/cache: 443 1545
    Swap: 3999 0 3999
    Edited by: Tshifhiwa on 2012/01/27 2:40 PM
    Edited by: Tshifhiwa on 2012/01/27 3:44 PM
    Edited by: Tshifhiwa on 2012/01/27 4:05 PM
    <Sigh!>
    http://lmgtfy.com/?q=check+disk+space+in+linux
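    To spell out what that search will tell you: free -m reports memory (RAM and swap), not disk. df answers "how much disk is left" for a given path, and du shows where it went. A small sketch (pass whichever mount point holds the Oracle base, e.g. /u01, as the first argument):

```shell
#!/bin/sh
# free -m reports RAM, not disk; df answers "how much disk is left".
TARGET=${1:-/tmp}            # e.g. pass /u01 for the Oracle base
df -h "$TARGET"
# Largest items directly under the target, biggest last:
du -sh "$TARGET"/* 2>/dev/null | sort -h | tail -5
true
```

    If df shows the filesystem holding /u01 with less free space than the installer asks for, the "insufficient disk space" message is correct regardless of what free -m says.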

  • Solaris 10 + Oracle 10gR2 RAC question

    Hello everyone
    Has anyone come across the case where the CRS services of Oracle cause
    the public interface to get turned off and then restored at random
    time intervals? To elaborate, we have a 2 node cluster database.
    Solaris 10, Oracle 10gR2 RAC with patch 10.2.0.3 applied. No SUN
    clustering is involved. When the cluster software is down (nodeapps,
    asm, database instances all down) /var/adm/messages show nothing. When
    we start nodeapps on the 2 nodes(thus initiating some form of
    communication between the nodes), at random time intervals we get
    "interface ce0 turned off and interface ce0 restored" in /var/adm/
    messages. When we check the status of the RAC, we see that one node's
    vip has been assigned to the other. This on/off behaviour of the NIC
    can be eliminated only if we continuously ping it from another
    client on the network.
    As a matter of fact, the RAC and the RDBMS work perfectly when we keep
    pinging the 2 nodes from an other client on the network. We even
    managed to run a long batch job, distributed on cluster managed
    services on the 2 instances, and it completed after 9 hours without
    any problems.
    Does anyone have a hint on this behaviour? Is there some sort of
    timeout for the network cards? Some power saving features? Googling
    around I came across the new Containers feature available on Solaris
    10. Is there a way that I can verify that either RAC or the RDBMS is
    running in "container" mode ( since the solaris and Oracle
    installation was not performed by me)? Any other ideas?
    Thank you for reading

    I'm an Oracle guy, not the SA type, but on ours the SA configured this cluster incorrectly. We use Veritas. Instead of making IPMP groups for the interfaces, he built the cluster according to the Veritas docs. That is, he has two publics on different interfaces and different privates on different interfaces. Oracle can only use two interfaces, whether each is an IPMP group or a single device name: one is used for the private interconnect and the other for the public network. So sure, the Veritas cluster filesystems will survive, but the Oracle cluster will not - nodes will reboot.
    Is your system set up incorrectly as I described above? If it is, a quick test would be to bring down the other interfaces and leave up only the two interfaces you mention above that you configured for Oracle CRS.
    Another sharp SA was able to go through the ARP table and see duplicate IPs - routing was being attempted via an interface that Oracle doesn't see. You cannot define two different interfaces as public and two different interfaces as private.

  • Newb Question - Running out of disk space after Solaris 10 install (x86)

    After installing Solaris 10 in my x86 environment on an 80 GB drive, I noticed I'm practically out of disk space. The space seems to be allocated in /devices/pci@0,0 and /devices/isa. There are several fd@8,0:a style block devices under isa that show a ridiculous size (e.g. 8589934592.0GB). Can someone give me some background on what is going on here, or what I need to do to fix it?
    Thanks.
    -Vinny_C

    Well, in typical newbie fashion, I actually was out of space. It turns out that during the install I did not allocate enough to the / (root) filesystem, as all the space was in my home mount.
    Cheers!
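    As the follow-up found, the giant sizes ls prints for nodes under /devices are an artifact of device special files, not data occupying the disk; measuring real usage per filesystem and per directory answers the question directly. A sketch (the directory list is illustrative):

```shell
#!/bin/sh
# Sizes ls reports for device nodes under /devices do not reflect real
# disk usage; measure actual usage per directory instead.
df -h /
USAGE=$(du -sk /tmp /var/tmp 2>/dev/null | sort -n)
echo "$USAGE"
```

    Running du -sk over each top-level directory of the / filesystem (skipping /devices, /proc, and other pseudo-filesystems) quickly shows which tree is actually consuming the root slice.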
