Solaris 10 /etc/project
Added the attribute "process.max-address-space" to project group.dba for user oracle. The system was rebooted.
A process started via the oracle crontab picked up the new value. The DB admin started the database after "su - oracle" from the root user; the DB processes did not pick up the new value.
The results of testing "su" as it relates to project attributes are as follows...
# su - oracle
# id -p
uid=101(oracle) gid=101(dba) projid=101(group.dba)
# prctl -n process.max-address-space -i process $$
process: 10729: -ksh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-address-space
privileged 4.00GB - deny OLD VALUE -
system 16.0EB max deny -
# exit
# su oracle
# id -p
uid=101(oracle) gid=101(dba) projid=101(group.dba)
# prctl -n process.max-address-space -i process $$
process: 10747: ksh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-address-space
privileged 5.00GB - deny NEW VALUE -
system 16.0EB max deny
This seems to be the opposite of what I would expect.
I thought your "hint" would solve the issue, but I get the same result...
$ projects -l
system
projid : 0
comment: ""
users : (none)
groups : (none)
attribs:
user.root
projid : 1
comment: ""
users : (none)
groups : (none)
attribs:
noproject
projid : 2
comment: ""
users : (none)
groups : (none)
attribs:
default
projid : 3
comment: ""
users : (none)
groups : (none)
attribs:
group.staff
projid : 10
comment: ""
users : (none)
groups : (none)
attribs:
group.dba
projid : 101
comment: ""
users : oracle
groups : (none)
attribs: process.max-address-space=(privileged,5368709120,deny)
project.max-shm-memory=(priv,4294967296,deny)
# user attributes. see user_attr(4)
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
adm::::profiles=Log Management
lp::::profiles=Printer Management
postgres::::type=role;profiles=Postgres Administration,All
root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no;min_label=admin_low;clearance=admin_high
oracle::::type=normal;project=group.dba
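As an aside (not from the original post), a hedged sketch of attaching a shell to the project without relying on su semantics: on Solaris 10, newtask(1) starts a new task in a named project, and the expectation (untested here) is that a fresh task re-reads the current /etc/project attributes. The project name group.dba is taken from the listing above; this is Solaris-only and will not run elsewhere.

```shell
# Sketch only (Solaris 10): start a new shell/task bound to project
# group.dba, then confirm which project and resource-control value
# the new task picked up.
newtask -p group.dba
id -p                                              # should show projid=101(group.dba)
prctl -n process.max-address-space -i process $$
```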
Similar Messages
-
Problem with /etc/project in Solaris 10
Hello,
I need your help to understand and have project work correctly under Solaris 10.
What seems pretty straightforward on paper doesn't work at all for me, and I can't figure out why!
Obviously I'm trying to make it work for an Oracle user.
I created a project called "oracle" with id 100.
Here is what you can find in my /etc/project:
oracle:100::oracle::process.max-file-descriptor=(priv,1024,deny);project.max-shm-memory=(priv,4294967296,deny)
Here is what you can find in my /etc/user_attr:
oracle::::project=oracle
Now, once I reboot and log in as oracle, I run id -p; here is the output:
uid=3000(oracle) gid=3002(oinstall) projid=0(system)
I run ulimit -a; here is the output:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) 0
nofiles(descriptors) 256 <------------------- should be 1024 no?
vmemory(kbytes) unlimited
Shouldn't I be part of project oracle (100)?
From what I understand, oracle is NOT part of the "oracle" project, and thus the settings are not applied at login.
What am I doing wrong, or what am I missing?
Thanks for your time.

I'd start by verifying the settings via proper OS methods, avoiding cat/grep wherever possible:
# logins -xl oracle
oracle 3000 oinstall 3002
/export/home/oracle
/bin/sh
PS 051110 -1 -1 -1
# grep project /etc/nsswitch.conf
project: files
# grep oracle /etc/user_attr
oracle::::project=oracle
# projects oracle
default oracle
# projects -l oracle
oracle
projid : 100
comment: ""
users : oracle
groups : (none)
attribs: process.max-file-descriptor=(priv,1024,deny)
project.max-shm-memory=(priv,4294967296,deny)
# ssh -l oracle 0
Password:
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ id -p
uid=3000(oracle) gid=3002(oinstall) projid=100(oracle) -
Determine how much to allocate for /etc/project
Question: how do I determine how many resources a particular project is actually using on a Solaris 10 server? I don't want to over-allocate, especially max-shm-memory for a particular project.
Current setting in /etc/project
user.test:101:Test project:::project.max-sem-ids=(privileged,1024,deny);project.max-sem-nsems=(privileged,512,deny);project.max-shm-ids=(privileged,512,deny);project.max-shm-memory=(privileged,2048000000,deny)
Thank you
Mike

Thanks, wsanders. I ran ipcs -b and added up the values in column 7 of the shared-memory section to determine the amount of shared memory used. -
Mapping onto Solaris 10 projects in an MP configuration
I have a scenario where we have a Tuxedo 8.1 MP application running on 3 nodes in a virtual environment. The challenge is to map the user to a Solaris 10 "project" with sufficient kernel parameters. This is not an issue on the master node, as we can run "newtask -p <project>" in a script prior to running tmboot. However, the remote nodes run out of kernel resources, as I haven't found a way to map the remote node to an appropriate project. I have tried entering "newtask -p <project>" in the ENVFILE, but that doesn't seem to work.
SOLVED:
Disabled ECC in the BIOS (though Windows and Gentoo Linux show no trouble whatsoever with my ECC memory, Solaris doesn't like it) -
Solaris 10 resource controls - /etc/system vs /etc/projects
Can someone please explain why, if we set the max shared memory segment in the /etc/system file using 'set shmsys:shminfo_shmmax=4294967296', we see 800GB instead of 4GB when running 'prctl -n project.max-shm-memory':
/etc/system:
* Oracle 10.2.0 parameters
set shmsys:shminfo_shmmax = 4294967295
set shmsys:shminfo_shmmin = 1
set shmsys:shminfo_shmmni = 200
set shmsys:shminfo_shmseg = 20
set semsys:seminfo_semmni = 100
set semsys:seminfo_semmsl = 260
set semsys:seminfo_semmns = 1024
set semsys:seminfo_semopm = 100
set semsys:seminfo_semvmx = 32767
set rstchown = 0
* Setting in for Oracle 10 upgrade
set noexec_user_stack = 1
With /etc/system populated running prctl produces:
# /bin/prctl -n project.max-shm-memory -i process $$
process: 3428: sh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 800GB - deny -
What is setting the maximum size for a shared segment to 800GB when using the /etc/system file to set the parameter?
If we remove the settings from /etc/system and use the normal default projects settings we get 1/4 of the physical memory, which is what we would expect to see.
Please note we will be using projects to control resources - I am just curious about the effect of the /etc/system [set shmsys:shminfo_shmmax=4294967296] setting above.

I have since found the answer: the 800GB comes from shminfo_shmmax multiplied by shminfo_shmmni. This has been confirmed on another system with different values.
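The arithmetic can be checked directly against the /etc/system fragment above (shmmax 4294967295, shmmni 200):

```shell
# shminfo_shmmax * shminfo_shmmni, expressed in binary GB;
# reproduces the 800GB figure prctl reported.
awk 'BEGIN { printf "%.0fGB\n", 4294967295 * 200 / (1024 * 1024 * 1024) }'
```

This prints 800GB, matching the prctl output.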
-
Solaris 10 Project Machine from scratch
We learn by doing. For the purpose of learning Solaris, I want to take an unused Dell 400SC, with 2GB of RAM, a 40GB IDE drive, and a 500GB SATA drive, and make a storage container that I can access from my Macs and a Dell laptop running Windows XP via the local network (no domain) to start. This is a project machine with nothing of value on it, so I can afford to make mistakes. The goal here is to learn and get some fundamentals down before pursuing some in-depth training.
So, I am looking for some guidance. First, I am trying to figure out a list of tasks to perform, and an order to perform them in.
I downloaded 10/09, and it installed without a hitch. I haven't a clue what to do next :) Update would be my guess...
So let's hear it. You've all been here.
Your expertise, and kindness is appreciated
Scott
Lancaster, PA USA

Thanks, Xaheer, for replying...
I have been reading the online documentation, to the point of numbness of mind... haha
What I was hoping for here was a very generic reply in terms of an order of progression for my project - some structure to the big picture of the simple task I'm trying to achieve.
For example: 1. install > 2. update the latest patches via Update Manager (?) > 3. add the storage drives > 4. set up users > 5. configure containers > 6. set up shares, or some access for Mac OS X via LAN
"Best practices" or something like that - tapping off your experience - then I can set about studying, reading, and practicing each area as I go (knowing I'm not getting ahead of myself, or doing something I'll have to undo later) OR MISSING something important.
So tonight, I've re-installed Oracle Solaris 10 (10/09) with the ZFS root file system option (instead of the UFS default install).
Two things to do next might be:
1. Fix the 'unknown' host name (which I'm reading about now)
2. Patching best Practices (or is this outdated)
a) install the latest patch and package utility patches first
b) make sure Sun Alert patch cluster is up to date
c) Live Update or Update Manager
I don't mind doing the work behind the research and reading, and I'm not looking for you guys to 'do it for me' - simply advice and suggestion to keep me on the right path, or at least not too far off of it :)
Again... thanks for your time, and your advice.
Scott -
Java timezone vs Solaris /etc/TIMEZONE
Hi, I've a perplexing (but rather interesting) problem.
On our old Solaris 6 box, /etc/TIMEZONE contain these lines
TZ=EAT-8
LC_COLLATE=en_US
LC_CTYPE=en_US
LC_MESSAGES=C
LC_MONETARY=en_US
LC_NUMERIC=en_US
LC_TIME=en_US

When I type date on the Solaris command line, I get this response:
# date
Thu Oct 14 09:29:24 EAT 2004
When I run a test program I get different date and time
import java.util.Date;
import java.util.TimeZone;

public class DateTest {
    public static void main(String[] args) {
        String[] timeZone = {"GMT", "GMT-1", "GMT-2", "GMT-3", "GMT+8", "EAT", "EAT-8", "Hongkong"};
        System.out.println("DEFAULT TIME ZONE");
        System.out.println("Date = " + new Date());
        for (int kk = 0; kk < timeZone.length; kk++) {
            TimeZone.setDefault(TimeZone.getTimeZone(timeZone[kk]));
            System.out.println(" DATE: " + new Date() + " TIME ZONE: " + timeZone[kk]);
        }
    }
}

DEFAULT TIME ZONE
Date = Thu Oct 14 04:29:29 GMT+03:00 2004
DATE: Thu Oct 14 02:29:29 GMT+01:00 2004 TIME ZONE: GMT
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: GMT-1
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: GMT-2
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: GMT-3
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: GMT+8
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: EAT
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: EAT-8
DATE: Thu Oct 14 04:29:29 GMT+03:00 2004 TIME ZONE: Hongkong
One would think that EAT-8 would get the same result, but no!
How does one get Java to return the same result as date on the command line?

I don't pretend to be an expert on this %!&*%# mess they've made of calendars, dates, and times, but I have learned that Dates / the Date class won't play nicely with the other classes. I only use Dates with their default values. Here's something that sets the timezone as you were trying to do.
import java.text.DateFormat;
import java.util.Date;
import java.util.TimeZone;
public class DateTest {
    public static void main(String[] args) {
        Date now = new Date();
        System.out.println("DEFAULT Date = " + now);
        DateFormat df = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.FULL);
        String[] timeZone = {"America/Los_Angeles", "MST", "JST", "US/Hawaii", "Etc/GMT-4", "CET", "GMT", "Hongkong"};
        for (int kk = 0; kk < timeZone.length; kk++) {
            TimeZone tz = TimeZone.getTimeZone(timeZone[kk]);
            df.setTimeZone(tz);
            System.out.println(df.format(now));
        }
    }
} -
Solaris 8 projects strange behavior
Hello,
This may look a bit off-topic, but I found this forum the closest to my post's theme.
I have some strange problems with sol8 projects:
For many hosts that mechanism works OK, but for some users on some boxes, especially for oracle processes, I see strange things - on the same host some processes are in the oracle project while some oracle processes are in the default project.
I had the same problem with some other java processes on different host.
I had some thoughts about different patch levels for those boxes, but I cannot see any difference. I cannot see any setproject for those processes either, so I have run out of ideas.
Can anybody remember whether there were any issues in sol8 regarding this problem?

1. In general, this kind of technique is something I've been using successfully for years. (Ben recently wrote up a very nice treatment of these "Action Engines" as a "Community Nugget.") So I don't start by expecting this to be a bug in the LV execution system.
2. Your description of the problem sounds almost backwards. You say you manually start the 2nd vi ("Config AD") *after* running the 1st vi ("Read AD"). Seems like you'd need to do the Config 1st and then do the Read, right? I kinda suspect you actually did it in the right order, but described it wrong.
3. The next likely scenario is that the Config failed, but you didn't trap the error and were unaware of it. Then it makes sense that the Read would also fail.
4. A couple issues I regularly deal with in these DAQ Action Engines is internal error handling. I often keep a shift register inside to store errors generated inside the Action Engine. But it can get a little tricky doing sensible things with both the internal error and any other error being wired in as input.
I said all that so I can say this: if you have complex nested case statements, or lots of different action cases to handle, double check that the task wire makes it from all the way from left shift register to right. Sometimes they get lost if they go through a case statement, the output tunnel is set to "use default if unwired", and 1 or more of the cases don't wire the output.
-Kevin P. -
Installing Oracle VM SPARC 3.1.1 on a T4-4 with Solaris 5.10 update 11 (1/13) - /etc/system is not updated with values - is this a manual update, or is there a required patch?
For /etc/system and /etc/project, for the primary/control ldom and all guest ldoms, is there any patch that should be installed to set memory settings when standing up guest ldoms or the primary/control ldom? Running 8.6.0.b ILOM.
On another system finding shmsys, hires_tick, semsys and a few exclude settings in /etc/system file for ldoms on another T4-4 running 8.5.0.c ILOM.
Hopefully I've just missed a patch.
- JC

/etc/system is by default "empty" and contains only commented parameters or instructions.
Patches will not change this file. The shmsys, hires_tick, semsys and the other settings, including the /etc/project customizations that you have on the other machine, were set manually or added during software configuration. -
Shminfo_shmmax in /etc/system does not match project.max-shm-memory
If I specify 'shminfo_shmmax' in /etc/system and have the system default in /etc/project (no change made), the size of 'project.max-shm-memory' is about 100 times larger than 'shminfo_shmmax'.
#more /etc/system // (16MB)
set shmsys:shminfo_shmmax=16000000
#prctl -n "project.max-shm-memory" -i project user.root
=> will display like below.
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 1.49GB - deny -
system 16.0EB max deny
1.49GB is about 100 times larger than 'SHMMAX'. If I add more entries to /etc/system, like below, max-shm-memory becomes even larger.
#more /etc/system
set shmsys:shminfo_shmmax=16000000
set semsys:seminfo_semmni=2000
set shmsys:shminfo_shmmni=2000
set msgsys:msginfo_msgmni=2048
After I reboot with the above /etc/system and no change to /etc/project (all defaults, no values added):
# prctl -n "project.max-shm-memory" -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 29.8GB - deny -
system 16.0EB max deny -
Can anyone shed light on how to configure SHMMAX in /etc/system correctly?

We saw similar behavior and opened a case with Sun.
The problem turns out to be that the mapping from the (deprecated) /etc/system tunables to the (new) project resource limits isn't always one-to-one.
For example, project.max-shm-memory gets set to shmsys:shminfo_shmmax * shmsys:shminfo_shmmni.
The logic here is that under the /etc/system tunings you might have wanted the maximum number of segments of the maximum size, so the system has to be able to handle that. Makes sense to some degree. I think Sun updated one of their info docs on the process at the end of our case to make this clearer. -
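That product rule reproduces both numbers reported earlier in this thread (shmmax 16000000 with the default shmmni of 100, then with shmmni raised to 2000):

```shell
# shmmax * shmmni in binary GB: matches the 1.49GB and 29.8GB
# values prctl showed (default shminfo_shmmni assumed to be 100).
awk 'BEGIN {
    printf "%.2fGB\n", 16000000 * 100  / (1024 * 1024 * 1024)
    printf "%.1fGB\n", 16000000 * 2000 / (1024 * 1024 * 1024)
}'
```

This prints 1.49GB and then 29.8GB, the two values seen in the prctl output above.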
Installation Woes on Solaris 10
I have looked thru all the issues posted so far and I have worked past those, my issue occurs during the installation of the Oracle JVM:
The installation is at 36% when I hit this error:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4096 bytes of shared memory.("java pool",.....)
ORA-06512: at "SYS.DBMS_JAVA", line 222
ORA-06512: at line 1
I've created a project for the oracle user and assigned a value for max shared memory:
# id -p oracle
uid=102(oracle) gid=1000(dba) projid=100(oracle)
# cat /etc/project | grep oracle
oracle:100::::project.max-shm-memory=(priv,536870912,deny)
I then activated this setting dynamically:
#prctl -n project.max-shm-memory -v 512mb -r -i project oracle
Even though I shouldn't have to reboot after this change, I went ahead and rebooted anyway.
I am trying to install 10.2.0.1 64-bit on Solaris 10 with latest recommended patch cluster as of 13-Jan-05.
Any ideas?
Thx,
CC

Well, I am using scripts generated by dbca; surely those allocate enough for it to function?
Also, under Solaris 10, memory parameters are no longer pulled from /etc/system.
-CC -
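One footnote to the thread above: a prctl -v ... -r change is transient and does not survive a reboot. A hedged sketch of making the same limit persistent with projmod on Solaris 10, assuming the project is named "oracle" as in the post (Solaris-only, not runnable elsewhere):

```shell
# Sketch only (Solaris 10): rewrite the attribute in /etc/project so
# the limit persists across reboots; -s substitutes, -K sets key/value.
projmod -s -K "project.max-shm-memory=(priv,536870912,deny)" oracle
```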
Unable to install SUN SMC on solaris 10
Hi,
I am trying to install SMC on Solaris 10. es-guisetup gives the errors below:
smcdatabase service is always in maintenance mode.
I am running postgres on 5433 port.
Please help me in this regard.
Here is the log file:
# cat core-db-setup_rtisifldhdesk.100929090636.6642
Started /var/opt/SUNWsymon/install/guibased-setup.1285765588453.sh at Wed Sep 29 09:06:36 EDT 2010.
Running on SunOS rtisifldhdesk 5.10 Generic_142900-03 sun4u sparc SUNW,Sun-Fire-V210.
-sh: initdb: not found
bash-3.00# cat gui_setup_rtisifldhdesk.100929090557
The system memory to be compared has mapped to : 8192
Skipping the Disable SNMP panel as a port other than 161 is specified as the agent port
The web server organization is given as : ADP
The web server location is given as : njjjjj
The Setup script will be executed as : /var/opt/SUNWsymon/install/guibased-setup.1285765588453.sh
stty: : Invalid argument
/var/opt/SUNWsymon/install/guibased-setup.1285765588453.sh: /var/opt/SUNWsymon/db/data/SunMC/pg_hba.conf: cannot create
/var/opt/SUNWsymon/install/guibased-setup.1285765588453.sh: /var/opt/SUNWsymon/db/data/SunMC/postgresql.conf: cannot create
SYMON_JAVAHOME is set to: /usr/java
JDK version is: "1.5.0_22-b03"
This script will help you to setup Sun Management Center 4.0.
Following layer[s] are installed:SERVER,AGENT
None of the layers is setup.
Following layer[s] will get setup: SERVER,AGENT
Database will be setup.
Following Addon[s] are installed:
Advanced System Monitoring,ELP Config-Reader Monitoring,Desktop,Service Availability Manager,Sun Fire Entry-Level Midrange System,Netra,DomMonit SPARC Enterprise Mx000,Dom DR SPARC Enterprise Mx000,PltAdmin SPARC Enterprise Mx000,Performance Reporting Manager,Solaris Container Manager,Sun Fire Midrange Systems Domain Administration,Dynamic Reconfiguration for Sun Fire High-End and Midrange Systems,Sun Fire Midrange Systems Platform Administration,Starfire Monitoring,Sun Fire High-End Systems Monitoring,Sun Enterprise 6500-3500 Servers/sun4d DR,Sun Enterprise 6500-3500 Servers/sun4d Config Reader,System Reliability Manager,Workgroup Server,Generic X86/X64 Config Reader.
Checking memory available...
Configuring Sun Management Center DB...
The Port 5433 for Sun Management Center DB has already been used.
Enter another port for Sun Management Center DB listener : : 2521
Initializing SunMC database.
disabled sunmcdatabase service from maintenance state during setup
check the smf database service log to know the reason
Failed to enable service sunmc-database
Database setup failed : db-start failed
Updating registry...
None of the base layers are setup.
No Addon is setup.
Following Addons are not yet setup: Advanced System Monitoring,ELP Config-Reader Monitoring,Desktop,Service Availability Manager,Sun Fire Entry-Level Midrange System,Netra,DomMonit SPARC Enterprise Mx000,Dom DR SPARC Enterprise Mx000,PltAdmin SPARC Enterprise Mx000,Performance Reporting Manager,Solaris Container Manager,Sun Fire Midrange Systems Domain Administration,Dynamic Reconfiguration for Sun Fire High-End and Midrange Systems,Sun Fire Midrange Systems Platform Administration,Starfire Monitoring,Sun Fire High-End Systems Monitoring,Sun Enterprise 6500-3500 Servers/sun4d DR,Sun Enterprise 6500-3500 Servers/sun4d Config Reader,System Reliability Manager,Workgroup Server,Generic X86/X64 Config Reader
Could not finish requested task.
Thanks
Sugunakar

Hi Sugunakar,
There are a few small things I can think of checking:
I am trying to install SMC on solaris 10

What version of Solaris 10 are you using? You need 11/06 (aka "Update 3") or later. Also, it looks like this is on a V210: how much memory and swap do you have ([requirements here|http://docs.sun.com/app/docs/doc/820-2215/deployment-72?l=en&a=view])? And have you [edited your /etc/project file|http://docs.sun.com/app/docs/doc/820-2216/chapter2-1000?l=en&a=view] to adjust the shared memory settings?
Have you applied any of the [patch sets|http://forums.halcyoninc.com/showthread.php?t=104]?
smcdatabase service is always in maintenance mode.

It would be interesting to see what that service log says. Look in /var/svc/log/application-management-sunmcdatabase:default.log for clues.
/var/opt/SUNWsymon/install/guibased-setup.1285765588453.sh: /var/opt/SUNWsymon/db/data/SunMC/pg_hba.conf: cannot create

Is there any reason why SunMC may not be able to write to /var (or /var/opt/SUNWsymon)? Maybe portions of /var are read-only (shared from a global zone, or over NFS)?
I've done a lot of SunMC pilot projects over the years: if you could use some help speeding up your eval just send me an email.
Regards,
[email protected] -
Shared memory: apache memory usage in solaris 10
Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100, but the RAM is soon exhausted - apache appears not to use shared memory under Solaris 10. Under the same version of apache on Solaris 9, I can fire up 100 apache StartServers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!
a) How or why does Solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?

Take a look at 'pmap -x' output for a process.
Basically it depend on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
Simply: if we run pmap / ipcs we can see a shared memory reference for our oracle database and ldap server. There is no entry for apache. But the total memory usage is far, far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)

pmap -x should be showing you exactly which pages are shared and which are not.
b) Under solaris 10, each apache process takes up precisely the memory reported in prstat - add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is - why on solaris 10 is apache not 'shared' but it is on solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the oracle database, which correctly shows up as a shared memory reference in ipcs. We can fire up thousands of apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical. Please tell me that there is something really simple we have missed!

On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
Darren -
Problem installing DB 10.2.0.1.0 on Solaris 10 (x86-64)
1. How do I set kernel parameters on Solaris 10 (x86-64) for installing DB 10.2.0.1.0?
When I run the OUI, there are errors after product-specific prerequisite checking:
Checking kernel parameters
Checking for BIT_SIZE=64; found BIT_SIZE=64. Passed
Checking for shmsys:shminfo_shmmax=4294967295; found no entry. Failed <<<<
Checking for shmsys:shminfo_shmmni=100; found no entry. Failed <<<<
Checking for semsys:seminfo_semmni=100; found no entry. Failed <<<<
Checking for semsys:seminfo_semmsl=256; found no entry. Failed <<<<
Check complete. The overall result of this check is: Failed <<<<
Problem: The kernel parameters do not meet the minimum requirements (see above).
Checking available swap space requirements ...
Expected result: 4028MB
Actual Result: 1462MB
Check complete. The overall result of this check is: Failed <<<<
Problem: The system does not have the required swap space.
Recommendation: Make more swap space available to perform the install.
I set those parameters with reference to:
http://www.dizwell.com/prod/node/235
3.3 Setting Kernel Parameters
Now, when I issue the following command with oracle user:
prctl -n project.max-shm-memory -i project oracle
project: 101: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 4.00GB - deny -
system 16.0EB max deny
prctl -n project.max-shm-ids -i project oracle
project: 101: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 100 - deny -
system 16.8M max deny
prctl -n project.max-sem-ids -i project oracle
project: 101: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 100 - deny -
system 16.8M max deny
I don't know how to check process.max-sem-nsems.
2. Originally, I wanted to install DB 10.2.0.2 for simulated testing. After installing DB 10.2.0.1.0, how do I upgrade to DB 10.2.0.2? What is the patch number?

Hi damorgan,
I did follow the Oracle Installation Guide, but the same errors persist.
And I think there are some errors and unclear places in the document. I'm not sure, because I'm not familiar with Solaris. Maybe the document gives a general guide but doesn't suit my situation. I'm wondering if there are some bugs in my Solaris or whether I really followed the wrong process...
I give the detailed process as follows:
Solaris 10 8/07 Operating System:
sol-10-u4-ga-x86-v1-iso.zip to sol-10-u4-ga-x86-v5-iso.zip
Oracle® Database Installation Guide
10g Release 2 (10.2) for Solaris Operating System (x86-64)
2.6 Configuring Kernel Parameters
http://download.oracle.com/docs/cd/B19306_01/install.102/b15704/pre_install.htm#BABGADGE
Oracle® Database Release Notes
10g Release 2 (10.2) for Solaris Operating System (x86-64)
4 Documentation Corrections and Additions
http://download.oracle.com/docs/cd/B19306_01/relnotes.102/b15703/toc.htm#CHDBAHCD
issue following commands with root:
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -g oinstall -G dba oracle
# passwd -r files oracle
# id -p
uid=0(root) gid=0(root) projid=1(user.root)
# id -a oracle
uid=100(oracle) gid=100(oinstall) groups=101(dba)
# su - oracle
$ id -p
uid=100(oracle) gid=100(oinstall) projid=3(default)
$ exit
# cat /etc/project
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
# prctl -n project.max-shm-memory -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 1006MB - deny -
system 16.0EB max deny
# prctl -n project.max-shm-memory -v 4gb -r -i project user.root
# prctl -n project.max-shm-memory -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 4.00GB - deny -
system 16.0EB max deny
# prctl -n project.max-shm-ids -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 128 - deny -
system 16.8M max deny
# prctl -n project.max-shm-ids -v 100 -r -i project user.root
# prctl -n project.max-shm-ids -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 100 - deny -
system 16.8M max deny
# prctl -n project.max-sem-ids -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 128 - deny -
system 16.8M max deny
# prctl -n project.max-sem-ids -v 100 -r -i project user.root
# prctl -n project.max-sem-ids -i project user.root
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 100 - deny -
system 16.8M max deny
# prctl -n project.project.max-sem-nsems -i project user.root
prctl: failed to get resource control for project.project.max-sem-nsems: Invalid argument
OK. Then I start another session logged in as the oracle user and begin installing the DB. Unfortunately, the same errors appear after the product-specific prerequisite check:
Checking kernel parameters
Checking for BIT_SIZE=64; found BIT_SIZE=64. Passed
Checking for shmsys:shminfo_shmmax=4294967295; found no entry. Failed <<<<
Checking for shmsys:shminfo_shmmni=100; found no entry. Failed <<<<
Checking for semsys:seminfo_semmni=100; found no entry. Failed <<<<
Checking for semsys:seminfo_semmsl=256; found no entry. Failed <<<<
Check complete. The overall result of this check is: Failed <<<<
Problem: The kernel parameters do not meet the minimum requirements (see above).
Checking available swap space requirements ...
Expected result: 4028MB
Actual Result: 1462MB
Check complete. The overall result of this check is: Failed <<<<
Problem: The system does not have the required swap space.
Recommendation: Make more swap space available to perform the install. -
Psrset and psradm vs. projects
I just took over admin of several Solaris 9 systems where the last admin walked out. I am still trying to iron out what is what here. On one system, the admin had left a startup script in /etc/rc3.d that used psrset and psradm to create a processor set and add processors to the set every time it booted. There wasn't a command to bind any processes to the set. At the same time, the system is using /etc/project where an oracle pool was created with a processor set. The 'pbind -q' command returns nothing. The 'poolbind -q' command returns oracle as the pool. The 'mpstat -p' command shows the set to contain the processors that were reported by 'pooladm' and it isn't the same set that is created with the startup script. It appears the system is using the /etc/project instead of the startup script.
My questions (and confusions) are as follows:
First, can you mix and match the /etc/project with psrset and psradm commands?
Second, is there any other bit of information I can use to make the determination that the /etc/project truly has the control and nothing has been left to the psrset and psradm commands at bootup? In other words, is the startup script doing anything to damage the project setup?
Should I just remove the psrset/psradm startup script?
Thanks in advance for any help in clearing up my confusion.

Hi SergioT,
You could refer to this thread I met before, I think you could get useful information:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/a82f5a19-240a-487d-942e-130de48e07b2/visual-studio-2013-support-for-windows-cewindows-phone?forum=visualstudiogeneral
Best Regards,
Jack