Libpam broken on Solaris 7?
Hi!
Sorry, but I could not find a more suitable place to submit bugs, so I am posting the message to this forum.
While playing with PAM modules on Solaris 7 (sparc/gcc 2.95.3) I found that some stacked modules don't work well. It turned out to be a problem with the Solaris libpam library: libpam seems unable to handle pam_set_item being called with the same address that pam_get_item just returned. A short example follows:
int pam_sm_chauthtok(pam_handle_t *pamh, int flags, int argc, const char **argv)
{
    char *x;
    int retval;

    retval = pam_set_item(pamh, PAM_OLDAUTHTOK, (const void *) "XY");
    fprintf(stderr, "%d ", retval);
    retval = pam_get_item(pamh, PAM_OLDAUTHTOK, (const void **) &x);
    fprintf(stderr, "%d '%s'\n", retval, x);

    /* Re-set the item using the very pointer pam_get_item returned. */
    retval = pam_set_item(pamh, PAM_OLDAUTHTOK, (const void *) x);
    fprintf(stderr, "%d ", retval);
    retval = pam_get_item(pamh, PAM_OLDAUTHTOK, (const void **) &x);
    fprintf(stderr, "%d '%s'\n", retval, x);
    return PAM_SUCCESS;
}
I got:
0 0 'XY'
0 0 ''
I would consider this a feature rather than a bug, but it only affects the PAM_OLDAUTHTOK and PAM_AUTHTOK items; PAM_TTY, for example,
works fine.
Mirek
The Sun Freeware site:
http://sunfreeware.com/
has Perl 5.6.1 listed in the Solaris 7, Solaris 8, and Solaris 2.6 sections.
Similar Messages
-
Root RAID-1 support totally broken in Solaris 10
Has anyone else noticed yet that support for recovery of a RAID-1 on the root partition is totally broken under Solaris 10? While one can set up a set of RAID-1 partitions for /, swap and /export/home under Solaris 10 in the same manner as Solaris 9 and the resulting RAID-1 is functional, its recovery process is not! If you power down the machine and remove one of the IDE drives making up the RAID-1 (each on their own IDE bus of course), the machine will kernel panic as expected on bootup. However unlike Solaris 9 which allows you to still login to delete the metastat database from the missing drive, Solaris 10 immediately reboots the machine upon the kernel panic. This makes the use of RAID-1 not only non-functional but actually dangerous since it doubles the chance that you will have a drive failure which will make the machine unbootable. Back to Solaris 9 for me.
Jack

I think you've been unfortunately hit by:
6215065 Booting off single disk from mirrored root pair causes panic reset
I'm currently putting a fix into the current development
release and backporting it to S10 as soon as possible; it will eventually
be available via a ufs patch.
frankB -
E1000g driver broken on solaris 10 u6; how to report this bug correctly?
Question also posted in OpenSolaris forums:
[http://www.opensolaris.org/jive/thread.jspa?messageID=329326]
While upgrading a T2000 server to Solaris 10 Update 6, I found that my jumbo-frame
interfaces report errors on reboot.
One frequent cause was that the update replaced the /kernel/drv/e1000g.conf file
(I had changed the MaxFrameSize line to enable jumbo frames). This non-persistence
of the file is annoying but well known (bonus question: can I make changes to
this file persistent?)
However, the system still refused to set MTU = 9000 on the interfaces; by
default it assigns MTU = 8978 (instead of 16384 or 10244 as expected from
other systems; we only need 9000 though). Googling showed that a few people
have also discussed this regression.
Copying in the driver file (/kernel/drv/sparcv9/e1000g) from Solaris 10 U4
worked (the network comes up and the needed MTU is assigned). This doesn't seem
like a supported, "enterprise" solution, so I want this bug to be known and fixed by
Sun in the main tree.
I haven't found any numbered bug report on this matter. How can I submit a bug
for this regression in Solaris 10 (I couldn't reproduce the problem in OpenSolaris)?
Can someone with access and skill please post the bug for us? :)
e1000g driver module versions involved:
sol10u4 (working): Intel PRO/1000 Ethernet 5.1.11
sol10u6 (bad MTU limit): Intel PRO/1000 Ethernet 5.2.8
//Jim Klimov

Hello again, Mr. Cohen, and thank you for your corrections to my style.
No offense taken, since it makes sense when you put it this way,
and the point is taken - I'll try to be that specific next time. Thanks.
Returning to the problem at hand: with the abundance of
Sun's tools for submitting bugs (including those you pointed out above),
I believed I might not know of yet another bug tracker.
I also thought that "support cases" differ from "bugs", which arise,
taking my example, when Sun (or Intel?) takes a working e1000g driver
and "fixes" it so it no longer works, and then Sun releases it this way into
the commercial version of the OS through all the presumed
QA, and wants commercial users to pay for fixing it back. That's
the part of the logic I found somewhat flawed ;)
So yes, you can say that I'm "cheap" to pay for Sun fixing something
they broke themselves.
I originally posted this report on OpenSolaris forum in hope someone
would point out my misconfiguration or confirm that the problem exists
for others.
That forum (and/or a bug-tracker search for the keyword e1000g) also has
a number of posts complaining about the many ways this one
e1000g driver has been broken lately in the 90s-100s OpenSolaris builds. Some
posters even went as far as to suggest that someone review all the work
of the engineers and managers who are responsible for these recent
flawed putbacks, or even take some disciplinary action.
I wouldn't go that far, but I was still saddened to find some other bug
leak into the kinda-stable Solaris.
//Jim on a mobile -
Stat() call broken on Solaris 8 x86
Greetings all...
The stat() function call is broken! Basically, it does not return when checking for nonexistent files in the automounted directories (/home, /net, /xfn).
Sample code snippet:
stat("/home/nonexistantfile", NULL);
The above should return -1, and it works fine on Solaris 8 on SPARC hardware. The problem is that it does not return on x86 hardware; it just hangs right there! I'm using the 04/01 release and the gcc 2.95.3 compiler. A search of the forums does not return anything, so I'm assuming it's a new bug. What's the bounty for finding a fresh one? :)
Brian

Tried that... the card was recognized. I tried changing the values of the SCSI advanced settings during bootup (Ctrl-A), but to no avail.
Please advise, and thanks. -
TCP connection for DHCP failover frequently are broken in Solaris 10
Hi
We have two DHCP servers installed on Solaris 10 and configured as a failover pair. Currently we find that the TCP connection for the DHCP failover protocol is frequently broken. It looks as though the primary DHCP server actively sends a FIN to the secondary, although in general this TCP connection should stay alive. On the other hand, the connection cannot be completely closed: the FIN_WAIT_2 state on the primary and the CLOSE_WAIT state on the secondary last for a long time.
Could Solaris 10 cause this fault? Is it a known bug in the OS?
OS info:
-bash-3.00$ cat /etc/release
Solaris 10 5/08 s10s_u5wos_10 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 24 March 2008
-bash-3.00$
-bash-3.00$
-bash-3.00$ uname -a
SunOS edns1 5.10 Generic_142900-03 sun4v sparc SUNW,Netra-T5220
TCP connection info:
Primary DHCP Server:
2012 08 29 03:41:43
PING 172.25.6.137: 56 data bytes 64 bytes from edns2 (172.25.6.137): icmp_seq=0. time=0.678 ms
remote refid st t when poll reach delay offset disp
==============================================================================
*idns1 195.26.151.151 3 u 45 1024 377 0.75 -0.071 0.05
+idns2 195.26.151.151 3 u 162 1024 377 0.93 0.169 0.08
clusternode1-pr 0.0.0.0 16 - - 1024 0 0.00 0.000 16000.0
+clusternode2-pr idns1 4 u 406 1024 376 0.49 -0.154 15.12
172.25.6.133.647 172.25.6.137.58107 49640 0 49640 0 ESTABLISHED
172.25.6.133.647 *.* 0 0 49152 0 LISTEN
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2012 08 29 03:41:47
PING 172.25.6.137: 56 data bytes 64 bytes from edns2 (172.25.6.137): icmp_seq=0. time=0.535 ms
remote refid st t when poll reach delay offset disp
==============================================================================
*idns1 195.26.151.151 3 u 49 1024 377 0.75 -0.071 0.05
+idns2 195.26.151.151 3 u 166 1024 377 0.93 0.169 0.08
clusternode1-pr 0.0.0.0 16 - - 1024 0 0.00 0.000 16000.0
+clusternode2-pr idns1 4 u 410 1024 376 0.49 -0.154 15.12
172.25.6.133.647 172.25.6.137.58107 49640 0 49640 0 FIN_WAIT_2
172.25.6.133.647 *.* 0 0 49152 0 LISTEN
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Secondary DHCP Server:
2012 08 29 03:41:41
PING 172.25.6.133: 56 data bytes 64 bytes from edns1 (172.25.6.133): icmp_seq=0. time=1.26 ms
remote refid st t when poll reach delay offset disp
==============================================================================
*idns1 195.26.151.151 3 u 450 1024 377 0.92 -0.067 0.06
+idns2 195.26.151.151 3 u 552 1024 377 0.96 0.237 0.08
+clusternode1-pr idns1 4 u 360 1024 377 1.85 -0.528 1.51
clusternode2-pr 0.0.0.0 16 - - 1024 0 0.00 0.000 16000.0
172.25.6.137.647 *.* 0 0 49152 0 LISTEN
172.25.6.137.58107 172.25.6.133.647 49640 0 49640 0 ESTABLISHED
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2012 08 29 03:41:45
PING 172.25.6.133: 56 data bytes 64 bytes from edns1 (172.25.6.133): icmp_seq=0. time=1.36 ms
remote refid st t when poll reach delay offset disp
==============================================================================
*idns1 195.26.151.151 3 u 454 1024 377 0.92 -0.067 0.06
+idns2 195.26.151.151 3 u 556 1024 377 0.96 0.237 0.08
+clusternode1-pr idns1 4 u 364 1024 377 1.85 -0.528 1.51
clusternode2-pr 0.0.0.0 16 - - 1024 0 0.00 0.000 16000.0
172.25.6.137.647 *.* 0 0 49152 0 LISTEN
172.25.6.137.58107 172.25.6.133.647 49640 0 49640 0 CLOSE_WAIT
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thanks! -
Broken pipe - Solaris 10, Sun T2000
Hi,
I have some problems..
So, there's one system I'm working with. It's made of two parts (both written in Java):
- a "normal" application that works as a business-logic server,
- a web application (built on the Turbine framework, running on a Resin application server) that works as the presentation layer.
These two parts are connected via a TCP/IP socket connection - we use an ObjectOutputStream.
It normally works just fine (it's running on several different systems), but recently it was installed on a new server machine and started throwing strange exceptions. Below is information about the server machine, the exceptions, and the code making the connections.
We think it may be a problem with the configuration of the Solaris kernel or TCP/IP stack, but we have no idea how to fix it.
SERVER CONFIGURATION:
Machine: Sun T2000
System: Solaris 10
JVM: 1.4 or 1.5 (both were tested)
EXCEPTION:
ERROR 2006-11-28 11:28:59,377 [pl.com.ttsoft.vixen.currentday.server.ClientServiceThread] - IOException while sending data to the client. Closing output stream.
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1682)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1591)
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1401)
at java.io.ObjectOutputStream.writeUnshared(ObjectOutputStream.java:371)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.sendMessageToClient(ClientServiceThread.java:679)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.sendModificationData(ClientServiceThread.java:432)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.serveDataModification(ClientServiceThread.java:308)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.run(ClientServiceThread.java:185)
ERROR 2006-11-28 11:28:59,400 [pl.com.ttsoft.vixen.currentday.server.ClientServiceThread] - IOException while closing client connection.
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1682)
at java.io.ObjectOutputStream$BlockDataOutputStream.flush(ObjectOutputStream.java:1627)
at java.io.ObjectOutputStream.flush(ObjectOutputStream.java:666)
at java.io.ObjectOutputStream.close(ObjectOutputStream.java:687)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.closeClientConnection(ClientServiceThread.java:706)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.sendMessageToClient(ClientServiceThread.java:691)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.sendModificationData(ClientServiceThread.java:432)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.serveDataModification(ClientServiceThread.java:308)
at pl.com.ttsoft.vixen.currentday.server.ClientServiceThread.run(ClientServiceThread.java:185)
STANDALONE APPLICATION CODE:
/* I've simplified it */
/* OPENING CONNECTION */
Socket socket = new Socket(hostName, hostPort);
ObjectOutputStream outStream = new ObjectOutputStream( socket.getOutputStream() );
ClientDescriptor clientDescriptor =
new ClientDescriptor(socket, outStream, info.getLogin(), info.getSystem(),
info.getDay(), info.getCurrentDaySessionId());
/* SENDING DATA */
private boolean sendMessageToClient( MessageDTO messageDTO, ClientDescriptor clientDescriptor )
        throws CurrentDayException {
    logger.info( "sendMessageToClient " + messageDTO.getClientId() );
    ObjectOutputStream outputStream = clientDescriptor.getOutputStream();
    try {
        outputStream.writeUnshared( messageDTO ); // THIS WRITE THROWS EXCEPTIONS
        outputStream.flush();
    } catch ( IOException exc ) {
        logger.error( " IOException while sending data to the client. Closing output stream. ", exc );
        // close client connection
        closeClientConnection( clientDescriptor );
        return false;
    }
    return true;
}

private void closeClientConnection( ClientDescriptor clientDescriptor ) {
    try {
        synchronized ( clientDescriptorMap ) {
            clientDescriptor.setState( ClientDescriptor.TO_REMOVE );
            clientDescriptor.getOutputStream().close();
            clientDescriptor.getSocket().close();
        }
    } catch ( IOException exc ) {
        logger.error( " IOException while closing client connection. ", exc );
    }
}
/* WEB APPLICATION */
/* RECEIVING DATA */
/* it routes this data to a 3rd-party applet */
SocketChannel vixenSocket = (SocketChannel) key.channel();
Socket clientSocket = (Socket) objClientServers.get(key);
// read the information from the socket...
ByteBuffer buffer = ByteBuffer.allocate(16 * 1024);
while (vixenSocket.read(buffer) > 0) {
    buffer.flip();
    byte[] bytespassed = new byte[buffer.remaining()];
    logger.debug("buffer.remaining() (1)=" + buffer.remaining());
    buffer.get(bytespassed, 0, bytespassed.length);
    clientSocket.getOutputStream().write(bytespassed);
    buffer.compact();
}
buffer.flip();
while (buffer.hasRemaining()) { // make sure the buffer is fully read
    byte[] bytespassed = new byte[buffer.remaining()];
    logger.debug("buffer.remaining() (2)=" + buffer.remaining());
    buffer.get(bytespassed, 0, bytespassed.length);
    clientSocket.getOutputStream().write(bytespassed);
}
buffer.clear();

We think it may be a problem with the configuration of the Solaris kernel or TCP/IP stack, but we have no idea how to fix it.
Thanks for the help,
Ziemek Obel

We solved the problem by changing the TCP window parameters on the T2000 server:
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_conn_req_max_q 81920
ndd -set /dev/tcp tcp_conn_req_max_q0 81920
ndd -set /dev/tcp tcp_time_wait_interval 60000
Arkadiusz Malinowski -
Sendmail broken from Solaris 10 11/06 to Solaris 10 8/07 - port 25 broken
I am in the process of building a new solaris 10 8/07 server to replace a solaris 10 11/06 server. Both are running.
Sendmail on both has changes limited to:
correcting /etc/hosts to include mailhost entry
dns server pointing to localhost as mailhost
local-host-names set on each
aliases set up and newaliases run
Otherwise, the configuration files are standard and both are running the as-shipped main.cf.
svcadm enable -r network/smtp seems to work fine and the services show as enabled
The 11/06 server has run fine for close to a year.
The 8/07 server has mconnect working fine to localhost, but I get connection refused when I try to come in via the IP address.
TCP Wrappers are not running, but even so I have hosts.allow set with lots of variations of ALL: ALL.
No errors in /var/log/syslog.
How do I figure out what is happening on the port and why it is not connecting?
# mconnect localhost
connecting to host localhost (127.0.0.1), port 25
connection open
220 mailhost.molten-rock.com ESMTP Sendmail 8.13.8+Sun/8.13.8; Sun, 4 Nov 2007 16:44:25 +1300 (NZDT)
# mconnect magma
connecting to host magma (192.168.25.250), port 25
connect: Connection refused
# uname -a
SunOS magma 5.10 Generic_127112-02 i86pc i386 i86pc
# svcs | grep smtp
online         15:45:50 svc:/network/smtp:sendmail

Thanks, but I had found a previous discussion with this hint and already applied it.
svccfg -s sendmail listprop shows config/local_only = false
Yes, I would really love to fix the fault, but what I would really like is some hints on how to debug ports under SMF control. -
Is "version" broken in Solaris Studio 12.2?
With 12.1, the "{Studio install path}/bin/version" command, when run with no arguments, produces output indicating that Sun Studio 12.1 is installed. With 12.2, there is no output that a script could use to determine which version of Studio is installed. It seems that when version is invoked with no arguments, it attempts to read from the directory {Studio install path}/inventory. Our 12.1 installation has this directory, but it is not present in our 12.2 installation.
Does anyone have a Studio 12.2 installation on Solaris 10 that does include an inventory directory?
This is what my output from version looks like with 12.2:
Machine hardware: sun4u
OS version: 5.10
Processor type: sparc
Hardware: SUNW,SPARC-Enterprise
The following components are installed on your system:
And this is what it looks like with 12.1:
Machine hardware: sun4u
OS version: 5.10
Processor type: sparc
Hardware: SUNW,SPARC-Enterprise
The following components are installed on your system:
Sun Studio 12 update 1
Sun Studio 12 update 1 C Compiler
Sun Studio 12 update 1 C++ Compiler
Sun Studio 12 update 1 Tools.h++ 7.1
Sun Studio 12 update 1 C++ Standard 64-bit Class Library
Sun Studio 12 update 1 Garbage Collector
Sun Studio 12 update 1 Fortran 95
Sun Studio 12 update 1 Debugging Tools (including dbx)
Sun Studio 12 update 1 IDE
Sun Studio 12 update 1 Performance Analyzer (including collect, ...)
Sun Studio 12 update 1 Performance Library
Sun Studio 12 update 1 Scalapack
Sun Studio 12 update 1 LockLint
Sun Studio 12 update 1 Building Software (including dmake)
Sun Studio 12 update 1 Documentation Set
Sun Studio 12 update 1 /usr symbolic links and GNOME menu item
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/cc": Sun C 5.10 SunOS_sparc 2009/06/03
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/CC": Sun C++ 5.10 SunOS_sparc 2009/06/03
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/f90": Sun Fortran 95 8.4 SunOS_sparc 2009/06/03
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/dbx": Sun DBX Debugger 7.7 SunOS_sparc 2009/06/03
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/analyzer": Sun Analyzer 7.7 SunOS_sparc 2009/06/03
version of "/opt/sunstudio12.1/bin/../prod/bin/../../bin/dmake": Sun Distributed Make 7.9 SunOS_sparc 2009/06/03

Nik - thanks for confirming that we aren't alone in seeing this behavior.
Chris, thanks for explaining that this is the behavior that we should expect. I spent several hours considering if something might have gone wrong during the installation of Solaris Studio. My colleague who actually performed the installation will likely be almost as gratified as I am to review your explanation.
By way of feedback, I would like to offer the following comments.
1) It's regrettable that this functionality was removed without any sort of deprecation notice (at least, none that I could find, and I looked pretty hard). My colleagues and I spent several hours wondering what we did wrong since this wasn't working. Truss shows that the 12.2 bin/version tool still looks for the "inventory" directory and so the fact that it was missing suggested to me that something might have gone wrong with our installation.
2) While the alternative pkginfo command you suggest is, as you described it, "parsable by humans", it's not especially script-friendly. A build script that only has the simple task of warning that you are using an unsupported older version of the Solaris Studio requires many lines of new logic to deal with this change, and one that has the more complex task of choosing different compiler args depending on the version of Sun/Solaris Studio that is being used really has to work overtime just to accommodate the inconsistent methods of identifying which version is in use.
3) It would be really nice if there were some simple, script-friendly way to discover what version is in use, if that method were consistently supported from one release to the next, and if it reported version numbers in ways that lend themselves to interpretation by scripts. I'm sure there's some korn guru out there who can write one line of korn-shell code that will be able to conclude that pkginfo's value of "SPRO-12u2-cc" is greater than the 12.1 version's report of "Sun Studio 12 update 1", but I'm a "C" guy, not a script guru, so I wind up with a paragraph of korn-shell code to pull this off. Can't we just have some program or (ideally) cc argument that spits out a string like "12.2"? It would be especially sweet if it could do this on the first line of output (or even just at a consistent offset from the first line).
Sorry for the rant, I really do appreciate your explanation, and I hope my comments have not offended you. -
MDNS/Bonjour port 0 service registration broken on Solaris 11 Express
I'm using Solaris 11 Express on x86-64, and found a pretty bad bug with registering mDNS/Bonjour services. Most service registration works fine, but registering a service that uses port 0 does not work. It claims to work, but the service never gets registered, and can't be browsed for.
It's easy to reproduce: in one window, run
$ dns-sd -B test.tcp
You should see 'Browsing for test.tcp', and no services found.
Then, in another window, run
$ dns-sd -R solaris test.tcp local. 0
to register a test.tcp service on port 0. It seems to succeed, but the first window doesn't show the new service.
Now, kill the 'dns-sd -R' registration process with ctrl-c, and run
$ dns-sd -R solaris test.tcp local. 1
This time, the first window will show the service. Registering on port 1 works fine, but port 0 does not.
Port 0 service registration works fine on every other OS, and many services use port 0 by convention if they aren't advertising an actual service. For example, netatalk can register a device-info.tcp service to mimic a certain Mac model (so your Solaris server shows up with an Xserve icon). This "service" uses port 0, and doesn't work on S11 Express unless the source is changed to a non-zero port.
Can someone test this out on Solaris 11 EA and other versions, to see if it's been fixed or not?
Edited by: 887058 on Sep 22, 2011 11:57 PM

It is registering fine with port 0. You can test this by trying to register the same service on another host on the local network with the same service name and port; you will see the service name automatically renamed.
For example:
root@estrada:~# dns-sd -R solaris test.tcp local. 0
Registering Service solaris._test._tcp.local. port 0
Got a reply for solaris._test._tcp.local.: Name now registered and active
root@testz:~# dns-sd -R solaris test.tcp local. 0
Registering Service solaris._test._tcp.local. port 0
Got a reply for solaris (2)._test._tcp.local.: Name now registered and active
^C
You can also query and see the SRV record for it:
# dns-sd -Q solaris._test._tcp.local. SRV
Timestamp A/R Flags if Name T C Rdata
23:38:58.949 Add 2 2 solaris._test._tcp.local. 33 1 21 bytes: 00 00 00 00 00 00 07 65 73 74 72 61 64 61 05 6C 6F 63 61 6C 00
It appears to me that the service is simply not seen in the service-browse call. These port-0 registrations are used to indicate
that the service is not available on the host, so this could be by design. I have tested the same on Mac OS X 10.6.8 and observe
the same results.
Rishi -
Solaris 10 root RAID-1 support totally broken!
I have been configuring a Sun Blade 1500 with two IDE drives (each on its own IDE bus) as RAID-1 for the /, swap and /export/home partitions. This has worked well, and I have always been able to test the RAID-1 recovery ability of such a configuration. However, I am very unhappy to report that this functionality is totally broken in Solaris 10. For example, if you disconnect the second IDE drive of the RAID and reboot, the machine kernel panics as expected. However, unlike Solaris 9, which still allows you to log in for maintenance and delete the metadevice state database entries for the missing drive (thus making the machine bootable without a kernel panic), under Solaris 10 the system simply restarts immediately upon the kernel panic. This makes the RAID-1 setup more than just useless, actually dangerous, since it now doubles the probability that you will have a drive failure and end up with an unbootable machine. I am very glad I tested the RAID-1 recovery ability before I deployed this machine. Back to Solaris 9 for me!
Jack

#device          device            mount          FS    fsck  mount    mount
#to mount        to fsck           point          type  pass  at boot  options
/dev/md/dsk/d3 /dev/md/rdsk/d3/ /devel ufs 1 no -
/dev/md/dsk/d6 /dev/md/rdsk/d6/ /prod ufs 1 no -
/dev/md/dsk/d9 /dev/md/rdsk/d9/ /export/home ufs 1 no -
Perhaps the problem is the mount-at-boot flag being set to "no" for the /devel, /prod and /export/home filesystems? If your system is up, try mounting these filesystems; if that works, set the mount-at-boot flag to "yes".
--a -
Solaris 8 and PCMCIA on Toshiba Tecra 520 CDT
I am having problems with PCMCIA services under the x86 version on my Toshiba Tecra 520 CDT laptop. It will not recognize the network card(s), or any PCMCIA cards for that matter. Do I need to load a specific driver or turn on a PCMCIA service, or something to that effect, for the PCMCIA slot to work? I have heard that the Toshiba is the best-supported laptop model for x86. Could anyone give me some advice or point me to the right website or newsgroup? Please reply or contact me directly at [email protected] Much appreciation!
I believe that PCMCIA is broken in Solaris 8.
I can't seem to get networking to work on my Dell either.
Good Luck,
Mike -
Deal-breakers for real use of Solaris 11 Express
I run Solaris 10 U9 for my home 12TB NAS box - based on Supermicro H8SSL-i2 motherboard (ServerWorks HT1000 Chipset and Dual-port Broadcom BCM5704C) and their 8-port SATA2 PCI-X card (AOC-SAT2-MV8). It's a great (but aging) platform and a rock solid OS with the unbeatable ZFS volume manager/filesystem.
However, despite my willingness to run Solaris 11 Express in this role, I can't because of these deal-breakers:
1) Lack of a full-featured installer that allows me to lay out or preserve existing partitions the way I want. Making /var a separate file system is a must. Ideally, I'd be able to run multiple versions of Solaris on the same box by customizing grub, and use my ZPOOLs on either Solaris 10 or 11 Express while I learn the new OS.
2) Lack of support for the Broadcom BCM5704C dual-port gigabit NIC (and others), which work wonderfully under Solaris 10, but are badly broken under Solaris 11 Express. I know I could disable the on-board Broadcom NICs and go buy an Intel card - but why the need for this? Won't there be a fix for Broadcom NICs?
3) Lack of support for modern, generic, server-class motherboards and PCI-e multi-port SATA/SAS cards. I wonder about the future for Solaris without support for modern, affordable x64 server hardware.
Maybe I'm missing the point and Solaris 11 Express is only intended to be run as a virtual machine under VBox or VMware. But it would sure be nice to be able to run it on my real hardware - even if it is just a small hobbyist rig. Any suggestions?
Regards,
Mike

In Solaris 11, you get a separate /var by default. If you update from Solaris 11 Express to Solaris 11, this transition doesn't happen automatically. If you decide to tackle it on your own, you need to be sure that it is done in a way that beadm, pkg, and other consumers of libbe will handle properly. I would recommend something along the lines of the following. This is untested and may break your system - prove it out somewhere unimportant first.
Do the work in a new boot environment so you reduce the likelihood that you will break things in an unrecoverable way.
# beadm create sepvar
# beadm mount sepvar /mnt

Figure out the name of the root dataset of the new boot environment, then create a var dataset as a child of that.

# rootds=$(zfs list -H -o name /mnt)
# zfs create -o mountpoint=/var -o canmount=noauto $rootds/var

Mount this new /var and migrate the data.

# mkdir /tmp/newvar
# zfs mount -o mountpoint=/tmp/newvar $rootds/var
# cd /mnt/var
# mv $(ls -A) /tmp/newvar

Unmount and remount.

# umount /tmp/newvar
# beadm unmount sepvar
# beadm mount sepvar /mnt

At this point /mnt/var should be a separate dataset from /mnt. The contents of /mnt/var should look just like the contents of /var, aside from transient data that has changed while you were doing this. Assuming that is the case, you should be ready to activate and boot the new boot environment.

# beadm activate sepvar
# beadm unmount sepvar
# init 6 -
Question: WebLogic 4.0 Clustering on Solaris 2.6
I've read through several of the postings about Clustering Weblogic Servers
but there are still a couple of points that aren't clear. We have a Java application
that is going to use two Weblogic Application servers to go against a Sybase
database. We decided to use two Single CPU Weblogic servers in a clustered
configuration for load balancing and in the event one goes down or needs to
be taken off line. Each server currently has two NIC's. One NIC is connected
to the LAN where our client PC's are located and the second NIC is going to
connect to an isolated Network for the Multicast communication and an NFS
mount for the Weblogic servers. Eventually we might add a third NIC in each
server that will connect to the network that the Database servers reside on but
for now they will use the same NIC as the Client PC's.
The part that isn't clear is the portion about the Multicast addresses. In some
messages people try to assign and bind the Mutlicast address to the NIC and
in others they just use a normal IP address and it looks as if the Mutlicast
address is just configured in the Weblogic properties file. I would like to use
the second NIC in the servers for the NFS mount and the Multicast communication
between the servers. Is there anything special to configure this on a SUN solaris
server? If you have any ideas please let me know.
Regards,
Robert
There is a bug report that makes me think this is broken in Solaris 2.6, but
in theory the following should work:
Get rid of '/usr/sbin/route add -interface -netmask "240.0.0.0" "224.0.0.0"
"$mcastif"' in /etc/init.d/inetsvc, which sets up the "standard" multicast
address space starting at 224.0.0.0 with a mask of 240.0.0.0.
Add to your taste:
/usr/sbin/route add -interface -netmask "255.0.0.0" "235.0.0.0"
"host1.foo.com"
/usr/sbin/route add -interface -netmask "255.0.0.0" "236.0.0.0"
"host2.foo.com"
/usr/sbin/route add -interface -netmask "255.0.0.0" "237.0.0.0"
"host3.foo.com"
Where host1.foo.com, host2.foo.com, etc. are in your local hosts file and
resolve to the local IP address of the interface you want to bind each
multicast address block to.
netstat -r should give you the visual confirmation that things are
configured correctly...
Try snoop on a different machine to see if things are working on the wire:
snoop -d hme0 multicast | grep -v ETHER
Where hme0 is the ethernet interface in the proper Ethernet layer 2 domain
that you want to analyze. Make sure you see the traffic you expect.
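The per-block route additions above all follow one pattern, so they can be collected into a small dry-run script. A sketch (the hostnames are the same placeholder examples as above; the script only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch of the recipe above: print one `route add` per /8 multicast
# block, each pinned to a host name that resolves to the desired NIC's
# address. Remove the `echo` to actually install the routes (needs root).
add_mcast_route() {
  block="$1"; host="$2"
  echo /usr/sbin/route add -interface -netmask 255.0.0.0 "$block" "$host"
}

add_mcast_route 235.0.0.0 host1.foo.com
add_mcast_route 236.0.0.0 host2.foo.com
add_mcast_route 237.0.0.0 host3.foo.com
```

Keeping the `echo` lets you eyeball the exact commands before committing them on each machine.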
Jim Hayes
mailto:[email protected]
Vinny Carpenter <[email protected]> wrote in message
news:[email protected]...
> Hi Robert. If you wish to keep the multicast traffic off your main
> network, why don't you just move the WebLogic servers into their own
> network segment? We keep all of our WebLogic servers in their own
> network segment and it works great. Hope this helps.
>
> --Vinny
>
> "Robert L. Doerr" wrote:
>
> > I got a message from our local WebLogic rep, and he said (according to
> > his inside support) that what I am trying to do cannot be done.
> > Apparently the multicast traffic has to use the same NIC that the
> > clients use to access the WebLogic server. There is no way to use an
> > isolated LAN for the multicast packets like I originally wanted to do.
> > I can still use the second NIC to go directly to the database servers
> > on the back end, though. If anyone does know of a way to make the
> > multicast packets use a specific NIC, please let me know.
> >
> > Regards,
> >
> > Robert
> >
> > "Robert L. Doerr" wrote:
> >
> > > Do you mean that the WLS looks for multicast broadcasts on the same
> > > port it expects to get requests from the clients? Without clustering
> > > enabled, the WLS is listening on port 7003 on the primary NIC for
> > > communication with our Java application on the client. I thought that
> > > the multicast traffic was on a different port. If it uses the same
> > > port, then how can we tell the WLS to use the primary NIC for
> > > communication and the second NIC strictly for the multicast traffic?
> > > I'm still not clear on this issue.
> > >
> > > Regards,
> > >
> > > Robert
> > >
> > > Sazi Temel wrote:
> > >
> > > > Hi Robert, you can use two or more NICs per server, or you can bind
> > > > multiple addresses to the same NIC. Every WLS server that is a
> > > > member of the cluster (regardless of whether a server has a single
> > > > NIC or multiple NICs, and regardless of whether a WLS server has its
> > > > own IP bound to its own NIC or shares a NIC with other WLS servers)
> > > > must listen on the same port; you cannot have servers in a cluster
> > > > listening on different ports. Once the servers are configured as
> > > > part of the cluster, they will use multicasting to communicate with
> > > > each other. Note also that you need a license for the cluster
> > > > configuration; you cannot combine multiple license files to make a
> > > > cluster license file. Hope this will help.
> > > >
> > > > Regards,
> > > > --Sazi
> > > >
> > > > "Robert L. Doerr" wrote:
> > > >
> > > > > I should clarify this a little better:
> > > > >
> > > > > Each WebLogic server has a NIC that it will use to communicate
> > > > > with all of the clients on the LAN. The address of that NIC
> > > > > matches the one in the license file, and it is expecting requests
> > > > > to come in on port 7003. By default it looks as if the WebLogic
> > > > > server looks for multicasts on port 7001 of that same NIC. Since
> > > > > we don't want that multicast traffic on the regular network with
> > > > > the clients (it has no reason to be there), we want all the
> > > > > multicast traffic to go in and out of another NIC with a different
> > > > > network address. Unless that second address is in the license
> > > > > file, can we do this? I haven't seen any messages or notes
> > > > > relating to this issue. It sounds like most people only use one
> > > > > NIC per server.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Robert
> > > > >
> > > > > "Robert L. Doerr" wrote:
> > > > >
> > > > > > Thanks for the response. Can you control which NICs multicast
> > > > > > broadcasts will use? We would like to keep all of the multicast
> > > > > > traffic off the normal LAN and use the extra NICs for this.
> > > > > >
> > > > > > Robert
> > > > > >
> > > > > > Sazi Temel wrote:
> > > > > >
> > > > > > > Your server should bind to a "normal" IP address... Use the
> > > > > > > multicast address for clustered servers' communication. In
> > > > > > > most cases you should do nothing for the multicast address,
> > > > > > > since if you do not assign one, WLS will use the default
> > > > > > > (237.0.0.1).
> > > > > > >
> > > > > > > --Sazi
> > > > > > >
> > > > > > > "Robert L. Doerr" wrote:
> > > > > > >
> > > > > > > > [original question snipped; it is quoted in full at the top of this thread]
> > > > > >
> > > > > > --
> > > > > > ------------------------------------------------------------
> > > > > > Robert L. Doerr (MCNE, MCP, A+)
> > > > > > 26308 Cubberness
> > > > > > St. Clair Shores, MI 48081
> > > > > > Tel: (810) 777-1313
> > > > > > e-mail: [email protected]
> > > > > > WEB Site: http://www.robotswanted.com
> > > > > > "Keeping Personal Robots alive!"
> > > > > > Heathkit HEROS (Jr, 1, & 2000), Androbots, & MAXX STEELE.
> > > > > > ------------------------------------------------------------
> > > > >
> > >
> >
>
-
Trusted Extensions not installing correctly - Solaris 10 x86 11/06
I've installed Solaris 10 11/06 x86 selecting the "development" install, and I assigned 7GB to the root slice and 3GB to /export/home. The install goes fine, so I then install Trusted Extensions. After installing Trusted Extensions and rebooting, the Trusted Extensions CDE desktop is not listed as a login option. The JDS Trusted Extensions desktop is listed, but attempting to login to it gives the error "Your X Server has not been set up with SUN_TSOL extension to login to Trusted JDS. Select ordinary JDS to login", and the login fails. Are there any required steps that are not listed in the documentation, or are Trusted Extensions broken in Solaris 10 version 11/06? Any tips at all are greatly appreciated.
It took me a few days of headaches, but I finally figured it out! I'm working under VMWare, and installing VMWareTools in Solaris 10 seems to somehow break the Trusted Extensions. There are no errors, the Trusted Extension features just don't work. When I installed the Trusted Extensions first I did get the Trusted CDE desktop and it logged in just fine, but after installing VMWareTools the desktop option disappeared from the login menu. I hope this helps someone else.
-
DLINK 660CT (PCMCIA) Driver - Please Help!
Does anyone know where I can find a D-Link 660CT driver for Solaris 8 06/00 Intel?
I have searched the web, but had no luck. Or is there a workaround for an unsupported NIC?
Thanks in advance.
Regards Dat.
[email protected]
You should first make sure it's your D-Link driver rather than the PCMCIA driver, which is known to be broken in Solaris 8 for x86. You can get a good PCMCIA driver for free from XI Graphics, who provide it at their website: http://www.xig.com. Then work on locating a driver for the specific card if you are still having trouble. If you know what chipset the card uses, you will stand a much better chance of locating a driver. You may need to set it up to load either with a forceload statement in /etc/system or by altering the /boot/solaris/devicedb/master file to account for the new device IDs.
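For reference, a forceload entry in /etc/system is one line per driver. A hypothetical fragment (the driver names here are placeholders, not the actual D-Link card driver):

```
* /etc/system fragment (lines starting with * are comments).
* "pcic" is the usual PCMCIA adapter nexus on Solaris x86; replace
* "yourcarddrv" with whatever driver you end up locating for the chipset.
forceload: drv/pcic
forceload: drv/yourcarddrv
```

Changes to /etc/system only take effect after a reboot.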
DBessee