NetBackup with Solaris non-global zone
Hi,
How do I install and configure NetBackup in a Solaris 10 non-global zone? What steps do I need to follow?
Thanks
Tanvir
I agree with running from the global zone. The added benefit is that if you back up the root of all zonepaths, then when you add any new non-global zone within that path, the new server will be backed up automatically.
We had been installing the client on each server, both global and non-global, in the past. On our non-global zones, /usr is not writable but /opt is. We would symlink /usr/openv to /opt/openv from the global zone and then remotely install the client software from the backup master via
"/usr/openv/netbackup/bin/install_client_files ssh <client>"
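For reference, that workaround could look roughly like this (a sketch only; the zonepath /zones/myzone and the client name are hypothetical):

```shell
# In the global zone: give the non-global zone a writable /usr/openv
# by pointing it at /opt (paths and zone name are illustrative).
mkdir -p /zones/myzone/root/opt/openv
ln -s /opt/openv /zones/myzone/root/usr/openv

# On the NetBackup master: push the client software over ssh.
/usr/openv/netbackup/bin/install_client_files ssh myzone
```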
Similar Messages
-
How to create a separate /var partition on solaris non-global zone
Hi
I have found no simple way to create a separate /var partition in a Solaris non-global zone.
I am using Solaris 10 u9 and my root pool is ZFS. My zone's zonepath is also a separate ZFS filesystem.
But I do not know how to make /var the mountpoint of another ZFS dataset, since /var is not empty.
I also do not know whether there is a way to install a zone with /var as a separate partition (outside '/').
That will be really useful.
Any suggestion?
Thanks
Edited by: vadud3 on Sep 20, 2010 12:16 PM
I meant a separate ZFS filesystem with mountpoint '/var' in a non-global zone.
I am insisting because I do not want /var to fill up '/' in the non-global zone.
With a default non-global zone installation, you cannot avoid that.
My zonepath itself is a ZFS filesystem. I also have a ZFS dataset delegated to the non-global zone.
I cannot create a ZFS filesystem out of that dataset and mount it as '/var', because by then the non-global zone has already installed content in '/var'.
I want '/var' as a separate directory or mountpoint, for the same reason the global zone gives you that option during install.
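One possible approach (a sketch only, untested here; the dataset and zone names are hypothetical, the zone must be halted first, and patching behavior with a zonecfg-mounted /var should be verified) is to create a legacy-mounted dataset, copy the installed /var content into it, and hand it to the zone as an fs resource:

```shell
# Global zone, with the zone halted. Names are illustrative.
zfs create -o mountpoint=legacy rpool/zones/myzone-var

# Preserve the content the zone installer already put in /var.
mount -F zfs rpool/zones/myzone-var /mnt
cd /zones/myzone/root/var && find . | cpio -pdm /mnt
umount /mnt

# Then mount the dataset over /var via the zone configuration:
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/var
zonecfg:myzone:fs> set special=rpool/zones/myzone-var
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
```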
Thanks -
Using a Fibre Channel HBA with a non-global zone.
I am trying to let a non-global zone use a dual-port HBA. Please note that the goal is to use the HBA, including the SAN devices, not just a single device on the SAN. Does anyone know if and how this can be done?
[root@global:/]# more /etc/release
Solaris 10 8/07 s10s_u4wos_12b SPARC ...
[root@global:/]# zonecfg -z localzone info
zonename: localzone
zonepath: /zones/localzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
net:
address: x.x.x.x/24
physical: qfe0
device
match: /dev/fc/fp[0-1]
device
match: /dev/cfg/c[1-2]
device
match: /dev/*dsk/c[1-2]*
[root@global:/]# fcinfo hba-port
HBA Port WWN: 210000e08b083b41
OS Device Name: /dev/cfg/c1
Manufacturer: QLogic Corp.
Model: QLA2342
Firmware Version: 3.3.24
FCode/BIOS Version: No Fcode found
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b083b41
HBA Port WWN: 210100e08b283b41
OS Device Name: /dev/cfg/c2
Manufacturer: QLogic Corp.
Model: QLA2342
Firmware Version: 3.3.24
FCode/BIOS Version: No Fcode found
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08b283b41
[root@localzone:dev]# ls fc
fp0 fp1
[root@localzone:dev]# ls cfg
c1 c2
[root@localzone:dev]# ls dsk | grep s0
c1t500601613021934Dd0s0
c1t500601693021934Dd0s0
c1t50060482D52D5608d0s0
c1t50060482D52D5626d0s0
c2t500601613021934Dd0s0
c2t500601693021934Dd0s0
c2t50060482D52D5608d0s0
c2t50060482D52D5626d0s0
[root@localzone:dev]# ls rdsk | grep s0
c1t500601613021934Dd0s0
c1t500601693021934Dd0s0
c1t50060482D52D5608d0s0
c1t50060482D52D5626d0s0
c2t500601613021934Dd0s0
c2t500601693021934Dd0s0
c2t50060482D52D5608d0s0
c2t50060482D52D5626d0s0
[root@localzone:dev]# fcinfo hba-port
No Adapters Found.
You cannot present devices directly to the NGZ (what a mouthful to say and type... sheesh! What's wrong with "local zones", Sun?)
You can present filesystems and/or ZFS pools but not HBAs or other devices directly (AFAIK) -
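The alternative the reply describes, presenting storage rather than the HBA itself, could be sketched like this (the pool name sanpool is hypothetical):

```shell
# Build a pool on the SAN LUNs in the global zone, then delegate
# a dataset to the zone instead of exporting /dev/fc and /dev/*dsk.
zpool create sanpool c1t500601613021934Dd0
zfs create sanpool/localzone

zonecfg -z localzone
zonecfg:localzone> add dataset
zonecfg:localzone:dataset> set name=sanpool/localzone
zonecfg:localzone:dataset> end
zonecfg:localzone> commit
```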
Can't do traceroute or DNS queries within a non-global zone.
I'll start by outlining my servers and their roles.
They are all on the same network, behind the same gateway, plugged into the same switch.
secure1 = a FreeBSD server running BIND. It's a recursive DNS server; works perfectly.
secure2 = a Solaris 10 server.
zone1 = a zone that was set up before I inherited this environment.
zone2 = a zone I tried to create, and it mostly worked.
The problem:
From zone2 I cannot do DNS queries, and traceroutes past the gateway don't work. At first I suspected the firewall, but everything that doesn't work on zone2 works fine on zone1.
What does work on zone2:
I can ssh into it
I can ssh out of it
I can ping it
I can ping from it
I can traceroute from it to secure1
I can ssh to other hosts out on the internet.
What doesn't work:
I can't do any DNS queries, whether the DNS server is inside my network or outside of it.
I can't traceroute past my gateway, though I can from zone1.
Finally, here's what happens when I do a DNS query:
zone2# /usr/sbin/host google.com 66.48.78.91
;; connection timed out; no servers could be reached
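One way to narrow this down (a sketch; it assumes dig is available, which may not be true in a minimal zone) is to compare UDP and TCP lookups, since DNS and traceroute default to UDP while everything that works here (ssh, ping) is TCP or ICMP:

```shell
# Default DNS query goes over UDP port 53.
dig @66.48.78.91 google.com

# Force TCP; if this succeeds while the UDP query times out,
# suspect UDP filtering or a bad route for UDP traffic from zone2.
dig +tcp @66.48.78.91 google.com
```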
Oh, I diffed the zone1.xml and zone2.xml files in /etc/zones and, except for things like IP addresses, they are the same.
Any suggestions would be much appreciated. Thanks, folks.
ifconfig -a and netstat -rn output from the zone that isn't working properly would help.
Off the top of my head, my guess is that your default route isn't valid for zone2. -
How to install Oracle 10g on Solaris 10 non-global zone.
Hi,
I want to install Oracle 10g in a Solaris non-global zone with ASM.
I need a complete doc for doing this.
Regards
S.Ali
Check the Oracle® Database Installation Guide 10g Release 2 (10.2) for Solaris Operating System.
-
Problem migrating a non-global zone to a different machine.
Hi, recently I tried to migrate a non-global zone to a different machine, but it doesn't work.
1. First, this is the structure of my machine with my non-global zone:
host1# uname -a
SunOS testsolaris 5.11 snv_101b i86pc i386 i86pc
host1# zfs list
NAME USED AVAIL REFER MOUNTPOINT
big-zone 1.71G 1.64G 20K /big-zone
big-zone/export 1.71G 1.64G 22K /big-zone/export
big-zone/export/big-zone 1.67G 1.64G 21K /big-zone/export/big-zon e
big-zone/export/big-zone/ROOT 1.67G 1.64G 18K legacy
big-zone/export/big-zone/ROOT/zbe 1.67G 1.64G 1.66G legacy
big-zone/export/zonetest 41.8M 1.64G 21K /big-zone/export/zonetes t
big-zone/export/zonetest/ROOT 41.8M 1.64G 18K legacy
big-zone/export/zonetest/ROOT/zbe 41.8M 1.64G 1.66G /big-zone/export/zonetes t/root
rpool 8.35G 7.28G 72K /rpool
rpool/ROOT 6.86G 7.28G 18K legacy
rpool/ROOT/opensolaris 6.86G 7.28G 6.73G /
rpool/dump 575M 7.28G 575M -
rpool/export 375M 7.28G 21K /export
rpool/export/home 18K 7.28G 18K /export/home
rpool/export/small-zone 375M 7.28G 21K /export/small-zone
rpool/export/small-zone/ROOT 375M 7.28G 18K legacy
rpool/export/small-zone/ROOT/zbe 375M 7.28G 375M legacy
rpool/swap 575M 7.78G 56.8M -
2. Second, I detached my non-global zone zonetest with these commands:
host1# zoneadm -z zonetest halt
host1# zoneadm -z zonetest detach
3. Third, I moved my zonepath to my new host:
host1# cd /big-zone/export
host1# tar cf zonetest.tar zonetest
host1# sftp jay@new-host
sftp> put zonetest.tar
Uploading ...
sftp> quit
4. Unpack the .tar file:
host2# cd /big-zone/export
host2# tar xf zonetest.tar
So after this, I think my zonepath has been transferred to my new host.
This is the structure of my new host:
jay@alien:~$ uname -a
SunOS alien 5.11 snv_101b i86pc i386 i86pc Solaris
jay@alien:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 18.3G 73.3G 72K /rpool
rpool/ROOT 2.98G 73.3G 18K legacy
rpool/ROOT/opensolaris 2.98G 73.3G 2.85G /
rpool/dump 1023M 73.3G 1023M -
rpool/export 13.3G 73.3G 19K /export
rpool/export/home 13.3G 73.3G 19K /export/home
rpool/export/home/jay 13.3G 73.3G 13.3G /export/home/jay
rpool/swap 1023M 73.9G 321M -
zdata 10.7G 80.8G 9.65G /zdata
zdata/zones 1.08G 80.8G 18K /zdata/zones
zdata/zones/zonetest 1.08G 80.8G 1.08G /big-zone/export/
*I have a mountpoint to /big-zone/export
5. I tried to configure my zone on my new host and received an error message:
host2# zonecfg -z zonetest
zonetest: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zonetest> create -a /big-zone/export/zonetest
invalid path to detached zone
zonecfg:zonetest>And my new big-zone (on the second host) show this in the /big-zone/export/zonetest folder :
jay@alien:/zdata/zones# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 23.5G 68.0G 72K /rpool
rpool/ROOT 6.31G 68.0G 18K legacy
rpool/ROOT/opensolaris 6.31G 68.0G 6.18G /
rpool/dump 1023M 68.0G 1023M -
rpool/export 15.2G 68.0G 19K /export
rpool/export/home 15.2G 68.0G 19K /export/home
rpool/export/home/jay 15.2G 68.0G 15.2G /export/home/jay
rpool/swap 1023M 68.6G 361M -
zdata 11.6G 79.9G 10.7G /zdata
zdata/zones 921M 79.9G 18K /zdata/zones
zdata/zones/web 921M 79.9G 21K /zdata/zones/web
zdata/zones/web/ROOT 921M 79.9G 18K legacy
zdata/zones/web/ROOT/zbe 921M 79.9G 921M legacy
zdata/zones/zonetest 54K 79.9G 18K /big-zone/export/zonetest
zdata/zones/zonetest/ROOT 36K 79.9G 18K legacy
zdata/zones/zonetest/ROOT/zbe 18K 79.9G 18K legacy
jay@alien:/zdata/zones/zonetest# pwd
/zdata/zones/zonetest
jay@alien:/zdata/zones/zonetest# ls -ls
total 6
3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
3 drwxr-xr-x 16 root root 19 Feb 8 2009 root
jay@alien:/zdata/zones/zonetest# cd root
jay@alien:/zdata/zones/zonetest/root# ls -ls
total 52902
1 lrwxrwxrwx 1 root root 9 Feb 1 20:29 bin -> ./usr/bin
3 drwxr-xr-x 13 root sys 15 Feb 8 2009 dev
11 drwxr-xr-x 55 root sys 168 Feb 8 2009 etc
3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 home
15 drwxr-xr-x 9 root bin 241 Feb 4 2009 lib
3 drwxr-xr-x 2 root sys 2 Jan 22 16:23 mnt
3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 net
3 drwxr-xr-x 4 root sys 4 Jan 24 15:26 opt
3 dr-xr-xr-x 2 root root 2 Jan 22 16:23 proc
3 drwx------ 3 root root 7 Feb 6 2009 root
5 drwxr-xr-x 2 root sys 47 Jan 22 16:24 sbin
3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
3 drwxrwxrwt 2 root sys 2 Feb 8 2009 tmp
5 drwxr-xr-x 30 root sys 42 Feb 6 2009 usr
3 drwxr-xr-x 32 root sys 32 Feb 6 2009 var
52835 -rw-r--r-- 1 root root 42882560 Jan 22 16:35 webmin-1.441.pkg
jay@alien:/zdata/zones/zonetest/root#
I think my problem is here ...
jay@alien:/big-zone/export/zonetest# pwd
/big-zone/export/zonetest
jay@alien:/big-zone/export/zonetest# ls -ls
total 8
2 ---------- 1 root root 114 Dec 31 1969 @LongLink
3 drwxr-xr-x 2 root root 2 Feb 1 21:10 root
3 drwx------ 4 root root 4 Feb 1 21:10 zonetest
jay@alien:/big-zone/export/zonetest# cd zonetest/
jay@alien:/big-zone/export/zonetest/zonetest# ls -ls
total 6
3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
3 drwxr-xr-x 4 root root 5 Feb 1 21:10 root
jay@alien:/big-zone/export/zonetest/zonetest# cd root
jay@alien:/big-zone/export/zonetest/zonetest/root# ls -ls
total 7
1 lrwxrwxrwx 1 root root 9 Feb 1 21:10 bin -> ./usr/bin
3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
3 drwxr-xr-x 23 root sys 28 Feb 1 21:11 usr
I think I have a problem with my ZFS mountpoints, but I don't know how to resolve it.
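For what it's worth, a zonepath like this one spans several nested ZFS datasets, and tar flattens that layout (the @LongLink entry is a tar artifact of long path names). A recursive ZFS send preserves it; a rough sketch using the names from above (the exact attach step may vary by release):

```shell
# Source host: snapshot the whole zonepath hierarchy after detaching.
zfs snapshot -r big-zone/export/zonetest@migrate

# Stream the hierarchy to the target host; -R carries along the
# child datasets, mountpoints, and other properties.
zfs send -R big-zone/export/zonetest@migrate | \
    ssh jay@alien zfs receive -F zdata/zones/zonetest

# Target host: attach the zone at the received zonepath.
zonecfg -z zonetest 'create -a /zdata/zones/zonetest'
zoneadm -z zonetest attach
```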
Edited by: jaymachine on Feb 26, 2009 6:16 PM -
11gR2 SCAN and SCAN listeners are not starting in Solaris non-global zones
Hi,
my configuration is a 2-node RAC. Each node is a Solaris container/zone on one of two physically different machines
(SunOS 5.10 Generic_147440-07 sun4u sparc SUNW,SPARC-Enterprise).
after installation of the GI, root.sh was successful but failed when trying to start the following resources:
ora.scan1.vip
ora.scan2.vip
ora.scan3.vip
the SCANs are not in the hosts file but in DNS; the GI home has been patched with the January 2012 GI PSU.
Has anyone successfully installed 11gR2 RAC on Solaris non-global zones? If you have, tell me how you got past the SCAN issue.
***** rootcrs_db01.log ******
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5017: The resource action "ora.scan1.vip start" encountered the following error:
Action for VIP aborted
CRS-2674: Start of 'ora.scan1.vip' on 'db01' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan2.vip
CRS-5017: The resource action "ora.scan2.vip start" encountered the following error:
Action for VIP aborted
CRS-2674: Start of 'ora.scan2.vip' on 'db01' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan2.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan3.vip
CRS-5017: The resource action "ora.scan3.vip start" encountered the following error:
Action for VIP aborted
CRS-2674: Start of 'ora.scan3.vip' on 'db01' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan3.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:25:38.497 WAT ] [CRSNative.internalQueryResources:1536] About to doQueryResources: eType resource, nodeName null, filter (TYPE == ora.scan_vip.type)
[main] [ 2012-04-17 11:25:38.532 WAT ] [CRSNative.internalQueryResources:1544] found 3 resources
[main] [ 2012-04-17 11:25:38.532 WAT ] [CRSNative.internalQueryResources:1546] ora.scan1.vip
[main] [ 2012-04-17 11:25:38.533 WAT ] [CRSNative.internalQueryResources:1546] ora.scan2.vip
[main] [ 2012-04-17 11:25:38.533 WAT ] [CRSNative.internalQueryResources:1546] ora.scan3.vip
[main] [ 2012-04-17 11:25:38.564 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan1.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.565 WAT ] [ScanVIPImpl.<init>:164] ordinal number is 1
[main] [ 2012-04-17 11:25:38.587 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan1.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.609 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan2.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.610 WAT ] [ScanVIPImpl.<init>:164] ordinal number is 2
[main] [ 2012-04-17 11:25:38.631 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan2.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.653 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan3.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.654 WAT ] [ScanVIPImpl.<init>:164] ordinal number is 3
[main] [ 2012-04-17 11:25:38.674 WAT ] [CRSNative.isEntityRegistered:734] entity: ora.scan3.vip, type: 1, registered: true
[main] [ 2012-04-17 11:25:38.675 WAT ] [CRSNative.internalStartResource:373] About to start resource: Name: ora.scan1.vip, force:true node: null, options: 0, filter null
[main] [ 2012-04-17 11:25:38.702 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip false CRS-2672: Attempting to start 'ora.scan1.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:26:39.079 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip true CRS-5017: The resource action "ora.scan1.vip start" encountered the following error:
Action for VIP aborted
[main] [ 2012-04-17 11:26:42.730 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip true CRS-2674: Start of 'ora.scan1.vip' on 'ojesmdb01' failed
[main] [ 2012-04-17 11:26:42.731 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip false CRS-2679: Attempting to clean 'ora.scan1.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:26:42.743 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip false CRS-2681: Clean of 'ora.scan1.vip' on 'ojesmdb01' succeeded
[main] [ 2012-04-17 11:26:42.746 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan1.vip true CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:26:42.750 WAT ] [CRSNativeResult.addComp:162] add comp: name ora.scan1.vip, rc 223, msg CRS-0223: Resource 'ora.scan1.vip' has placement error.
[main] [ 2012-04-17 11:26:42.752 WAT ] [CRSNative.internalStartResource:386] Failed to start resource: Name: ora.scan1.vip, node: null, filter: null, msg CRS-5017: The resource action "ora.scan1.vip start" encountered the following error:
Action for VIP aborted
CRS-2674: Start of 'ora.scan1.vip' on 'db01' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:26:42.754 WAT ] [CRSNative.internalStartResource:373] About to start resource: Name: ora.scan2.vip, force:true node: null, options: 0, filter null
[main] [ 2012-04-17 11:26:42.778 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip false CRS-2672: Attempting to start 'ora.scan2.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:27:43.132 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip true CRS-5017: The resource action "ora.scan2.vip start" encountered the following error:
Action for VIP aborted
[main] [ 2012-04-17 11:27:46.805 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip true CRS-2674: Start of 'ora.scan2.vip' on 'ojesmdb01' failed
[main] [ 2012-04-17 11:27:46.806 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip false CRS-2679: Attempting to clean 'ora.scan2.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:27:46.820 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip false CRS-2681: Clean of 'ora.scan2.vip' on 'ojesmdb01' succeeded
[main] [ 2012-04-17 11:27:46.822 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan2.vip true CRS-2632: There are no more servers to try to place resource 'ora.scan2.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:27:46.826 WAT ] [CRSNativeResult.addComp:162] add comp: name ora.scan2.vip, rc 223, msg CRS-0223: Resource 'ora.scan2.vip' has placement error.
[main] [ 2012-04-17 11:27:46.827 WAT ] [CRSNative.internalStartResource:386] Failed to start resource: Name: ora.scan2.vip, node: null, filter: null, msg CRS-5017: The resource action "ora.scan2.vip start" encountered the following error:
Action for VIP aborted
CRS-2674: Start of 'ora.scan2.vip' on 'db01' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan2.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:27:46.828 WAT ] [CRSNative.internalStartResource:373] About to start resource: Name: ora.scan3.vip, force:true node: null, options: 0, filter null
[main] [ 2012-04-17 11:27:46.855 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip false CRS-2672: Attempting to start 'ora.scan3.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:28:47.212 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip true CRS-5017: The resource action "ora.scan3.vip start" encountered the following error:
Action for VIP aborted
[main] [ 2012-04-17 11:28:50.881 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip true CRS-2674: Start of 'ora.scan3.vip' on 'ojesmdb01' failed
[main] [ 2012-04-17 11:28:50.883 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip false CRS-2679: Attempting to clean 'ora.scan3.vip' on 'ojesmdb01'
[main] [ 2012-04-17 11:28:50.895 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip false CRS-2681: Clean of 'ora.scan3.vip' on 'ojesmdb01' succeeded
[main] [ 2012-04-17 11:28:50.898 WAT ] [CRSNativeResult.addLine:106] callback: ora.scan3.vip true CRS-2632: There are no more servers to try to place resource 'ora.scan3.vip' on that would satisfy its placement policy
[main] [ 2012-04-17 11:28:50.902 WAT ] [CRSNativeResult.addComp:162] add comp: name ora.scan3.vip, rc 223, msg CRS-0223: Resource 'ora.scan3.vip' has placement error.
[main] [ 2012-04-17 11:28:50.902 WAT ] [CRSNative.internalStartResource:386] Failed to start resource: Name: ora.scan3.vip, node: null, filter: null, msg CRS-5017: The resource action "ora.scan3.vip start" encountered the following error:
Action for VIP aborted
thanks in advance
Samuel K
Hi,
Post the output of:
$ srvctl config scan
$ nslookup <scan_name>
$ ping <all scan vips>
Regards,
Levi Pereira -
Unexpected behavior: Solaris 10, VLANs, IPMP, non-global zones
I've configured a system with several non-global zones.
Each of them has an IP connection via a separate VLAN (one VLAN for each non-global zone). The VLANs are established by the global zone. They are additionally brought under the control of IPMP.
I followed the instructions described at:
http://forum.sun.com/thread.jspa?threadID=21225&messageID=59653#59653
to create the default routers for the non-global zones.
In addition to that, I've created the default route for the 2nd IPMP interface (to keep the route in the non-global zone in case of an IPMP failover),
i.e.:
route add default 172.16.3.1 -ifp ce1222000
route add default 172.16.3.1 -ifp ce1222002
Furthermore, I've put 172.16.3.1 in /etc/defaultrouter of the global zone, to ensure it will be the first entry in the routing table (because it's the default router for the global zone).
Here is the unexpected part:
I tried to reach an IP target outside the configured subnets, say 172.16.1.3, via ICMP. The router 172.16.3.1 knows the proper route to reach it. The first tries (I can't remember the exact number) went through ce1222000, and the associated ICMP replies travelled back through ce1222000. But suddenly the outgoing interface changed to ce1322000 or ce1122000! The default routers configured on those VLANs are not aware of 172.16.1.3 (172.16.1.0/24), and there was no answer. The default routes seemed to be cycled between the configured ones.
Furthermore, connections from the outside to the non-global zones (which have only one default router configured: the one for the VLAN the non-global zone belongs to) were broken intermittently.
So, how do I get the combination of VLANs, IPMP, different default routers, and non-global zones running?
I have the following config visible in the global zone
(the 172.31.x.y addresses are the SC 3.1u4 private interconnect):
netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
172.31.193.1 127.0.0.1 UH 1 0 lo0
172.16.19.0 172.16.19.6 U 1 4474 ce1322000
172.16.19.0 172.16.19.6 U 1 0 ce1322000:1
172.16.19.0 172.16.19.6 U 1 1791 ce1322002
172.31.1.0 172.31.1.2 U 1 271194 ce5
172.31.0.128 172.31.0.130 U 1 271158 ce1
172.16.11.0 172.16.11.6 U 1 8715 ce1122000
172.16.11.0 172.16.11.6 U 1 0 ce1122000:1
172.16.11.0 172.16.11.6 U 1 7398 ce1122002
172.16.3.0 172.16.3.6 U 1 4888 ce1222000
172.16.3.0 172.16.3.6 U 1 0 ce1222000:1
172.16.3.0 172.16.3.6 U 1 4236 ce1222002
172.16.27.0 172.16.27.6 U 1 0 ce1411000
172.16.27.0 172.16.27.6 U 1 0 ce1411000:1
172.16.27.0 172.16.27.6 U 1 0 ce1411002
192.168.0.0 192.168.0.62 U 1 24469 ce3
172.31.193.0 172.31.193.2 U 1 651 clprivnet0
172.16.11.0 172.16.11.6 U 1 0 ce1122002:1
224.0.0.0 192.168.0.62 U 1 0 ce3
default 172.16.3.1 UG 1 1454
default 172.16.19.1 UG 1 0 ce1322000
default 172.16.19.1 UG 1 0 ce1322002
default 172.16.11.1 UG 1 0 ce1122000
default 172.16.11.1 UG 1 0 ce1122002
default 172.16.3.1 UG 1 0 ce1222000
default 172.16.3.1 UG 1 0 ce1222002
127.0.0.1 127.0.0.1 UH 4 1048047 lo0
#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
zone Z-BTO1-1
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
zone Z-BTO1-2
inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
zone Z-ITR1-1
inet 127.0.0.1 netmask ff000000
lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
zone Z-TDN1-1
inet 127.0.0.1 netmask ff000000
lo0:5: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
index 1
zone Z-DRB1-1
inet 127.0.0.1 netmask ff000000
ce1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500
index 10
inet 172.31.0.130 netmask ffffff00 broadcast 172.31.0.255
ether 0:3:ba:f:63:95
ce3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
inet 192.168.0.62 netmask ffffff00 broadcast 192.168.0.255
groupname ipmp0
ether 0:3:ba:f:68:1
ce5: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500
index 9
inet 172.31.1.2 netmask ffffff00 broadcast 172.31.1.127
ether 0:3:ba:d5:b1:44
ce1122000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
index 2
inet 172.16.11.6 netmask ffffff00 broadcast 172.16.11.127
groupname ipmp2
ether 0:3:ba:f:63:94
ce1122000:1:
flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
mtu 1500 index 2
inet 172.16.11.7 netmask ffffff00 broadcast 172.16.11.127
ce1122002:
flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 3
inet 172.16.11.8 netmask ffffff00 broadcast 172.16.11.127
groupname ipmp2
ether 0:3:ba:f:68:0
ce1122002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 3
inet 172.16.11.10 netmask ffffff00 broadcast 172.16.11.255
ce1122002:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 3
zone Z-ITR1-1
inet 172.16.11.9 netmask ffffff00 broadcast 172.16.11.255
ce1222000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
index 4
inet 172.16.3.6 netmask ffffff00 broadcast 172.16.3.127
groupname ipmp3
ether 0:3:ba:f:63:94
ce1222000:1:
flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
mtu 1500 index 4
inet 172.16.3.7 netmask ffffff00 broadcast 172.16.3.127
ce1222002:
flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 5
inet 172.16.3.8 netmask ffffff00 broadcast 172.16.3.127
groupname ipmp3
ether 0:3:ba:f:68:0
ce1222002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 5
zone Z-BTO1-1
inet 172.16.3.9 netmask ffffff00 broadcast 172.16.3.255
ce1222002:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 5
zone Z-BTO1-2
inet 172.16.3.10 netmask ffffff00 broadcast 172.16.3.255
ce1322000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
index 6
inet 172.16.19.6 netmask ffffff00 broadcast 172.16.19.127
groupname ipmp1
ether 0:3:ba:f:63:94
ce1322000:1:
flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
mtu 1500 index 6
inet 172.16.19.7 netmask ffffff00 broadcast 172.16.19.127
ce1322002:
flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 7
inet 172.16.19.8 netmask ffffff00 broadcast 172.16.19.127
groupname ipmp1
ether 0:3:ba:f:68:0
ce1322002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 7
zone Z-TDN1-1
inet 172.16.19.9 netmask ffffff00 broadcast 172.16.19.255
ce1411000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
index 12
inet 172.16.27.6 netmask ffffff00 broadcast 172.16.27.255
groupname ipmp4
ether 0:3:ba:f:63:94
ce1411000:1:
flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
mtu 1500 index 12
inet 172.16.27.7 netmask ffffff00 broadcast 172.16.27.255
ce1411002:
flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 13
inet 172.16.27.8 netmask ffffff00 broadcast 172.16.27.255
groupname ipmp4
ether 0:3:ba:f:68:0
ce1411002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
mtu 1500 index 13
zone Z-DRB1-1
inet 172.16.27.9 netmask ffffff00 broadcast 172.16.27.255
clprivnet0:
flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu
1500 index 11
inet 172.31.193.2 netmask ffffff00 broadcast 172.31.193.255
ether 0:0:0:0:0:2 -
Live Upgrade - Solaris 8/07 (U4), with non-global zones and SC 3.2
Dears,
I need to use Live Upgrade for SC 3.2 with non-global zones, from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and to update the cluster to 3.2 U3.
I don't know where to start. I've read lots of documents, but couldn't find one complete document covering the whole process.
I know that upgrading Solaris 10 with non-global zones is supported since my Solaris 10 release, but I am not sure if it's supported with SC.
Appreciate your help
Hi,
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
Regards
Hartmut -
With the standard patching process (installcluster), it takes a looong time, since each zone needs to be patched as well. Is there any option to apply the patchset to the global zone only, then upgrade the non-global zones later?
If possible, I'd like to use LU.
You can use LU, but it will depend on your system config. There are instructions in the README of the patchset for installing it on an alternate boot environment (previously created using lucreate).
If you plan to use LU, read the following docs first to avoid common issues:
Solaris Live Upgrade Software Patch Requirements(Doc ID 1004881.1)
List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)
You can also use the Zones Parallel Patching feature to improve performance:
https://blogs.oracle.com/patch/entry/zones_parallel_patching_feature_now
Solaris 10 10/09: Zones Parallel Patching to Reduce Patching Time (System Administration Guide: Oracle Solaris Containers…
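If I remember correctly, parallel patching is enabled through /etc/patch/pdo.conf once the prerequisite patch utilities are installed (the value below is only an example):

```shell
# Allow patchadd to patch up to 4 non-global zones concurrently.
# (Requires recent patch utilities; check the README of the patchset.)
echo "num_proc=4" >> /etc/patch/pdo.conf
```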
What you can't do is patch the global zone only and the non-global zones later (unless the zones are detached). It's a requirement that the global and non-global zones stay synchronized at all times (considering that they share the same kernel). -
PHP on Solaris 10 and non-global zones: a performance problem?
Hi friends
We are seeing poor performance with applications developed in PHP on Solaris 10, in both non-global and global zones, while on the Intel platform (Xeon and Pentium) performance is very good. The difference between the two platforms is large: approximately one second on Intel versus 9, 12, or 20 seconds on Solaris, depending on the model.
Our tests were run on:
1. SF T2000 server Solaris 10 global zone
2. SF T2000 server Solaris 10 non-global zone
3. SF280R server Solaris 10 non-global zone
4. V240 server with 1 GB memory, 1*US III-i 1.0 GHz and Solaris 9 (really this version for test and comparisons)
5. V240 server with 8GB memory, 2*US III-i 1.5Ghz and Solaris 9 (really this version for test and comparisons too)
Intel platforms were:
1. Intel Pentium 4 2GHz 2GB memory, Linux Fedora and PHP 4.4.4
2. Intel Xeon 2 core, 2.33GHz 2GB memory, Linux Fedora and PHP 4.4.3
Versions of products are:
1. Solaris 9 or Solaris 10
2. PHP 4.4.7 downloaded from http://www.php.net/downloads.php
3. Apache 2.0.59
4. MySQL 4.1.15-log
Our PHP compilation and installation steps were:
./configure --prefix=/usr/local/php-4.4.7 \
--with-pear \
--with-openssl=/usr/local/ssl \
--with-gettext \
--with-ldap=/usr/local \
--with-iconv \
--enable-ftp \
--with-dom \
--with-mime-magic \
--enable-mbstring \
--with-zlib \
--enable-track-vars \
--enable-sigchild \
--disable-ctype \
--disable-overload \
--disable-tokenizer \
--disable-posix \
--with-gd \
--with-apxs2=/usr/local/apache2.0.53/bin/apxs \
--with-mysql \
--with-pgsql \
--with-oci8=/oracle/product/9.2.0 \
--with-oracle=/oracle/product/9.2.0 \
--with-png-dir=/usr/local \
--with-zlib-dir=/usr/local \
--with-freetype-dir=/usr/local \
--with-jpeg-dir=/usr/local
make
make install
Questions:
Is there any problem of PHP with SunFire T2000 servers or 64-bits platforms?
Is there any flag of PHP would be use to compilarion PHP in 64-bits or multithread?
I await any comments or suggestions about our problem with PHP compilation and performance on Solaris 10. Thanks a lot.
Sergio.
I presume you compiled PHP on the Sun server; was this done using gcc or the Sun ONE C compiler?
If the latter, then you can also use the flag --enable-nonportable-atomics when you run configure -
Problem with exporting devices to a non-global zone
Hi,
I have a problem with exporting devices to my Solaris zones (I am trying to add support for mounting /dev/lofi/* in my non-global zone).
I created a config for my zone.
Here it is:
$ zonecfg -z sapdev info
zonename: sapdev
zonepath: /export/home/zones/sapdev
brand: native
autoboot: true
bootargs:
pool:
limitpriv: default,sys_time
scheduling-class:
ip-type: shared
fs:
dir: /sap
special: /dev/dsk/c1t44d0s0
raw: /dev/rdsk/c1t44d0s0
type: ufs
options: []
net:
address: 194.29.128.45
physical: ce0
device
match: /dev/lofi/1
device
match: /dev/rlofi/1
device
match: /dev/lofi/2
device
match: /dev/rlofi/2
attr:
name: comment
type: string
value: "This is SAP developement zone"
global# lofiadm
Block Device File
/dev/lofi/1 /root/SAP_DB2_9_LUW.iso
/dev/lofi/2 /usr/tmp/fsfile
I rebooted the non-global zone, and even rebooted the global zone, but after that there are no /dev/*lofi/* files in the sapdev zone.
What am I doing wrong? Maybe I reduced my Sol 10 u4 SPARC installation too much.
Can anybody help me?
Thanks for help,
Marek
I experienced the same problem on my system (Sol 10 08/07).
Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. As I understand it, this creates the necessary device files in ZONEPATH/dev.
This worked well until recently; now only the directories are created.
It seems as if devfsadm -z is broken. Somebody should open a support call with Sun.
As a workaround you can easily copy the device files into the zone. It is important not to copy the symbolic link but the target.
# cp /dev/lofi/1 ZONEPATH/dev/lofi
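A scripted version of this workaround might look like the following. This is only a sketch: it uses mknod to recreate each node from the major/minor numbers that `ls -lL` reports, rather than cp, since on some cp implementations copying a device file copies its contents instead of the node. The zonepath matches the zone in this thread; adjust it to yours and run the copy calls as root in the global zone.

```shell
# Sketch: recreate lofi device nodes inside a zone's /dev tree with
# the same type and major/minor numbers as the global-zone devices.
ZONEPATH=/export/home/zones/sapdev

# Print "type major minor" parsed from one line of `ls -lL` output,
# e.g. "brw-------   1 root sys  147,  1 Oct  1 12:00 /dev/lofi/1".
parse_node() {
  set -- $1
  case $1 in b*) t=b ;; c*) t=c ;; *) return 1 ;; esac
  maj=`echo $5 | tr -d ,`
  echo "$t $maj $6"
}

# Recreate one node under $ZONEPATH/dev, e.g. copy_node /dev/lofi/1 lofi/1
copy_node() {
  src=$1; dst=$ZONEPATH/dev/$2
  line=`ls -lL "$src"` || return 1
  mkdir -p `dirname "$dst"`
  mknod "$dst" `parse_node "$line"`
}

# Example (run as root in the global zone):
#   copy_node /dev/lofi/1  lofi/1
#   copy_node /dev/rlofi/1 rlofi/1
```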
Hope this helps,
Konstantin Gremliza -
JDK 1.6 + Solaris 10 (global + non-global zones)
How can I execute or access JDK 1.6 in the global zone when it is installed in a non-global zone of Solaris 10?
Kindly explain the procedure step by step.
thanks,
Samojit
I have a similar problem with pkgadd. When installing an auditing application we get the following error:
pkgadd: ERROR: unable to make temporary directory </var/tmp//installtEaBs/install9_aG1u>
No changes were made to the system.
This is a package that is being installed remotely via an automated process, and the package does install with the pkgadd command on the server. The automation has produced the same or a similar error on several servers while producing no errors and installing successfully on others. We cannot duplicate the error across the board. The package has been reviewed and found to have no errors, and the automation process has been evaluated and has completed successful jobs on other Solaris 10 servers since failing on the ones with this error. The exact same automated job produces successful and unsuccessful runs on various servers.
There are two patch levels that we have identified: 118833-26 and 118833-30. Both the -26 and -30 versions have had successful and unsuccessful installations of the SE agent. Various subnets are involved, and all of them have had successful and unsuccessful installations of the SE agent.
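For what it's worth, this particular pkgadd error usually points at the temporary area rather than the package itself: a full or unwritable /var/tmp, or a TMPDIR override pointing somewhere bad on the failing servers. A quick diagnostic sketch (not specific to this package):

```shell
# Quick checks for the usual causes of pkgadd's "unable to make
# temporary directory" error: a full or unwritable temp area, or a
# TMPDIR override pointing somewhere bad.
tmp=${TMPDIR:-/var/tmp}
echo "pkgadd temp area: $tmp"
ls -ld "$tmp"        # should be writable by the installing user (mode 1777 stock)
df -k "$tmp"         # check free space
if touch "$tmp/.pkgadd_probe" 2>/dev/null; then
  rm -f "$tmp/.pkgadd_probe"
  echo "temp area is writable"
else
  echo "temp area is NOT writable"
fi
```

Comparing this output between a failing and a succeeding server may show what differs.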
If anyone out there has run into this problem before, please help me out. This installation is automated on many Solaris 10 servers and is successful on about 2 out of 3. It is of course the failed jobs that I have to focus on until we can get it resolved. -
Sun cluster 3.20, live upgrade with non-global zones
I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Sol 10 8/07 (u4), which I would like to upgrade to Sol 10 10/08 (u6). The root filesystem of the non-global zones is ZFS and on shared SAN disks so that it can be failed over.
For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for non-global zones? Ideally I want this to be on shared disk.
Any assistance gratefully received.
Hi,
I am not sure whether this document:
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work. But if you use the Tools icon on the upper right hand corner and click on attachements, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but should have valuable information for other deployments as well.
Regards
Hartmut -
Lucreate not working with ZFS and non-global zones
I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
# lucreate -n patch20130408
Creating Live Upgrade boot environment...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
Current boot environment is named <s10s_u10wos_17b>.
Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <patch20130408>.
Source boot environment is <s10s_u10wos_17b>.
Creating file systems on boot environment <patch20130408>.
Populating file systems on boot environment <patch20130408>.
Temporarily mounting zones in PBE <s10s_u10wos_17b>.
Analyzing zones.
WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
Mounting ABE <patch20130408>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <patch20130408>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <s10s_u10wos_17b>.
Making boot environment <patch20130408> bootable.
Population of boot environment <patch20130408> successful.
Creation of boot environment <patch20130408> successful.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 16.6G 257G 106K /rpool
rpool/ROOT 4.47G 257G 31K legacy
rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
rpool/dump 1.55G 257G 1.50G -
rpool/export 63K 257G 32K /export
rpool/export/home 31K 257G 31K /export/home
rpool/h 2.27G 257G 2.27G /h
rpool/security1 28.4M 257G 28.4M /security1
rpool/swap 8.25G 257G 8.00G -
tank 12.9G 261G 31K /tank
tank/swap 8.25G 261G 8.00G -
tank/zones 4.69G 261G 36K /zones
tank/zones/DB 1.30G 261G 1.30G /zones/DB
tank/zones/DB@patch20130408 1.75M - 1.30G -
tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
tank/zones/APP 3.34G 261G 3.34G /zones/APP
tank/zones/APP@patch20130408 2.39M - 3.34G -
tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408
I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here...
The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.