T-series and zones
Does anyone know if I can run just global/non-global zones on a T-series server without using LDOMs?
thanks.
There is no requirement that you run LDOMs and no reason you can't run zones. So yes, you can.
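For anyone wanting to sanity-check this, here is a minimal sketch of creating a native zone on a T-series box with no Logical Domains configured (the zone name and zonepath below are made-up examples, not from the thread):

```shell
# No LDOM/ldm setup required; zones work directly in the default domain.
zoneadm list -cv                                   # should show only "global"
zonecfg -z myzone "create; set zonepath=/zones/myzone"
zoneadm -z myzone install
zoneadm -z myzone boot
zlogin -C myzone                                   # console login for first boot
```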
Similar Messages
-
Patchadd and zones??
I feel dumb asking this question, especially since I think it's due to my lack of knowledge with zones. I am using Solaris 10 with only a global zone, at least that's what I think: I ran "zoneadm list" and it listed only "global".
I have downloaded the uwc patch 118540-42 [yes, I posted something similar on the JMS forum] and have tried to add it with patchadd. I get an error saying "Package SUNWuwc from directory SUNWuwc in patch 118540-42 is not installed on the system" and the patch does not install [I can not see it with showrev -p nor in /var/sadm/patch]. I also tried the "-G" option but no difference.
Then I tried using Solaris Management Console 2.1 using its patch tool. Action -> add patch and I get an error saying "Some or all of the software packages that patch 118540-42 patches are not installed on the target host.".
So I decided to look for SUNWuwc with pkginfo and there is no listing; in fact, none of the Java Comms Suite is found [nor most of the stuff in /opt and /usr/local]. Yet I know JES is all in /opt and is working fine. I have never created a zone on this system and thought I had installed everything to just the global zone. I have had this system running for over a year and some updates have happened [just not sure what changed].
So I am wondering: is there another way to add patches with Sol 10? Another flag or utility? Is there a way to know what zone one is working in, or has installed stuff to? Is there a way at this point to stop using zones altogether?
Maybe I am missing a Sol 10 patch?
I had also run into a file, /var/sadm/install/gz-only-packages, and sure enough all my installed packages were listed there [all the JES stuff and web server etc]. I thought this meant "global zone only", and I only have a global zone, so why can I not patch?
Apologies for the confusion [and lack of more details], although I have worked with Solaris for years this new 10 version and zones is a bit confusing for me. Plus the problem might not be zones related at all :^)
Thanks in Advance
-James
Try zoneadm list -vc
this shows all zones on the system, whether they are configured, running or not running.
Next, get the Solaris 10 Recommended patch cluster from sunsolve.sun.com and add those patches, but do it in single-user mode.
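A hedged sketch of that workflow (the zip file name and install script vary by cluster vintage; check the README that ships inside the cluster before running anything):

```shell
# Bring the system to single-user mode, then apply the Recommended cluster.
init S
unzip 10_Recommended.zip
cd 10_Recommended
./installcluster --s10cluster   # newer clusters; older ones shipped ./install_cluster
```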
/tony -
EBS 7.4 with ZFS and Zones
The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
Has anyone out there been able to recover a server using the EBS software, that is running with ZFS file systems both in the Global zone and sub zones (NB: The servers system file store / /usr /var, is UFS for all zones).
Edited by: neilnewman on Apr 3, 2008 6:42 AM -
IPMP configuration and zones - how to?
Hello all,
So, I've been thrown in at the deep end and have been given a brand new M4000 to get configured to host two zones. I have little zone experience and my last Solaris exposure was Solaris 7!
Anyway, enough of the woe, this M4000 has two quad port NICs, and so, I'm going to configure two ports per subnet using IPMP and on top of the IPMP link, I will configure two v4 addresses and give one to one zone and one to the other.
My question is, how can this be best accomplished with regards to giving each zone a different address on the IPMP link.
IP addresses available = 10.221.91.2 (for zone1) and (10.221.91.3 for zone2)
So far, in the global zone I have
ipadm create-ip net2 <-----port 0 of NIC1
ipadm create-ip net6 <-----port 0 of NIC2
ipadm create-ipmp -i net2,net6 ipmp0
ipadm create-addr -T static -a 10.221.91.2/24 ipmp0/zone1
ipadm create-addr -T static -a 10.221.91.3/24 ipmp0/zone2
the output of ipmpstat -i and ipmpstat -a is all good. I can ping the addresses from external hosts.
So, how now to assign each address to the correct zone? I assume I'm using shared-ip?
in the zonecfg, do I simply (as per [this documentation|http://docs.oracle.com/cd/E23824_01/html/821-1460/z.admin.task-54.html#z.admin.task-60] ):
zonecfg:zone1> add net
zonecfg:zone1:net> set address=10.221.91.2
zonecfg:zone1:net> set physical=net2
zonecfg:zone1:net> end
And what if I have many addresses to configure per interface... for example, zone1 and zone2 will also require 6 addresses on another subnet (221.206.29.0)... so how would that look in the zonecfg?
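One way this could look, assuming shared-IP: one net resource per address. The 221.206.29.x address and the ipmp1 group below are made-up placeholders for the second subnet, not values from the thread:

```shell
zonecfg -z zone1
add net
set address=10.221.91.2/24
set physical=ipmp0
end
add net
set address=221.206.29.10/24    # placeholder address on the second subnet
set physical=ipmp1              # assumes a second IPMP group serving that subnet
end
commit
exit
```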
Is IPMP the correct way to be doing this? The client wants resilience above all, but these network connections are coming out of different switches thus LACP/Trunking is probably out of the question.
Many thanks for your thoughts... please let me know if you want more information
Solaris 11 is a different beast altogether.
Edited by: 913229 on 08-Feb-2012 08:03
added link to the Solaris IPMP and zones doc
Thanks for the reply....
It still didn't work... but you pointed me in the right direction. I had to remove the addresses I had configured on ipmp0 and instead put them in the zonecfg. Makes sense really. Below I have detailed my steps as per your recommendation...
I had configured the zone as minimally as I could:
zonepath=/zones/zone1
ip-type=shared
net:
address: 10.221.91.2
physical=ipmp0
but after it is installed, I try and boot it and I get:
zone 'zone1': ipmp0:2: could not bring network interface up: address in use by zone 'global: Cannot assign the requested address
So, I changed the ip-type to exclusive and I got:
WARNING: skipping network interface 'ipmp0' which is used in the global zone.
zone 'zone1': failed to add network device
which was a bit of a shame.
So, finally, I removed the addresses from ipmp0
ipadm delete-addr ipmp0/zone1
ipadm delete-addr ipmp0/zone2
and set the address in zonecfg together with the physical=ipmp0 as per your recommendation and it seems to be working.
So, am I correct in taking away from this that if using IPMP in shared-ip zones, don't set the address in the global zone, but stick it in the zone config and everyone is happy?
I think this was the only way to achieve multiple IP addresses on one interface but over two ports?
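For reference, a recap of the configuration that ended up working in this thread (interface and zone names as above): the addresses live in the zone configuration, not on ipmp0 itself:

```shell
ipadm create-ip net2
ipadm create-ip net6
ipadm create-ipmp -i net2,net6 ipmp0    # no ipadm create-addr on ipmp0 itself
zonecfg -z zone1 "add net; set address=10.221.91.2; set physical=ipmp0; end; commit"
zoneadm -z zone1 boot                   # the zone plumbs 10.221.91.2 on ipmp0
```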
Lastly, why oh why is the gateway address in netstat -rn coming up as the address of the host?
Anyway, thanks for your help.
;) -
Default gateways and zones in a multihomed system
We do have some problems concerning default routes and zones in a multihomed system.
I found several posts in this forum, most of them referring to a document by meljr, but my feeling is that the paper is either not correct or not applicable to our situation. Perhaps somebody can give me a hint.
Let me sketch our test environment. We have a multihomed Solaris 10 system attached to three different DMZ's using three different network adapters. We set up two local zones with IP's of the DMZ's of adapter 1 and 2, leaving adapter 0 for the IP of the global zone.
Now we set up default routes to ensure that network traffic from the local zones is routed in the corresponding DMZ's. That makes three different default routes on the global zone. On startup of the local zones, netstat reports the expected default routes to the correct DMZ gateways inside each zone.
Now what happens... My ssh to the global zone sometimes breaks. When this happens, no pings are possible to the IP of the global zone. Meanwhile, pings from other machines in our network (even from different subnets) might produce replies, some don't. By now, I can't tell you if there's is anything deterministic about it... More interesting: the local zone connections aren't affected at all!
So we did some more testing. Binding an IP address to the DMZ interfaces where the zones are tied to makes no difference (we tried both, with or without dedicated addresses for the adapter in the global zone). So the setup we're using right now is made of 5 IP addresses.
IP1, subnet 1: adapter 0, global zone
IP2, subnet 2: adapter 1, global zone
IP3, subnet 2; adapter 1. local zone 1
IP4, subnet 3; adapter 2, global zone
IP5, subnet 3; adapter 2, local zone 2
In the global zone there are three default gateways defined, one in each DMZ subnet. Inside the local zones, at startup you'll find the corresponding gateway into the DMZ. Everything looks fine...
I opened five ssh connections to the different IP's. Now what happened... After approx. half an hour, the connections to two IPs of the global zone (adapter 0 and adapter 1) broke down, while the connections to all other IP's were still open. This behaviour is reproducible!
So perhaps anybody has an explanation for this behaviour. Or perhaps anybody can answer some questions:
1. How are the three default gateways handled? Is there still some kind of "round robin" implementation? How can I guarantee that network traffic from outside isn't routed inside the DMZ's without preventing the local zones from talking to each other (actually we only need to communicate on some ports, but the single IP-stack concept only gives us all or nothing...)?
2. If I do a ping from local zone 1 to the default gateway of local zone 2, this route is added as an additional default gateway inside local zone 1! So does this mean the routing decision is made only inside the global zone, not taking into account where the packet is sent from?
3. After all, how are the IP packets routed from the different zones and the global zone, and how are they routed back to calling systems from the various DMZ's and other networks?
The scenario seems to be covered by http://meljr.com/~meljr/Solaris10LocalZoneDefaultRoute.html, but configuring the machine like stated in the paper leaves me with the problems described.
I'd be happy for any helpful comment!
You can have multiple gateway entries in the defaultrouter file, but the default gateway for the global zone can be only one; you can specify different gateways for different zones.
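A hedged sketch of that per-zone gateway suggestion: on Solaris 10, the net resource in zonecfg takes a defaultrouter property for shared-IP zones (the addresses below are placeholders; verify the property exists on your release):

```shell
zonecfg -z zone1
select net address=10.0.1.10
set defaultrouter=10.0.1.1      # placeholder gateway in that zone's DMZ
end
commit
exit
```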
Using these per-zone gateways, you should be able to connect via the different networks! -
SunMC 3.6.1 Agent and zones
Hi folks,
I'm wondering how to handle zones with SunMC 3.6.1. We're planning to use zones in the future, mostly whole root zones. As far as I understood, I need to install the agent in the global zone and in the local zone afterwards. Is this correct?
Thanks in advance
1. How does the container manager link with SunMC? Does monitoring info go to the SunMC server?
SCM is one of the optional components you're prompted to add when you're installing SunMC Server and Agents. The way it works on the Server is that SCM adds an extra web interface, and it relies on SunMC for host lists and a place to store its data (i.e. the graphs shown in the SCM web interface are actually coming from the SunMC Oracle database).
On the Agent side, all that happens when you say "y"es to SCM during install is that it adds a hidden module to the Agent that collects container/zone-type data, and can do the bidding of the SCM web interface (i.e. make/destroy Zones, alter CPU pools and container shares etc)
Do you only have to install the container manager on the global zone?
Yes, it's only required for the SunMC Agent in each global zone. It's that Agent that can "see everything" about the containers and zones that are running.
Regards,
[email protected]
http://www.HalcyonInc.com -
LSMW and "Zone GL_EMPL-USERNAME" error
Hi all,
I built an LSMW project (type Idoc) to create Business Partners (under CRM 5.0). When I try to create a "person", I can't find any field in Structure relation (and Field Mapping) to fill the BP's "Username" field (under the "Identification" tab).
We tried to fill it via LSMW batch input, CATT and eCATT; all these attempts failed with a "Zone GL_EMPL-USERNAME (no input allowed)" error message.
Could you give me a hint to solve this?
Thanks in advance,
Message was edited by:
Damien Lardenais
Hi Damien,
I guess the reason is you are trying to upload a value into a field that was not recorded, or which is a gray (display-only, no input) field.
It's just a guess (because I faced this problem before).
Reward points to helpful answers.
Thanks
Naveen khan -
Hello,
I am attempting to set up a geographic cluster to failover an Informix application to our disaster recovery (BCP) site. I have a Sun Fire V440 in each location running the Solaris 10 08/07 update. The application is currently running on a Solaris 8 02/04 server and must continue to do so. The catch is that the server in the BCP site is also used as a QA server. My thought was to create, on the Solaris 10 server, two Solaris 8 containers, one for failover from the home office and the other used for QA. At the home office site, the server would run one Solaris 8 container. We are using EMC SRDF for replication and storage of the Informix database. The container on the home office server would failover to the BCP container on the server in the BCP site. My questions are: 1) Is this scenario possible, and 2) How would I configure the clustering on the servers? Should I be using the data services for Containers and for Informix? I have so far created one-node clusters in each site and was in the process of configuring the resource groups but was unsure how to proceed. Thanks for any help anyone can give.
I work for the Sun Cluster Geographic Edition (SCGE) team, so I hope I can give some definitive answers...
First, I'm slightly confused as to whether this is a single cluster with geographically split nodes or two single node clusters joined together with SCGE. From re-reading your posting it looks like it is the latter, which although far from optimal, is possible. The point to make is that any failure on the primary site is probably going to have to be treated as a disaster. You will need to decide whether the primary site node will be back up any time soon and if not, take-over the service on the remote (DR) site. Once you've taken over the service, the reverse re-sync is going to be quite expensive. If you'd had a local cluster at the primary site, then only site failures would have forced this decision.
Back to the configuration. You'll need to install single-node Solaris Clusters (3.2 01/09) at each site. You would then create an HA-container agent resource group/resource to hold your Solaris 8 brand zone. You'd then put your Informix database in this container. You'd do the same at the remote site. Your options for storage are raw DID devices or a file system. You can't use SVM with SRDF yet, and I don't think there is a supported way to map VxVM volumes into the HA-container (though I may be wrong). Personally, I'd use UFS on a raw DID (or VxVM) device in the global zone mounted with forcedirectio and map that into the HA-container. (http://docs.sun.com/app/docs/doc/820-5025/ciagbcbg?l=en&a=view)
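A hedged sketch of the vfstab line that suggestion implies, for a UFS filesystem on a raw DID device mounted forcedirectio (the DID device number and mount point are placeholders, not from the thread):

```shell
# /etc/vfstab entry (one line); d4 and /oradata are made-up examples.
/dev/did/dsk/d4s0  /dev/did/rdsk/d4s0  /oradata  ufs  2  no  forcedirectio
```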
I don't know off-hand whether the Informix agent will run with the HA-container agent and a Solaris 8 brand container. I'll ask a colleague.
If you need any more information, it might be more helpful to contact me directly at Sun. (First.Last)
Regards,
Tim
--- -
Solaris 10 6/06 ZFS and Zones, not quite there yet...
I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
essentially it says that ZFS should not be used for non-global zone root file systems.. I was hoping to do this, and make it easy, global zone root UFS, and another disk all ZFS where all non-global whole root zones would live.
One can only do so much with only 4 drives that require mirroring! (x4200's, not utilizing an array)
Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...
Dave
> I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
> http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
> essentially it says that ZFS should not be used for non-global zone root file systems..
Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
> Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...)
Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full root ZFS support was being targeted for update 4, not update 3.
Darren -
Recommended Patch Clusters and Zones
Good Afternoon,
Ran into a problem earlier this week and wanted to get other views on this. Our current configuration is as follows:
Global zone installed on Local Disk (ZFS)
5 - Non-global whole root zones installed on SAN disk (ZFS)
No Live Upgrade (Will eventually get to this)
Never have had any problems until I attempted to install the latest Patch Cluster because of Comms Suite 7. I shut down all of the non-global zones and brought the global zone to init S. I then started my Patch Cluster install. The Patch Cluster appears to start all of the zones up in an administrative mode for patching. The problem was that when it got to the kernel patch, 141414-10, it appeared to install in the global zone but none of the non-global zones were updated. The patching then stopped on 141414-10 (patch 109 of 155). I did finally get the patch cluster to install after some work and a support call.
My question is: is this the proper way to install the Patch Clusters? I've been told that you have to "mount" the zones manually, but wouldn't that defeat the purpose of being in single-user mode?
Any help is appreciated.
Doug
I tend to use LU to create a new BE and then patch that. Then activate the new BE and reboot.
Saves a lot of pain and gives a safe fall back option. -
Saposcol, Solaris 10 and zones
We are running a bunch of instances on Solaris 10 x86_64 in zones. Everything works great (including ZFS), but one issue remains:
saposcol always "sees" the full machine, meaning e.g. CPU usage is displayed not on a per-zone basis but only once for all zones.
Are there any plans on extending saposcol to support zone-specific data (such as IBM has recently done with AIX-micropartitioning and/or LPARS)? It would greatly improve resource management on those systems.
Markus
Yup, I do see them at the Unix level.
Filesystem size used avail capacity Mounted on
/ 20G 6.8G 13G 35% /
/archivos 1.9G 2.0M 1.9G 1% /archivos
/dev 20G 6.8G 13G 35% /dev
/dvds 39G 6.6G 32G 17% /dvds
/sapdb/PRD/db 962M 167M 737M 19% /sapdb/PRD/db
/sapdb/PRD/logbackups
4.9G 5.0M 4.9G 1% /sapdb/PRD/logbackups
/sapdb/data 1.9G 62M 1.8G 4% /sapdb/data
/sapdb/programs 962M 208M 696M 24% /sapdb/programs
/sapmnt/PRD 1.9G 686M 1.2G 36% /sapmnt/PRD
/usr/sap/PRD 7.9G 754M 7.1G 10% /usr/sap/PRD
/usr/sap/trans 1.9G 414M 1.5G 22% /usr/sap/trans
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
swap 31G 296K 31G 1% /etc/svc/volatile
mnttab 0K 0K 0K 0% /etc/mnttab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
20G 6.8G 13G 35% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
20G 6.8G 13G 35% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 31G 0K 31G 0% /tmp
swap 31G 56K 31G 1% /var/run
and saposcol is running with root user.
Message was edited by:
Daniel Esteban Rajmanovich -
We do a backup daily, via a script, of our zone using ufsdump. Sometimes the backup goes through each file system, and other times the entire process is repeated three times.
I have also checked that there is no instruction entry specified in the script to try again if the backup should fail for some reason. Any help will do.
It depends on what file systems you are backing up. Perhaps you have a backup from the root file system on down, which would include the zones. Or, say for example, you use /export/home for your zones and you backup /export/home on a regular basis; that job would also include the zones underneath.
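A sketch of what separate, non-overlapping dump jobs could look like (the raw device slices are placeholders). Since ufsdump works on one filesystem at a time and does not cross mount points, repetition usually comes from listing the same filesystem in more than one job:

```shell
ufsdump 0uf /dev/rmt/0n /dev/rdsk/c0t0d0s0   # / only, not nested mounts
ufsdump 0uf /dev/rmt/0n /dev/rdsk/c0t0d0s7   # /export/home, incl. any zonepaths there
```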
ufsdump is not zone-aware in the sense that it will read zonepath settings and treat them as separate filesystems. It might simply be that some of your jobs are redundant. -
I will post this one again in desperation; I have had a SUN support call open on this subject for some time now, but with no results.
If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions.
I attemped a restore of the server and then the client zone but it appears to corrupt my ZFS file systems.
The steps I have taken are listed below:
Built a server and created a zone, added a ZFS fileystem to this zone and installed the EBS 7.4 client software into the zone making the host server the EBS server.
Completed a backup.
Destroyed the zone and host server.
Installed the OS and re-created a zone with the same configuration.
Added the ZFS filesystem and made this available within the zone.
Installed EBS and carried out a complete restore.
Logged into the zone and installed the EBS client software then carried out a complete restore.
After a server reload this leaves the ZFS filesytem corrupt.
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-5E
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
p_1 UNAVAIL 0 0 0 insufficient replicas
mirror UNAVAIL 0 0 0 insufficient replicas
c0t8d0 FAULTED 0 0 0 corrupted data
c2t1d0 FAULTED 0 0 0 corrupted data
I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
Below is a full set of steps to recover a server using EBS7.4 that has zones installed and using ZFS:
Instructions On How To Restore A Server With A Zone Installed
Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
Create the zpool's and the zfs file systems that existed for both the global and non-global zones.
Carry out a restore using:
If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
cd /usr/sbin/nsr
Use scanner -B -im <device>
to get the ssid number and record number
scanner -B -im /dev/rmt/0hbn
cd /usr/sbin/nsr
Enter: ./mmrecov
You will be prompted for the SSID number followed by the file and record number.
All of this information is on the Bootstrap report.
After the index has been recovered:
Stop the backup daemons with: "/etc/rc2.d/S95networker stop"
Copy the original res file to res.org and then copy res.R to res.
Start the backup daemons with: "/etc/rc2.d/S95networker start"
Now run: nsrck -L7 to reconstruct the indexes.
You should now have your backup indexes intact and be able to perform standard restores.
If the system is using ZFS:
cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
To restore the whole system:
Shutdown any sub zones
cd /
Run "/usr/sbin/nsr/nsrmm -m" to mount the tape
Enter "recover"
At the Recover prompt enter: "force"
Now enter: "add *" (to restore the complete server; this will now list out all the files in the backup library selected for restore)
Now enter: "recover" to start the whole system recovery, and ensure the backup tape is loaded into the server.
If the system is using ZFS:
cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
Reboot the server
The non-global zone should now be bootable; use zoneadm -z <zonename> boot
start an X session onto the non-global zone and carry out a selective restore of all the ZFS file systems. -
folks,
A little history: we've been running cluster 3.2.x with failover zones (using the containers data service) where the zoneroot is installed on a failover zpool (using HAStoragePlus). It's worked OK, but could be better given the real problems surrounding the lack of agents that work in this config (we're mostly an Oracle shop). We've been using the joost manifests inside the zones, which are OK and have worked, but we wouldn't mind giving the Oracle data services a go - and escaping the more than a little painful patching process in the current setup...
we're started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials and the apps failing between them on 'node' failure and switchover. this way we'd actually be able to use the agents for oracle (DB, AS and EBS).
with the current cluster we create various ZFS volumes within the pool (such as oradata) and through the zone boot resource have it mounted where we want inside the zone (in this case $ORACLE_BASE/oradata) with the global zone having the mount point of /export/zfs/<instance>/oradata.
is there a way of achieving something like this with failover apps inside static zones? i know we can set the volume mountpoint to be what we want but we rather like having the various oracle zones all having a similar install (/app/oracle etc).
we haven't looked at zone clusters at this stage if for no other reason than time....
or is there a better way?
thanks muchly,
nelson
I must be missing something... any ideas what and where?
nelson
devsun012~> zpool import Zbob
devsun012~> zfs list|grep bob
Zbob 56.9G 15.5G 21K /export/zfs/bob
Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
devsun012~> zpool export Zbob
devsun012~> zoneadm -z bob list -v
ID NAME STATUS PATH BRAND IP
1 bob running /opt/zones/bob native shared
devsun013~> zoneadm -z bob list -v
ID NAME STATUS PATH BRAND IP
16 bob running /opt/zones/bob native shared
devsun012~> clrt list|egrep 'oracle_|HA'
SUNW.HAStoragePlus:6
SUNW.oracle_server:6
SUNW.oracle_listener:5
devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
root@devsun012 > bob-has-rs
clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
clrs: (C891200) Failed to create resource "bob-has-rs". -
I am having an issue after creating a zone from the Solaris 9 sample container. When editing a file using the vi editor, it shows only two thirds of the screen. Also, when editing a file, the edits are not saved; very strange.
Thanks.
Hmmm...
From my WinXP desktop I just ssh'd into one of my local sparse zones (SPARC Sol10u6) using PuTTY 0.60 (Configuration: Connection->Data->Terminal Details- Terminal-type string: xterm) and set my TERM=vt100. No problem with vi. Guess that's not it, eh? It certainly sounds like a TERM problem... until you throw the "can't write the file" into the mix.
Are you root in your zone? Can you create a file in your zone in that same directory (touch foo; ls -l foo)?
Does this work in other zones or is this your first zone? What's your zone config look like?