Zone private area

I use Sol-10 b54. At present a non-global zone can share some directories with the
global zone with read-only permission.
I would like to know:
1. Will a read-write version of such sharing be offered?
2. What about sharing directories not only with the global zone but with other
non-global zones as well?
3. Right now I can only create a non-global zone with a similar set of packages as the
global one. I don't like that. Are there any workaround ideas? I'd like
to see the possibility of creating a non-global zone with a predefined package set.
4. Is ZFS the answer to all the aforementioned questions?

The "inherit-pkg-dir" resource can be used to specify a read-only file system
that is shared or exported by the global zone into another zone.
1. Will a read-write version of such sharing be offered?
You can do that today by defining an "fs" resource of type "lofs". But be aware
that by doing so you're setting up a channel through which the local zone can potentially
affect the global zone (for example, by exhausting the space on the file system).
2. What about sharing directories not only with the global zone but with other non-global zones as well?
Yes, the global admin can set up such a file system, again by creating "fs"
resources of type "lofs" in one or more zones. But again, it does mean that
one zone can potentially affect another zone if such a file system exists in
both read-write.
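As a minimal sketch of such a read-write "lofs" mount via zonecfg(1M) (the zone name "myzone" and the directory /export/shared are placeholders):

```
# In the global zone: loopback-mount /export/shared read-write
# at /shared inside the zone "myzone"
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/shared
zonecfg:myzone:fs> set special=/export/shared
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```

The mount becomes visible in the zone the next time it boots.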
With respect to #3, we're looking at methods of specifying a subset of
packages to copy into a zone when it's installed. For now, you can specify
some of the affected directories, such as /opt, as "inherit-pkg-dir" resources, or
in some cases you can use pkgrm(1M) if the packages came from an
unbundled product.
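For example (the zone name "myzone" and package name "ABCDwebsrv" here are hypothetical), an unbundled package can be removed from inside a running zone:

```
# Log into the non-global zone and remove the unbundled package
zlogin myzone pkgrm ABCDwebsrv
```

pkgrm prompts for confirmation before removing the package.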
As for ZFS and zones, stay tuned as there should be some very nice
synergy between these two technologies.

Similar Messages

  • Can you determine which secure zone you are logged into?

    Hi all,
Is there a tag that tells you which secure zone you are logged into and displays it on the screen? Or even JavaScript?
    Thanks

    Thanks Liam.
I kind of worked out that you need to pull the page id out of the URL and display it that way, but it fails if you have more than one page (you could check across all pages that are part of the secure zone, I guess).
    Any other ideas would be appreciated.
    Anyway thanks again.

  • Are PL/SQL Package Body Constants in Shared Area or Private Area

Based on this, it is not clear to me whether PL/SQL package body constants are stored in the shared area or the private area.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/memory.htm
    "PL/SQL Program Units and the Shared Pool
    Oracle Database processes PL/SQL program units (procedures, functions, packages, anonymous blocks, and database triggers) much the same way it processes individual SQL statements. Oracle Database allocates a shared area to hold the parsed, compiled form of a program unit. Oracle Database allocates a private area to hold values specific to the session that runs the program unit, including local, global, and package variables (also known as package instantiation) and buffers for executing SQL. If more than one user runs the same program unit, then a single, shared area is used by all users, while each user maintains a separate copy of his or her private SQL area, holding values specific to his or her session.
    Individual SQL statements contained within a PL/SQL program unit are processed as described in the previous sections. Despite their origins within a PL/SQL program unit, these SQL statements use a shared area to hold their parsed representations and a private area for each session that runs the statement."
    I am also curious what are the fine grained differences from a memory and performance perspective (multi-session) for the two examples below. Is one more efficient?
Example 1.
create or replace package body application_util
as
  c_create_metadata constant varchar2(6000) := ...
  procedure process_xxx
  as
  begin
    null;
  end process_xxx;
end application_util;
vs.
Example 2.
create or replace package body application_util
as
  procedure process_xxx
  as
    c_create_metadata constant varchar2(6000) := ...
  begin
    null;
  end process_xxx;
end application_util;

    >
What I am asking is fairly granular, so here it is again; let's assume the latest version of Oracle.
In a general sense, is the runtime process able to manage memory more effectively in either case? Is one even slightly more performant, etc.?
i.e. does example 1 have different memory management characteristics than example 2?
Specifically, I am talking about the memory allocation and deallocation for the constant varchar2(6000).
    Ok, a compiler's purpose is basically to create an optimized execution path from source code.
    The constant varchar2(6000) := would exist somewhere in the parse tree/execution path (this is stored in the shared area?).
I guess among the things I'm after are:
1) does each session use space for an additional varchar2(6000), or does the runtime processor simply point to the constant string in the parse tree (the compiled form, which is shared)?
2) if each session requires allocation of space for an additional varchar2(6000), then for example 1 and example 2, at what point does the constant varchar allocation take place and when is the memory deallocated?
Basically, does defining the constant within the procedure have different memory characteristics than defining it at the package body level?
    >
    Each 'block' or 'subprogram' has a different scope. So the 'constant' defined in your example1 is 'different' (and has a different scope) than the 'constant' defined in example2.
    Those are two DIFFERENT objects. The value of the 'constant' is NOT assigned until control passes to that block.
    See the PL/SQL Language doc
    http://docs.oracle.com/cd/E14072_01/appdev.112/e10472/fundamentals.htm#BEIJHGDF
    >
    Initial Values of Variables and Constants
    In a variable declaration, the initial value is optional (the default is NULL). In a constant declaration, the initial value is required (and the constant can never have a different value).
    The initial value is assigned to the variable or constant every time control passes to the block or subprogram that contains the declaration. If the declaration is in a package specification, the initial value is assigned to the variable or constant once for each session (whether the variable or constant is public or private).
    >
    Perhaps this example code will show you why, especially for the second example, a 'constant' is not necessarily CONSTANT. ;)
    Here is the package spec and body
    create or replace package pk_test as
      spec_user varchar2(6000);
      spec_constant varchar2(6000) := 'dummy constant';
      spec_constant1 constant varchar2(6000) := 'first constant';
      spec_constant2 constant varchar2(6000) := 'this is the second constant';
      spec_constant3 constant varchar2(6000) := spec_constant;
      procedure process_xxx;
      procedure change_constant;
    end pk_test;
    create or replace package body pk_test as
    procedure process_xxx
    as
      c_create_metadata constant varchar2(6000) := spec_constant;
    begin
      dbms_output.put_line('constant value is [' || c_create_metadata || '].');
    end process_xxx;
    procedure change_constant
    as
    begin
      spec_constant := spec_constant2;
    end change_constant;
    begin
      dbms_output.enable;
      select user into spec_user from dual;
      spec_constant := 'User is ' || spec_user || '.';
end pk_test;
The package init code sets the value of a package variable (that is NOT a constant) based on the session USER (last code line in the package body).
The 'process_xxx' procedure gets the value of its 'constant' from that 'non constant' package variable.
  c_create_metadata constant varchar2(6000) := spec_constant;
The 'change_constant' procedure changes the value of the package variable used as the source of the 'process_xxx' constant.
    Now the fun part.
    execute the 'process_xxx' procedure as user SCOTT.
    SQL> exec pk_test.process_xxx;
constant value is [User is SCOTT.]
Now execute 'process_xxx' as another user:
    SQL> exec pk_test.process_xxx;
constant value is [User is HR.]
Now exec the 'change_constant' procedure.
    Now exec the 'process_xxx' procedure as user SCOTT again.
    SQL> exec pk_test.process_xxx;
constant value is [this is the second constant]
That 'constant' defined in the 'process_xxx' procedure IS NOT CONSTANT; it now has a DIFFERENT VALUE.
    If you exec the procedure as user HR it will still show the HR constant value.
    That should convince you that each session has its own set of 'constant' values and so does each block.
    Actually the bigger memory issue is the one you didn't ask about: varchar2(6000)
Because you declared it with a size of 6,000 (which is 2,000 or more), the actual memory allocation is not done until RUN TIME, and it will only use the actual amount of memory needed.
That is, it WILL NOT pre-allocate 6000 bytes. See the same doc:
    http://docs.oracle.com/cd/E14072_01/appdev.112/e10472/datatypes.htm#CJAEDAEA
    >
    Memory Allocation for Character Variables
    For a CHAR variable, or for a VARCHAR2 variable whose maximum size is less than 2,000 bytes, PL/SQL allocates enough memory for the maximum size at compile time. For a VARCHAR2 whose maximum size is 2,000 bytes or more, PL/SQL allocates enough memory to store the actual value at run time. In this way, PL/SQL optimizes smaller VARCHAR2 variables for performance and larger ones for efficient memory use.
    For example, if you assign the same 500-byte value to VARCHAR2(1999 BYTE) and VARCHAR2(2000 BYTE) variables, PL/SQL allocates 1999 bytes for the former variable at compile time and 500 bytes for the latter variable at run time.
    >
    So when you have variables and don't know how much space is really needed do NOT do this:
    myVar1 VARCHAR2(1000);
    myVar2 VARCHAR2(1000);
myVar3 VARCHAR2(1000);
The above WILL allocate 3000 bytes of expensive memory even if those variables are NEVER used.
This may look worse but, as the doc states, it won't really allocate anything if those variables are not used. And when they are used, only what is needed will be allocated.
    myVar1 VARCHAR2(2000);
    myVar2 VARCHAR2(2000);
    myVar3 VARCHAR2(2000);

  • Private Area not Created in cFolders

    I assigned a collaboration folder (competitive scenario) while creating a Bid Invitation. This creates a public area and I added a document. The bid invitation is then published.
Now when a bidder creates a bid, the private area is not created in cFolders.
    Am I missing some specific settings here or is this a roles issue?
    Please let me know what roles have to be assigned to the bidder so that he can create a new folders/documents in private area.

    Hi Sreenivas,
Once you create a new collaboration in the competitive scenario, a public area is created by default.
To create a private area, click on that collaboration, then under 'Work Areas' create a new work area. This will become the private area. Then assign authorizations for this private area to one of the vendors.
    Reward points if useful.
    Regards,
    Laxminarasimha

  • Private area in CFolders

    Hi all,
    We are implementing Bidding with cFolders. We are using the competitive scenario.
    When a new collaboration is created, a public area gets created.
When does the system create the private area for the bidders? Is there any customizing setting? We are not able to see private areas for the bidders.
    Thanks and Rgds
    Venkat

When a bidder submits a bid, the system creates a private folder for them.
    Regards
    Kathirvel

  • Grouping courses within categories in private area

    Hi,
    I searched through past posts and through the iTunes U manual but didn't find anything about this. It is a really simple question though, so I am sure there is some way to do it.
    From the main welcome page in our private area, I would like to create links to three separate program areas.
    As it is now, it seems that the only option is to add courses to the welcome page. I would like to have categories on the welcome page. When you select a category, then inside the category page, you would find links to the courses, and inside the courses, of course you would find the audio/video content.
    So in short, what I have now is welcome>course>content
    What I would like is welcome>program>course>content
    I only see options for adding more courses to the welcome page, not for adding categories/program options.
    Any help?
    Thanks in advance.

    When you click on the Create Page drop down menu, choose "Default Welcome" instead of "Default Course."

  • Creating zones that are on different subnets

    Hi,
    I am running Solaris 10 11/06 with zones. I had no issues creating zones with the same default router as the global zone. However, I want to create zones that will live in the DMZ, on a different subnet. I have looked around and the only thing I could find was to add the new subnet in the defaultrouter on the global zone.
    192.168.69.1 default route for global and zone 1
    10.10.6.1 default route for zone 2
    cat /etc/defaultrouters
    192.168.69.1
    10.10.6.1
    I did that, rebooted, and created the new zone. The new zone did not get the default route set. It was also not set in the global. The only way I can get this to work, is run :
    route add default 10.10.6.1
    I have created an init script to add this route at bootup on the global zone.
Is this the right way to handle multiple subnets on a container host? Do I need to add the network in /etc/netmasks?
    Thanks,
    David

    If you are stuck with update 3, this is a workaround we did on our system.
create a startup script /etc/init.d/zone-defaultroute (interface:x, zone2_ipaddress, netmask, and ip_router_zone2 are placeholders for your own values):
    #!/usr/bin/sh
    #######START######
    /usr/sbin/ifconfig interface:x addif zone2_ipaddress netmask up
    /usr/sbin/route add default ip_router_zone2
    /usr/sbin/ifconfig interface:x removeif zone2_ipaddress
    /usr/sbin/zoneadm -z zone2 boot
    #######END########
    link the file to rc3.d
    ln -s /etc/init.d/zone-defaultroute /etc/rc3.d/S90zonedefaultroute
    Edited by: almazh on Oct 16, 2007 2:27 PM
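On the /etc/netmasks question above: since 10.10.6.1/24 uses a mask other than the classful default for a 10.x network, it does need an entry there so the system can resolve the subnet. A sketch using the example networks from the question:

```
# /etc/netmasks  (format: network  netmask)
10.10.6.0    255.255.255.0
```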

  • Special Economic Zone Customers- ARE-1 form...

    Hi gurus,
For an SEZ customer (who is in India), no sales tax is charged, excise duty is foregone, and an ARE-1 has to be created. Can anybody guide me on how to create an ARE-1 for an Indian customer?
    Thanks,
    Raja

Refer to the following SAP wiki link, contributed by me, for this:
- [ A.R.E 3- Deemed Export |http://wiki.sdn.sap.com/wiki/display/ERPLO/Deemed+Export]
FYI, Special Economic Zone customers are catered for through A.R.E. 3.
Consult and confirm with your client's business expert on this.
    Regards
    JP
    As I know & practised:
    |- [ FORM A.R.E.-1 |http://www.cbec.gov.in/excise/formidx.htm]|Application for removal of excisable goods for export by air/sea/post/land|
    |- [ FORM A.R.E.-3|http://www.cbec.gov.in/excise/formidx.htm]|Application for removal of goods from a factory or a warehouse to another warehouse  |
    Edited by: Jyoti Prakash on Feb 9, 2012 1:05 PM

  • Howto: Zones in private subnets using ipfilter's NAT and Port forwarding

    This setup supports the following features:
    * Requires 1 Network interface total.
    * Supports 1 or more public ips.
    * Allows Zone to Zone private network traffic.
    * Allows internet access from the global zones.
    * Allows direct (via ipfilter) internet access to ports in non-global zones.
(change the networks to suit your needs; the number of public and private IPs was reduced to simplify this doc)
    Network setup:
    iprb0 65.38.103.1/24
    defaultrouter 65.38.103.254
    iprb0:1 192.168.1.1/24 (in global zone)
    Create a zone on iprb0 with an ip of 192.168.1.2
    ### Example /etc/ipf/ipnat.conf
    # forward from a public port to a private zone port
    rdr iprb0 65.38.103.1/32 port 2222 -> 192.168.1.2 port 22
    # force outbound zone traffic thru a certain ip address
    # required for mail servers because of reverse lookup
    map iprb0 192.168.1.2/32 -> 65.38.103.1/32 proxy port ftp ftp/tcp
    map iprb0 192.168.1.2/32 -> 65.38.103.1/32 portmap tcp/udp auto
    map iprb0 192.168.1.2/32 -> 65.38.103.1
    # allow any 192.168.1.x zone to use the internet
    map iprb0 192.168.1.0/24 -> 0/32 proxy port ftp ftp/tcp
    map iprb0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto
map iprb0 192.168.1.0/24 -> 0/32
For testing purposes you can leave /etc/ipf/ipf.conf empty.
Be aware that you must "svcadm disable ipfilter; svcadm enable ipfilter" to reload the rules; the rules stay loaded if the service is merely disabled (bug).
    Zones can't modify their routes and inherit the default routes of the global zone. Because of this we have to trick the non-global zones into using a router that doesn't exist.
    Create /etc/init.d/zone_route_hack
    Link this file to /etc/rc3.d/S99zone_route_hack.
#!/bin/sh
    # based on information found at
    # http://blogs.sun.com/roller/page/edp?entry=using_branded_zones_on_a
    # http://forum.sun.com/jive/thread.jspa?threadID=75669&messageID=275741
    fake_router=192.168.1.254
    public_net=65.38.103.0
    router=`netstat -rn | grep default | grep -v " $fake_router " | nawk '{print $2}'`
# send some data to the real network router so we look up its ARP address
    ping -sn $router 1 1 >/dev/null
    # record the arp address of the real router
    router_arp=`arp $router | nawk '{print $4}'`
    # delete any existing arp address entry for our fake private subnet router
    arp -d $fake_router >/dev/null
# assign the real router's ARP address to our fake private subnet router
    arp -s $fake_router $router_arp
    # route our private subnet through our fake private subnet router
    route add default $fake_router
    # Can't create this route until the zone/interface are loaded
    # Adjust this based on your hardware and number of zones
    sleep 300
    # Duplicate this line for every non-global zone with a private ip that
    # will have ipfilter rdr (redirects) pointing to it
route add -net $public_net 192.168.1.2 -iface
Now we have both public and private IP addresses on our one iprb0 interface. If we'd really like our private zone network to be private, we don't want any non-NAT'ed 192.168.1.x traffic leaving the interface. ipfilter can't block traffic between zones because they use loopbacks, so we can simply block outbound 192.168.1.x traffic and the zones can still talk to each other.
    The following /etc/ipf/ipf.conf defaults to deny.
    # ipf.conf
    # IP Filter rules to be loaded during startup
    # See ipf(4) manpage for more information on
    # IP Filter rules syntax.
    # INCOMING DEFAULT DENY
    block in all
    block return-rst in proto tcp all
    # two open ports one of which is redirected in ipnat.conf
    pass in quick on iprb0 proto tcp from any to any port = 22 flags S keep state keep frags
    pass in quick on iprb0 proto tcp from any to any port = 2222 flags S keep state keep frags
    # INCOMING PING
    pass in quick on iprb0 proto icmp from any to 65.38.103.0/24 icmp-type 8 keep state
    # INCOMING GLOBAL ZONE UNIX TRACEROUTE FIX PART 1
    #pass in quick on iprb0 proto udp from any to 65.38.103.0/24 keep state
    # OUTGOING RULES
    block out all
    # ALL INTERNAL TRAFFIC STAYS INTERNAL (Zones use non-filtered loopback)
    # remove/edit as needed to actually talk to local private physical networks
    block out quick from any to 192.168.0.0/16
    block out quick from any to 172.16.0.0/12
    block out quick from any to 10.0.0.0/8
    block out quick from any to 0.0.0.0/8
    block out quick from any to 127.0.0.0/8
    block out quick from any to 169.254.0.0/16
    block out quick from any to 192.0.2.0/24
    block out quick from any to 204.152.64.0/23
    block out quick from any to 224.0.0.0/3
    # Allow traffic out the public interface on the public address
    pass out quick on iprb0 from 65.38.103.1/32 to any flags S keep state keep frags
    # OUTGOING PING
    pass out quick on iprb0 proto icmp from 65.38.103.1/32 to any icmp-type 8 keep state
    # Allow traffic out the public interface on the private address (needs nat and router arp hack)
    pass out quick on iprb0 from 192.168.1.0/24 to any flags S keep state keep frags
    # OUTGOING PING
    pass out quick on iprb0 proto icmp from 192.168.1.0/24 to any icmp-type 8 keep state
    # INCOMING TRACEROUTE FIX PART 2
#pass out quick on iprb0 proto icmp from 65.38.103.1/32 to any icmp-type 3 keep state
If you want incoming and outgoing internet access in your zones, it is easier to just give them public IPs and set up a firewall in the global zone. If you have a limited number of public IP addresses (I'm setting up a colocated 1U server) then you might take this approach. One of the best things about doing things this way is that any software configured in the non-global zones will never be configured to listen on an IP address that might change if you change public IPs.

    Instead of using the script as a legacy_run script, set it up in SMF.
    First create the file /var/svc/manifest/system/ip-route-hack.xml with
    the following
    ---Start---
    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM
    "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <!--
    ident "@(#)ip-route-hack.xml 1.0 09/21/06"
    -->
    <service_bundle type='manifest' name='NATtrans:ip-route-hack'>
    <service
    name='system/ip-route-hack'
    type='service'
    version='1'>
    <create_default_instance enabled='true' />
    <single_instance />
    <dependency
    name='physical'
    grouping='require_all'
    type='service'
    restart_on='none'>
    <service_fmri value='svc:/network/physical:default' />
    </dependency>
    <dependency
    name='loopback'
    grouping='require_all'
    type='service'
    restart_on='none'>
    <service_fmri value='svc:/network/loopback:default' />
    </dependency>
    <exec_method
    type='method'
    name='start'
    exec='/lib/svc/method/svc-ip-route-hack start'
    timeout_seconds='0' />
    <property_group name='startd' type='framework'>
    <propval name='duration' type='astring'
    value='transient' />
    </property_group>
    <stability value='Unstable' />
    <template>
    <common_name>
    <loctext xml:lang='C'>
    Hack to allow zone to NAT translate.
    </loctext>
    </common_name>
    <documentation>
    <manpage
    title='zones'
    section='1M'
    manpath='/usr/share/man' />
    </documentation>
    </template>
    </service>
    </service_bundle>
    ---End---
then modify /var/svc/manifest/system/zones.xml and add the following
dependency
    ---Start---
    <dependency
    name='inet-ip-route-hack'
    type='service'
    grouping='require_all'
    restart_on='none'>
    <service_fmri value='svc:/system/ip-route-hack' />
    </dependency>
    ---End---
    Finally create the file /lib/svc/method/svc-ip-route-hack with the
    contents of S99zone_route_hack, minus the sleep timer (perms 0755). Run
    'svccfg import /var/svc/manifest/system/ip-route-hack.xml' and 'svccfg
    import /var/svc/manifest/system/zones.xml'.
    This will guarantee that ip-route-hack is run before zones are started,
    but after the interfaces are brought on line. It is worth noting that
    zones.xml may get overwritten during a patch, so if it suddenly stops
    working, that could be why.

  • Time zones being re-written when files are processed from referenced files

    I'm having a problem where photos with time zones set are being re-written automatically to whatever time zone my mac is currently set at when Aperture processes them for viewing regardless of the time zone they were set to. This happens as the photos load for viewing (i.e. not on the browser thumbnail view). It does this one at a time and eats up processing time. This seems to only happen (I think) for referenced files stored on my external drive and not those stored in the Aperture library. If I re-set the photos back to the proper time zones using Batch Change, it still happens next time it has to load the photo into memory, again "processing" and again the time zone reset to whatever time zone I'm now in.
    Here's the exact situation:
    - I've imported several photosets from a Canon 50D. The camera was set to the time zone of the country in question. Let's use India at GMT +5:30 as an example.
    - When I import the photos, I also use the time zone presets, setting both the Camera Time and the Actual Time to GMT +5:30.
    - I'm importing the photos as referenced files, storing them on an external hard drive.
    - So when I first view the photos, the time and date show correct. Again using a real example IMG_0001 shows the date 6/10/10 1:43:31 PM GMT +05:30.
    - Now I close Aperture and re-open.
    - I open the folder and view the photo. The "loading" message appears at the time and the "processing" message appears on the bottom. (and during this unnecessary processing, it is very slow flipping to the next photo, a serious annoyance in browsing).
    - After processing, the photo IMG_0001 now shows the date as 6/10/10 1:43:31 PM GMT +03:00 (the time zone I'm currently in and the one my Mac is currently set to)
    - If I close it again and re-set my Mac to say I'm in London at GMT, and re-open, the photo is re-set instead to GMT.
    - I can re-set all the dates to the proper time zone using the Batch Change where I again set both the Camera Time and Actual Time to the correct time zone. But when the photo re-loads, it again re-processes and goes back to the date of the Mac. So changes don't seem to stick.
    - This appears to happen only to files that are referenced and stored on my external drive, not those stored in the Aperture library.
    Has anyone had this problem? Is it a bug or some hidden setting I'm missing? Is there some way to tell Aperture "don't change the dates, ever, unless I do it via batch change"? The problem turns into a major inconvenience in reviewing and editing photos with it taking the extra processing time every photo and messing around the order.

Maybe you should also try this question in the dedicated Adobe Illustrator forum; not many in here (including me) have much experience with Illustrator or Live Trace, I'm afraid.

  • Blocking Outbound Calls by Area Code and Time Zone

    I am looking to block agents from making outbound calls to certain time zones by area code.  Can't quite get my head around how I can use route filters with time schedules.  Any help would be appreciated.

    Create the route patterns for those area codes and then configure one which allows the calls, and another one that blocks the calls, use TOD to enable/disable them as required.
    HTH
    java
    if this helps, please rate
    www.cisco.com/go/pdihelpdesk

  • Are zones supported with scalable services?

    Howdy,
    Is it possible to use scalable services with SUNW.apache in a non-global zone? The concepts guide seems to indicate that it's possible to combine scalable services and zones, but I don't see any mention of this in the Apache data services guide. I tried to configure an Apache resource in a scalable resource group for the heck of it, but it bombs out:
    $ clresource create --verbose -g apache-rg -t SUNW.apache -p Port_list=80/tcp \
    -p Scalable=true -p bin_dir=/usr/apache2/bin apache-res
    clresource: (C189917) VALIDATE on resource apache-res, resource group apache-rg, exited with non-zero exit status.
    clresource: (C720144) Validation of resource apache-res in resource group apache-rg on node snode2:zone1 failed.
    clresource: (C891200) Failed to create resource "apache-res".
The zones (there are two zones named zone1, one running on each node) are up and operational, and I verified that the version of Apache in /usr/apache2/bin starts and stops correctly. If scalable services are supported, would I need to do anything special with the zones' network configuration in zonecfg?
    Thanks for any insight,
    - Ryan

Yes, it is possible. Unfortunately you did not list the complete steps you took, e.g. how did you create the scalable apache-rg? Did you create a failover RG with the scalable IP address?
    Let me give you an example that works for me:
    # clrg create -n node-a:zone1,node-b:zone1 shared-ip-rg
    # clressharedaddress create -g shared-ip-rg scalable-ip
    # clrg online -eM shared-ip-rg
    # clrg create -S -n node-a:zone1,node-b:zone1 apache-rg
    # clresource create -g apache-rg -t SUNW.apache -p resource_dependencies=scalable-ip
    -p Port_list=80/tcp -p scalable=true -p bin_dir=/usr/apache2/bin apache-rs
    # clrg online -eM apache-rg
You need to check /var/adm/messages on both nodes in order to find out why validation failed for you.
    Greets
    Thorsten

  • Can't delete primary zone in DNS after moving the server

    Woe is me!
Our Mac mini was hosted at a colo site and working fine. No firewall in front of the machine, so we turned on the server firewall and only allowed mail, web, ftp, and a couple of other services. This worked great using our external public DNS wired to our domain names and public fixed IP address. Later, we got VPN up and running (the trick was to create a second, local IP address for the ethernet port), but this also required us to turn on the server's DNS to create a split-brain DNS server.
    Everything was working swimmingly... and then we had a hard drive crash. Since we were thinking about moving the server onsite anyway (our POS system was accessed through the VPN, but it could be slow and made our tasting room dependent on Internet access in order to run the POS), we ordered Comcast business class internet with a fixed IP address.
We updated the external public DNS to the new public fixed IP. Rather than plug the mini directly into the Comcast router (which is in pass-through mode), we elected to put an AirPort Extreme in front of it, mainly so we could get all of the POS computers on the same local network without using the mini as a DHCP/NAT router. We created a DHCP reservation on the Extreme so that the mini had a fixed local IP address. We port-forwarded everything we wanted to expose to the Internet. Email started to work again. However, web services and VPN are nada.
    This being Snow Leopard Server and having spent literally hours debugging DNS issues when we first got the server, I knew it wouldn't be straightforward. And it hasn't been. Even changing the IP address of the server has been a chore.
    We ran "sudo changeip <old IP address> <new IP address>".
    Then we ran "sudo changeip -checkhostname" and received:
    "$ sudo changeip -checkhostname
    Primary address     = 10.0.8.2 <new static internal IP address>
    Current HostName    = <servername>.<domainname>.com
    The DNS hostname is not available, please repair DNS and re-run this tool.
    dirserv:success = "success""
    Oh no, the black pit of death.
    Even though I tried to modify the machine record in the local DNS to reflect the new internal static IP address, Nada.
    So, looking back on my previous research from Mr Hoffman and others, I stopped the DNS service, and I deleted the primary zone and reverse lookups in order to rebuild them from scratch. Except that no matter what I do, I can't delete the primary zone - it comes back like Dracula (even though the reverse zone and all of the zone records are gone). I tried rebuilding everything using the undeletable zone, but after a few services (saved each one separately), they would suddenly disappear.
    I am leery of messing with the DNS files on the server as I don't want to hose up Server Admin (my command line skills are rudimentary and slow). I have so much installed on the machine now that I am concerned about someone saying "reinstall".
    Help!
    Related to this, it is not clear to me which IP address you should use for the sites in web services. The internal IP? The public IP? I thought Apache cared about the external IP address. And I think Apache is hosed at the moment due to my DNS troubles anyway.
    Thanks in advance!
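    For reference, the forward zone being rebuilt here boils down to a BIND zone file along these lines (a minimal sketch with placeholder names and addresses; on Snow Leopard Server, Server Admin ultimately writes BIND files under /var/named):

```zone
; Minimal forward zone sketch for the internal DNS (placeholder names)
$TTL 3600
example.com.    IN SOA  server.example.com. admin.example.com. (
                    2011010101 ; serial
                    3600       ; refresh
                    900        ; retry
                    1209600    ; expire
                    3600 )     ; negative-cache TTL
                IN NS   server.example.com.
server          IN A    10.0.8.2
```

    A matching reverse zone must map 2.8.0.10.in-addr.arpa back to server.example.com.; `sudo changeip -checkhostname` needs the forward and reverse lookups to agree before it will report success.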

    Morris Zwick wrote:
    And does anyone know which IP you enter for your sites in the web service? The public static IP or the internal private static IP?
    For the external DNS server, I am sure you have already deduced that it should be the static IP issued to you by Comcast; this will be forwarded by your router to your server.
    For your internal DNS server you could use either the internal LAN IP or the external IP, although the latter might be affected by your firewall, so you will need to test this.
    For the Web Server service in Server Admin, if you're only running a single website you could avoid the issue by just using the wildcard entry, which will respond on any IP address: an empty host name and an IP address of *.
    In fact, you don't have to specify an IP address; you could just use the hostname. The server will then listen to traffic arriving at any of its IP addresses, and as long as the URL that was requested includes the hostname you define for the site, it will get responded to. So, as an example, if you have two websites you want to serve:
    www.example.com
    site2.example.com
    then as long as both have their site IP address set to * (asterisk), both should work as separate sites for traffic addressed to either the LAN or WAN IP address of the server.
    You will still need a second IP address on the server to enable VPN; you could use a USB Ethernet adapter for that. Port forwarding for VPN is not as simple as for other traffic, because VPN protocols use packet types beyond standard TCP and UDP (PPTP uses GRE, and L2TP/IPsec uses ESP). Routers that support 'VPN Passthrough' are specifically designed to accommodate this, but I don't know if the AirPort Extreme does. I have also found that PPTP copes better with this sort of setup than L2TP, although PPTP is generally regarded as less secure.
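    Under the hood, the wildcard setup described above corresponds to Apache name-based virtual hosts along these lines (a hedged sketch with placeholder site names and paths; Server Admin generates the actual configuration files, and NameVirtualHost applies to the Apache 2.2 that ships with Snow Leopard):

```apache
# Name-based virtual hosts: both listen on every address (*:80);
# Apache picks the site by the Host: header in the request.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/Library/WebServer/Documents/www"
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot "/Library/WebServer/Documents/site2"
</VirtualHost>
```

    With this shape, the same pair of vhosts answers whether the request arrived via the LAN IP or the forwarded WAN IP, which is why no per-site IP address is needed.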

  • Difference between public void, private void and public string

    Hi everyone,
    My 1st question is: how do you know when to use public void, private void, and public String? I am mightily confused by this.
    2ndly, can anybody explain the following code snippet to me:
    Traceback B0; // the starting point of Traceback

    // Traceback objects
    abstract class Traceback {
      int i, j;                     // absolute coordinates
    }

    // Traceback2 objects for simple gap costs
    class Traceback2 extends Traceback {
      public Traceback2(int i, int j)
      { this.i = i; this.j = j; }
    }

    And using the code above, is the following allowed:
    B[0] = new Traceback2(i-1, 0);
    Any replies much appreciated. Thank you.

    1)
    public and private are access modifiers;
    void and String are return type declarations.
    2)
    It's called "inheritance": because Traceback2 extends Traceback, a variable (or array slot) of type Traceback may hold a Traceback2 instance. It's "bad design" as well.
    You should read the tutorials, you know?
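    To make the two points above concrete, here is a minimal sketch (the class names Example, Shape, and Point are made up for illustration): the modifier controls who may call a member, the return type controls what the caller gets back, and the subclass assignment at the end is the same pattern as the Traceback2 line in the question.

```java
// Access modifiers vs. return types, in one small file.
class Example {
    private int counter = 0;          // private: only code inside Example can touch it

    public void increment() {         // public void: callable from anywhere, returns nothing
        counter++;
    }

    public String describe() {        // public String: callable from anywhere, returns a String
        return "counter=" + counter;
    }
}

// Inheritance: a superclass-typed variable may hold a subclass instance.
abstract class Shape {
    int x, y;                         // absolute coordinates
}

class Point extends Shape {
    Point(int x, int y) { this.x = x; this.y = y; }
}
```

    So `Shape s = new Point(1, 2);` compiles fine, which is exactly why `B[0] = new Traceback2(i-1, 0);` is allowed when B is an array of Traceback.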

  • How to delete a shortcut favorite from the browser in the WORK Area

    Hiho,
    creating a shortcut favorite icon of a browser website in the work area shows a symbol to use this shortcut. In the private area I can delete this favorite symbol by tapping longer on the icon, and a trashcan shows up, so I can delete the icon (and the browser favorite from the private area). Not so in the work (business) part of the BlackBerry; there is no possibility to delete this icon. I can't find a policy on the BES10 server or something on the Z10. Any ideas how I can remove the icon?
    Greetings... Ifrani

    Hi Ifrani!
    I've the same problem on my Z10!
    CommanderApollo
