Creating a logical hostname in Sun Cluster

Can someone tell me what exactly a logical hostname in Sun Cluster means?
When registering a logical hostname resource in a failover resource group, what exactly do I need to specify?
For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to? Should it point to the IP addresses of the cluster nodes themselves? Can I get clarification on this?

Thanks, Thorsten, for your continued help...
The output of clrs status abc_lg:
=== Cluster Resources ===
Resource Name   Node Name   State     Status Message
abc_lg          node1       Offline   Offline
                node2       Offline   Offline
The status is offline...
The output of clresourcegroup status:
=== Cluster Resource Groups ===
Group Name   Node Name   Suspended   Status
abc_rg       node1       No          Unmanaged
             node2       No          Unmanaged
You say that the resource should be enabled after creating it. I am using GDS and am just following the steps in the Developer's Guide to achieve high availability.
In my failover resource group I have:
1) a logical hostname resource, and
2) an application resource.
When I bring the failover resource group online, what should the status of the resource group and of the resources inside it be?
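For reference, a group shown as Unmanaged is usually brought under RGM control and onlined in one step. A sketch using the names from the output above (check the flags against the clresourcegroup(1CL) man page on your release):

```shell
# Bring the unmanaged group online: -M puts it in the managed state,
# -e enables its resources, -m enables their monitors.
clrg online -emM abc_rg

# Expected result: the group is Online on one node and Offline on the
# other, and both the logical hostname and the GDS application
# resource report Online/Online on the hosting node.
clrg status abc_rg
clrs status -g abc_rg
```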

Similar Messages

  • Problem in creating logical hostname resource

    Hi all,
    I have a cluster configured on 10.112.10.206 and 10.112.10.208.
    I have a resource group testrg.
    I want to create a logical hostname resource testhost.
    I have put the IP 10.112.10.245 in the /etc/hosts file for testhost.
    I am creating the logical hostname resource with the command below:
    clrslh create -g testrg testhost
    I am doing this on 206.
    As soon as I do, the other node (208) becomes unreachable: I am no longer able to ping 208, although ssh from 206 to 208 still works.
    I am also not able to ping 10.112.10.245.
    Please help.

    So, the physical IP addresses of your two nodes are:
    10.112.10.206 node1
    10.112.10.208 node2
    And your logical host is:
    10.112.10.245 testhost
    Have you got a netmask set for this network? Is it 255.255.255.0, and is it set in /etc/netmasks?
    Different netmasks on the interfaces are the most likely cause of a problem like this.
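The netmask check above could look like this on each node (the /24 mask is an assumption based on the addresses quoted):

```shell
# Does /etc/netmasks carry an entry for the 10.112.10.0 network?
grep '^10.112.10' /etc/netmasks    # expect: 10.112.10.0  255.255.255.0

# Compare the mask actually plumbed on the public interfaces;
# run this on both nodes and make sure the masks match.
ifconfig -a | grep netmask
```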
    Tim
    ---

  • Failed to create resource - Error in Sun cluster 3.2

    Hi All,
    I have a 2-node cluster in place. When I try to create a resource, I get the following error.
    Can anybody tell me why? I have Sun Cluster 3.2 on Solaris 10.
    I have created zpool called testpool.
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
    clrs: sun011:test011z - : no error
    clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
    clrs: (C891200) Failed to create resource "hasp-testpool-res".
    Regards
    Kumar

    Thorsten,
    testpool was created on one of the cluster nodes, and the underlying storage is accessible from both nodes. However, once the pool is imported on one node, it cannot be accessed from the other; for the other node to access it, we have to export the pool and import it on that node.
    The storage LUNs allocated to testpool are accessible from all nodes in the cluster, and I am able to import and export testpool from all of them.
    Regards
    Kumar
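One thing worth checking (an assumption, not confirmed in this thread): SUNW.HAStoragePlus wants to control the zpool import itself, and validation can fail if the pool is already imported somewhere when the resource is created. A sketch of the usual sequence:

```shell
# Export the pool from whichever node currently has it imported,
# so the cluster framework can take over the import/export handling.
zpool export testpool

# Now create the resource and bring the group online; HAStoragePlus
# imports the pool on whichever node hosts test-rg.
clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
clrg online -emM test-rg
```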

  • Creating Logical Hostname Resource - Resource contains invalid hostnames

    I am desperately trying to create a shared IP address that my two-node zone cluster will use for a failover application. I have added the hostname/IP address pair to /etc/hosts and /etc/inet/ipnodes on both global nodes as well as within each zone cluster node. I then attempt to run the following:
    # clrslh create -Z test -g test-rg -h foo.bar.com test-hostname-rs
    which yields the following:
    clrslh: host1.example.com:test - The hostname foo.bar.com is not authorized to be used in this zone cluster test.
    clrslh: host1.example.com:test - Resource contains invalid hostnames.
    clrslh: (C189917) VALIDATE on resource test-hostname-rs, resource group test-rg, exited with non-zero exit status.
    clrslh: (C720144) Validation of resource test-hostname-rs in resource group test-rg on node host1 failed.
    clrslh: (C891200) Failed to create resource "test:test-hostname-rs".
    I have searched high and low. The only thing I found was the following:
    http://docs.sun.com/app/docs/doc/820-4681/m6069?a=view
    It states: "Use the clzonecluster(1M) command to configure the hostnames to be used for this zone cluster and then rerun this command to create the resource."
    I do not understand what it is saying. My guess is that I need to authorize the hostname for the zone cluster, but I don't know how to accomplish this. Help?

    The procedure to authorize the hostnames for the zone cluster is below:
    clzc configure <zonecluster>   (this brings you into the zone cluster scope, as below)
    clzc:<zonecluster>> add net
    clzc:<zonecluster>:net> set address=<hostname>
    clzc:<zonecluster>:net> end
    clzc:<zonecluster>> commit
    clzc:<zonecluster>> info   (to verify the hostname)
    After this operation, run the clrslh command to create the logical host resource
    and the command should pass.
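Filled in with the names from the original question (zone cluster "test", hostname foo.bar.com), the session would look roughly like this:

```shell
# Interactive clzc session (prompts shown as in the procedure above):
#   clzc configure test
#   clzc:test> add net
#   clzc:test:net> set address=foo.bar.com
#   clzc:test:net> end
#   clzc:test> commit
#   clzc:test> exit

# With the hostname authorized, the original command should validate:
clrslh create -Z test -g test-rg -h foo.bar.com test-hostname-rs
```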
    Thanks,
    Prasanna Kunisetty

  • Creating logical host on zone cluster causing SEG fault

    As noted in previous questions, I've got a two-node cluster. I am now creating zone clusters on these nodes. Two problems seem to be showing up.
    I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
    I am now trying to configure the resource groups and resources on additional zone clusters.
    In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
    I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
    This error appears to be happening on the other node, ie: not the one that I'm building from.
    Anyone seen anything like this, or have any thoughts on where I should go with it?
    Thanks.

    Hi,
    "In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running."
    Look at the stack from your core dump and see whether it matches this bug:
    6763940 clzc dumped core after zones were installed
    As far as I know, the above bug is harmless and no functionality should be impacted. It is already fixed in a later release.
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11" The above message is not enough to figure out what's wrong. Please look at the below:
    1) Check /var/adm/messages on the nodes, look at the messages printed around the same time as the one above, and see whether they give more clues.
    2) Also see whether there is a core dump associated with the message; that might provide more information.
    If you need more help, please provide the output of the above.
    Thanks,
    Prasanna Kunisetty

  • How to create Logical domain in Sun Fire T2000 server

    Hi,
    I am new to Logical Domains. Can someone provide me with a doc on installing a guest OS in a logical domain? I have a Sun T2000 server with Solaris 10.
    Regards
    Alex

    Hi Alex,
    this is a good place to start reading (even though it refers to LDOMs 1.0):
    http://www.sun.com/blueprints/0207/820-0832.pdf
    followed by here for the docs set for whichever version you're using - have fun!
    http://docs.sun.com/app/docs/prod/ldoms

  • "didadm: unable to determine hostname" error on Sun Cluster 4.0 - Solaris 11

    Trying to install Sun Cluster 4.0 on Solaris 11 (x86-64).
    The iSCSI shared quorum disks are available in /dev/rdsk/. I ran:
    devfsadm
    cldevice populate
    But I don't see DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued, I get the following error. Has anyone seen this error?
    - didadm: unable to determine hostname.
    I found that in Cluster 3.2 there was Bug 6380956: "didadm should exit with error message if it cannot determine the hostname".
    The Sun Cluster command didadm (didadm -l in particular) requires the hostname to function correctly; it uses the standard C library function gethostname() for this.
    Early in the cluster boot, before the service svc:/system/identity:node comes online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with shared quorum disk DID.

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform or is it some guest VM? (That might contribute to the problems).
    Next, after you installed Solaris 11, did the system boot cleanly and all the services come up? (svcs -x). If it did boot cleanly, what did 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above host name commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
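Tim's checks condense to a short sequence that can be run on each node:

```shell
svcs -x                       # any services failed or in maintenance?
uname -n                      # node name as the kernel reports it
getent hosts "$(uname -n)"    # does that name resolve via hosts/name service?
```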
    Regards,
    Tim
    ---

  • Sun cluster: virtual IP address

    Hi,
    What is the virtual IP address and how to configure it?
    For example, should it be defined in /etc/hosts? dns?
    Thank you,

    [[[Is this correct to have Apache HA?]]]
    Apache can be set up as a failover resource (so it is active only on one node at a time) or a scalable resource (where it would be active on multiple nodes at the same time).
    [[[Just an aside question: HAStoragePlus is NFS sharing? What is difference between NFS resource and mount resource (I saw Veritas differentiate between them)? In case I set up a shared disk, is it NFS or mount resource?]]]
    HAStoragePlus is not NFS sharing. HAStoragePlus lets you create HA storage (it is called HAStoragePlus because there was an earlier-generation data service (aka clustering agent, a-la VCS) called HAStorage). It lets you wrap a shared storage device and fail it back and forth between nodes of the cluster.
    NFS sharing has to be handled using the SUNW.nfs Data service (or in other words, the NFS clustering agent) (ie only if you want to set up NFS as a HA service). Otherwise, you can use standard NFS.
    A mount resource is (I'm guessing here) any resource that can be mounted; in other words, a filesystem.
    An NFS resource is a resource that is shared out via NFS.
    [[[Also, a basic question: The shared disk should not be mounted in /etc/vfstab. Correct? It should be only present when doing format on each node. Right? It is SCS that manages the mounting of the file system? This should be up before testing apache HA…no?]]]
    That is correct. Sun Cluster will handle mounting/unmounting the filesystem and importing/deporting the disk set (in Veritas world it is called a Disk group).
    When you build your cluster resource group (aka VCS Service group), you will have to build the dependency tree (just how you would in VCS).
    1) Create empty RG
    2) Create HAStoragePlus Resource
    3) Create Logical Hostname resource
    4) Create Apache resource
    5) define dependency of Logical hostname (Virtual IP) and HAStoragePlus (filesystem) so that apache can start.
    At each stage, you can test whether the RG is working as it should before proceeding to the next level.
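The five steps above might look like the following. The group/resource names, mount point, and port are illustrative, and the SUNW.apache properties should be checked against its man page:

```shell
clrg create apache-rg                                   # 1) empty RG

clrs create -g apache-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/web hasp-rs        # 2) HA storage

clrslh create -g apache-rg -h web-lh web-lh-rs          # 3) logical host

clrs create -g apache-rg -t SUNW.apache \
    -p Bin_dir=/usr/apache2/bin -p Port_list=80/tcp \
    -p Resource_dependencies=hasp-rs,web-lh-rs \
    apache-rs                                           # 4+5) app + deps

clrg online -emM apache-rg                              # bring it all up
```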

  • Sun Cluster 3.2  without share storage. (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What did I do:
    Created remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default, everything works fine. But when I need to switch the RG to the second node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I put the remote mirror into logging mode, everything works:
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC); rather, it is something that has been developed in the open-source community. As such, it will not be documented in the main Sun Cluster documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch

    Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch
    Hi,
    Currently the customer has a 3-node cluster connected to the SE3510 via the Sun StorEdge[TM] Network Fibre Channel Switch (SAN Box Manager), running Sun Cluster 3.x with disksets. The customer wants to decommission the system but still access the SE3510 data on the new system.
    Initially, I removed one of the HBA cards from one of the cluster nodes and inserted it into the new system; it is able to detect the 2 LUNs from the SE3510 but not able to mount the file system. After some checking, I decided to follow the steps from SunSolve Info ID 85842, as shown below:
    1.Turn off all resources groups
    2.Turn off all device groups
    3.Disable all configured resources
    4.remove all resources
    5.remove all resources groups
    6.metaset -s <setname> -C purge
    7.Boot to non-cluster mode: boot -sx
    8.Remove all the reservations from the shared disks
    9.Shut down all the systems
    Now I am not able to see the two LUNs from the new system with the format command. cfgadm -al shows:
    Ap_Id   Type        Receptacle   Occupant     Condition
    c4      fc-fabric   connected    configured   unknown
    1.Is it possible to get back the data and mount it accordingly?
    2.Does any configuration need to be done on the SE3510 or the SAN Manager?

    First, you will probably need to change the LUN masking on the SE3510 and probably the zoning on the switches to make the LUN available to another system. You'll have to check the manual for this as I don't have these commands committed to memory!
    Once you can see the LUNs on the new machine, you will need to re-create the metaset using the commands that you used to create it on the Sun Cluster. As long as the partitioning hasn't changed from the default, you should get your data back intact. I assume you have a backup if things go wrong?!
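On the new (non-clustered) host, the metaset rebuild Tim describes might look like this; the set and disk names are illustrative, and DID paths no longer apply outside the cluster:

```shell
# Create the set and add the local host as an owner.
metaset -s datads -a -h newhost

# Add the same underlying disks (c#t#d# names on the new box).
# With default partitioning preserved, the data should survive.
metaset -s datads -a c4t0d0 c4t1d0

# Take ownership of the set, then check the metadevices.
metaset -s datads -t
metastat -s datads
```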
    Tim
    ---

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    It may sound strange, but I need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG, so I guess I am not understanding the question.
    Is there an application agent which demands a network resource in the RG? Sometimes the VALIDATE method of such agents refuses to work if there is no network resource in the RG.
    If so, tell us a bit more about the application. Is it GDS-based and generated by the Sun Cluster Agent Builder? The Agent Builder has a "non network aware" option; if you select that while building your app, it ought to work without a network resource in the RG.
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING you to create a LogicalHostname?
    HTH,
    -ashu

  • SC 3.2 - logical hostname create problem

    Hello
    I am running Sun Cluster 3.2 in a two-node cluster with IPMP. We are using IBM storage and metasets (Solaris Volume Manager) in the cluster.
    While trying to create a Sun Cluster logical hostname resource, I get the error below. I was able to successfully create the data resource, but the logical hostname resource gives this error.
    Command executed: /usr/cluster/bin/clreslogicalhostname create -g test-rg -p Resource_project_name=default -p R_description=Failover\ network\ resource\ for\ SUNW.LogicalHostname:3 -N group1@1:node1,group1@2:node22 -h test-rs test-rs
    Error message:
    clreslogicalhostname: <test-rs> cannot be mapped to an IP address
    Please advise.

    Hi,
    The value you specified with the -h option is the logical hostname that should be mapped to an IP address; it is not the resource name. The address mapping is possible if you have an entry for this hostname either in the /etc/hosts file or in the name service you are using. Make sure that you have an entry in the /etc/hosts file for "test-rs" and retry the create operation.
    BTW, you need not specify the -h option if your hostname is the same as the resource name and the resource name is resolvable.
    From man page of clrslh command:
    -h lhost[,…]
    --logicalhost lhost[,…]
    Specifies the list of logical hostnames that this resource represents. You must use the -h option either when more than one logical hostname is to be associated with the new logical hostname resource or when the logical hostname does not have the same name as the resource itself. All logical hostnames in the list must be on the same subnet. If you do not specify the -h option, the resource represents a single logical hostname whose name is the name of the resource itself.
    You can use -h instead of setting the HostnameList property with -p. However, you cannot use -h and explicitly set HostnameList in the same command.
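Both variants from the explanation, sketched with an illustrative address:

```shell
# Variant 1: make the hostname resolvable first (on every node),
# then pass it with -h. The address and names below are made up.
echo '10.112.10.50  test-lh' >> /etc/hosts
clreslogicalhostname create -g test-rg -h test-lh test-lh-rs

# Variant 2: name the resource after the hostname and drop -h;
# the resource name itself must then resolve.
clreslogicalhostname create -g test-rg test-lh
```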
    Thanks,
    Prasanna Kunisetty

  • Logical Hostname fail to create

    Hi,
    I have installed Sun Cluster 3.2 on Solaris 10 10/08 with kernel patch 138888-01.
    The cluster is a test cluster running on one node.
    I have created 2 local zones with exclusive IP.
    1. clrg create -n HOST:zone1,HOST:zone2 test-rg
    2. I added test-ip to /etc/hosts in the 2 zones.
    3. When I try to run the clrslh command to create the logical hostname, I get the following errors
    (I am using the GUI for the creation of the logical hostname):
    clrslh: (C189917) VALIDATE on resource OP-rs, resource group test-rg, exited with non-zero exit status
    clrslh: (C720144) Validation of resource IP-rs in resource group test-rg on node HOST:ZONE failed
    clrslh: (C891200) Failed to create resource IP-rs
    Can any one help to resolve this problem?
    Thanks
    Yacov

    Hi
    I have some information regarding this problem.
    The /var/adm/messages* content regarding the problem
    May 28 09:26:23 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <test-ip>, resource group <test-rg>, node <za-dr-it-sp1:uat-mozambique2>, timeout <300> seconds
    May 28 09:26:23 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 896918 daemon.notice] 10 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<uat-mozambique2.test-rg.test-ip.2>: Calling security_clnt_connect(..., host=<za-dr-it-sp1>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    May 28 09:26:24 za-dr-it-sp1 Cluster.RGM.global.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <test-ip>, resource group <test-rg>, time used: 0% of timeout <300, seconds>
    When creating the logical hostname:
    Command executed: /usr/cluster/bin/clreslogicalhostname create -g test-rg -p Resource_project_name= -p Failover_mode=NONE -p R_description=Failover\ network\ resource\ for\ SUNW.LogicalHostname:3 -h test-ip test-ip
    Error message:
    clreslogicalhostname: (C189917) VALIDATE on resource test-ip, resource group test-rg, exited with non-zero exit status.
    clreslogicalhostname: (C720144) Validation of resource test-ip in resource group test-rg on node za-dr-it-sp1:uat-mozambique2 failed.
    clreslogicalhostname: (C891200) Failed to create resource "test-ip".

  • One IP Address to create a IPMP and logical hostname?

    Hi,
    Is it possible to have network failover in Sun Cluster with one IP address? (The requirement is a two-node cluster, each system having one NIC port; the IPMP group would have only one IP address, which would also be used as the logical address.)
    Thanks

    Hmmmmm?!?
    I doubt it. In your cluster setup, one of the nodes would always have no(!) IP address. Actually, before you would configure your logical IP address, none of your cluster nodes would have a valid IP address, as the logicalhostname resource would want to plumb the address.
    Re-reading your post, the answer is no. This setup will have all sorts of problems.
    Regards
    Hartmut

  • Wrong hostname setting after Sun Cluster failover

    Hi Gurus,
    our PI system has been set up to fail over in a Sun Cluster with a virtual hostname s280m (primary host s280, secondary host s281).
    The Basis team set up the system profiles to use the virtual hostname, and I did all the steps in SAP Note 1052984 "Process Integration 7.1 High Availability" (my PI is 7.11).
    Now I believe I have substituted "s280m" in every spot where "s280" previously existed, but when I start the system on the DR box (s281), the Java stack throws errors when starting. Both the SCS01 and DVEBMGS00 work directories contain a file called dev_sldregs with the following error:
    Mon Apr 04 11:55:22 2011 Parsing XML document.
    Mon Apr 04 11:55:22 2011 Supplier Name: BCControlInstance
    Mon Apr 04 11:55:22 2011 Supplier Version: 1.0
    Mon Apr 04 11:55:22 2011 Supplier Vendor:
    Mon Apr 04 11:55:22 2011 CIM Model Version: 1.5.29
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 HTTP trace: false
    Mon Apr 04 11:55:22 2011 Data trace: false
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 *** Start SLD Registration ***
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 HTTP open timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP send timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP response timeout = 420 sec
    Mon Apr 04 11:55:22 2011 Used URL: http://s280:50000/sld/ds
    Mon Apr 04 11:55:22 2011 HTTP open status: false - NI RC=0
    Mon Apr 04 11:55:22 2011 Failed to open HTTP connection!
    Mon Apr 04 11:55:22 2011 ****************************
    Mon Apr 04 11:55:22 2011 *** End SLD Registration ***
    Mon Apr 04 11:55:22 2011 ****************************
    Notice it is using the wrong hostname (s280 instead of s280m). Where did I forget to change the hostname? Any ideas?
    thanks in advance,
    Peter

    Please note that the PI system is transparent about the failover mechanism used.
    When you configure the parameters per the mentioned note, the load will be sent to another system under the same Web Dispatcher/load balancer in case one of the nodes is down.
    The Solaris failover solution, in contrast, covers the whole environment, including the Web Dispatcher, the database, and all nodes.
    Therefore, please check the configuration as per the page below, which talks specifically about the Solaris failover solution for SAP usage:
    http://wikis.sun.com/display/SunCluster/InstallingandConfiguringSunClusterHAfor+SAP
