OSB 10.2.0.2 Implementation on AIX 5.2 with HACMP - SSL Trust Issues??

Hello All
I think I'm on a bit of a long shot with this one unfortunately, but I am trying to implement an OSB solution on a production HACMP cluster. The configuration would look as follows:
OSB Admin & Media Host : Windows 2003 x86 (Host: FPTXOSB01)
OSB Clients : Server 'pserver1' is node 1 in an HACMP cluster, public IP address 192.168.14.6
: Server 'pserver2' is node 2 in the same HACMP cluster, public IP address 192.168.14.10
: Server 'ptest1' is a stand-alone AIX 5.2 host
OSB Version : 10.2.0.2.0
I have implemented the solution on the stand-alone host 'ptest1' without any problems, and performed a full database RMAN backup on this test server at the first time of asking. The problem I am running into is with adding the HACMP clients to the OSB admin domain.
HACMP is configured (rightly or wrongly, I do not know as yet) with boot, public and cluster service addresses. For example, server 'pserver1' returns 'pserver1' if you enter the 'hostname' command at the AIX command prompt, and the 'uname -a' command also returns 'pserver1' as the machine host name. However, the folder '/usr/local/oracle/backup/bin' contains a link to a binary called 'hostinfo', and this is called by the installob routine during the installation phase. When I run this command manually, it returns the HACMP host boot address 'pserver1_boot'. The /etc/hosts file looks like this on one of the nodes:
# Internet Address Hostname # Comments
# 192.9.200.1 net0sample # ethernet name/address
# 128.100.0.1 token0sample # token ring name/address
# 10.2.0.2 x25sample # x.25 name/address
127.0.0.1 loopback localhost
10.10.10.86 pserver1_boot1 pserver1
10.10.10.87 pserver2_boot1 pserver2
10.11.10.86 pserver1_boot2
10.11.10.87 pserver2_boot2
10.12.10.86 pserver1_hb
10.12.10.87 pserver2_hb
192.168.14.5 pserver_svc
192.168.14.6 pserver1_pers
192.168.14.10 pserver2_pers
As you can see, the main host name is tagged on the same line as the boot1 IP address. Unfortunately, the 10.10.10.xx range is private and dedicated to the HACMP cluster configuration. So the situation is: all of the clients on the network access the cluster via the 'pserver_svc' virtual IP, which is fine, and the Oracle databases listen on that VIP too, no problems. For telnet/SSH access to the hosts we log on via the '?_pers' (persistent) addresses, again no problem. However, each of the two hosts resolves its own host name to the '?_boot1' entry, so on 'pserver1', pinging 'pserver1' resolves to the 10.10.10.86 IP. All good, except that the OSB admin server is going to come in on the 192.168.14 public network.
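For reference, the same resolution behaviour can be checked outside of hostinfo; here is a quick Java sketch (the class name is mine, and 192.168.14.6 is pserver1's persistent address from the hosts file above) that prints what the node thinks its own name maps to, and what the public address maps back to:
import java.net.InetAddress;

// Minimal resolution check: what does this node think its own name maps to,
// and what name does the public (persistent) address map back to?
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        InetAddress self = InetAddress.getLocalHost();
        System.out.println("local hostname : " + self.getHostName());
        System.out.println("resolves to    : " + self.getHostAddress());

        // 192.168.14.6 is pserver1's persistent address from /etc/hosts above
        InetAddress pers = InetAddress.getByName("192.168.14.6");
        System.out.println("192.168.14.6 reverse-resolves to: " + pers.getCanonicalHostName());
    }
}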
When adding the host using either the 'mkhost' command or the web tool, the host creation just sits there and eventually times out. If I change the '/etc/hosts' file so that 'pserver1' sits as an entry on a line of its own, configured with the correct persistent address of 192.168.14.6, and then try adding the host in OSB, the host adds okay. However, if I then try to ping the host using OSB, it returns the following:
ob> pingh pserver1
Error: can't connect to NDMP server on pserver1 (address 192.168.14.6) - timeout waiting for connection status message
pserver1 (address 192.168.14.6): Oracle Secure Backup services are available
Additionally, we have to switch the '/etc/hosts' configuration back, because the HACMP cluster services expect that configuration and the cluster will fail over if it performs a host state check.
With this in mind, we've introduced cabling to another unused NIC port on each of the two hosts, and put these NICs on the network as 192.168.13.110 and .111. I have retried adding the hosts using the machine's actual host name, the boot address (pserver1_boot1), and also a new alias for the new NICs, 'pserver1_en1'. In most of these cases, adding the host actually comes back with a success status. However, the OSB ping consistently fails.
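To separate plain reachability problems from SSL/trust problems, a simple probe of the two OSB ports from the admin side can help; the following is a rough sketch only, with the host name and timeout hard-coded as assumptions:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Probe the two OSB daemon ports to separate "TCP port unreachable"
// from "reached the port but the SSL/NDMP handshake failed".
public class PortProbe {
    public static void main(String[] args) {
        String host = "pserver1";          // or 192.168.14.6 / 192.168.13.110
        int[] ports = { 400, 10000 };      // observiced and NDMP listener ports
        for (int port : ports) {
            Socket s = new Socket();
            try {
                s.connect(new InetSocketAddress(host, port), 5000);
                System.out.println(host + ":" + port + " - TCP connect OK");
            } catch (IOException e) {
                System.out.println(host + ":" + port + " - " + e);
            } finally {
                try { s.close(); } catch (IOException ignore) { }
            }
        }
    }
}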
I believe that the mismatch in host names on each of the cluster hosts is causing the OSB trust relationships to break down, as the certificates will be created with the non-routable host/IP combination. The following is an extract of 'observiced.log' from 'pserver2' following the host addition specifying the 192.168.13.xxx network:
2009/01/07.14:33:53 listening for requests on --
2009/01/07.14:33:53 en0 (10.10.10.87) port 400
2009/01/07.14:33:53 en2 (10.11.10.87) port 400
2009/01/07.14:33:53 en1 (192.168.13.111) port 400
2009/01/07.14:34:01 listening for NDMP connections on --
2009/01/07.14:34:01 en0 (10.10.10.87) port 10000
2009/01/07.14:34:01 en2 (10.11.10.87) port 10000
2009/01/07.14:34:01 en1 (192.168.13.111) port 10000
2009/01/07.14:38:54 failure to negotiate SSL connection with component obtool on fd 6 - SSL fatal alert during negotation (FSP Oracle network security functions)
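One way to test the certificate-mismatch theory directly is to open an SSL connection to port 400 and look at the subject of the certificate observiced presents. The sketch below assumes the daemon will at least send its certificate before aborting the handshake (it may still reject the session afterwards, since OSB normally expects its own identity certificates); the target address and class name are just for illustration:
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// Connect to observiced on port 400 and print the subject/issuer of the
// certificate it presents, to see which host name it was generated for.
public class PeerCertCheck {
    public static void main(String[] args) throws Exception {
        TrustManager[] trustAll = new TrustManager[] { new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] c, String a) {}
            public void checkServerTrusted(X509Certificate[] c, String a) {}
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        }};
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, null);   // accept any server cert, we only want to read it
        SSLSocketFactory f = ctx.getSocketFactory();
        SSLSocket s = (SSLSocket) f.createSocket("192.168.13.111", 400);
        s.startHandshake();
        X509Certificate cert = (X509Certificate) s.getSession().getPeerCertificates()[0];
        System.out.println("subject: " + cert.getSubjectDN());
        System.out.println("issuer : " + cert.getIssuerDN());
        s.close();
    }
}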
I am clearly looking for help from anyone else who has had the unfortunate experience of implementing OSB in an HACMP environment. The people who work with HACMP tell me that the configuration is perfectly normal. To me, it's odd that a machine called one thing should return a different, non-routable value when it looks itself up.
If anyone can suggest anything that might help, I'd be glad to hear it: additional tracing, manually creating SSL certificates to work around the host name, disabling SSL, anything that might allow two-way communication on ports 400 and 10000 using the OSB tools.
Any help here would be much appreciated.
Regards
Simon

I already have.
Thanks,

Similar Messages

  • RAC on AIX 6.1 with HACMP

    Hi
    Can I have the installation procedure and documents to install 10gR2 RAC on AIX 6.1 using HACMP (for cluster control) instead of Oracle Clusterware? Is that possible? Has anyone tried it before and can share the experience?
    Regards,
    Akthar

    Do you have any particular reason for using HACMP cluster control instead of CRS?
    As you are going to install 10gR2 RAC, there is no need to use HACMP, because 10g CRS can be installed without OS clusterware.
    However, CRS can be installed along with HACMP, but Oracle services should then be controlled by CRS only.
    Regards
    Rajesh

  • Oracle 11g upgrade in AIX 6.1 with HACMP

    Hi Friends,
    I have two Power servers running AIX 6.1 with Oracle 10g under HACMP, on which an SAP application is running.
    One is the standalone database and the other is the central instance.
    I have done the 11g upgrade successfully on my DEV and QAS servers, which are in a non-cluster environment.
    Now I want to do the same upgrade in PRD, which is in HACMP.
    Please let me know which areas I should concentrate on, especially for cluster environment servers.
    Thanks,
    Hari

    DB Filesystems
    Filesystem GB blocks Free %Used Iused %Iused Mounted on
    /dev/hd4 4.00 2.62 35% 15438 3% /
    /dev/hd2 8.00 5.03 38% 57744 5% /usr
    /dev/hd9var 4.00 2.85 29% 10914 2% /var
    /dev/hd3 4.00 3.50 13% 2575 1% /tmp
    /dev/fwdump 1.00 1.00 1% 13 1% /var/adm/ras/platform
    /dev/hd1 1.00 1.00 1% 6 1% /home
    /dev/hd11admin 0.25 0.25 1% 107 1% /admin
    /proc - - - - - /proc
    /dev/hd10opt 1.00 0.58 43% 9040 7% /opt
    /dev/livedump 0.25 0.25 1% 7 1% /var/adm/ras/livedump
    /dev/lv_oracle 2.00 1.86 8% 21 1% /oracle
    /dev/lv_ora_pip 2.00 2.00 1% 80 1% /oracle/PIP
    /dev/lv_usr_sap 2.00 1.92 5% 78 1% /usr/sap
    /dev/lv_sapmnt 2.00 0.62 70% 978 1% /sapmnt
    /dev/dumplv 95.00 32.80 66% 26790 1% /dump
    /dev/saparchlv 2.00 1.99 1% 57 1% /home/pipadm
    /dev/lv_pip_64 10.00 5.73 43% 18988 2% /oracle/PIP/102_64
    /dev/lv_mirlogA 1.00 0.61 40% 6 1% /oracle/PIP/mirrlogA
    /dev/lv_mirlogB 1.00 0.61 40% 6 1% /oracle/PIP/mirrlogB
    /dev/lv_oraarch 200.00 121.48 40% 433 1% /oracle/PIP/oraarch
    /dev/lv_oralogA 1.00 0.59 41% 8 1% /oracle/PIP/origlogA
    /dev/lv_oralogB 1.00 0.59 41% 8 1% /oracle/PIP/origlogB
    /dev/fslv01 2.00 1.97 2% 102 1% /oracle/PIP/saparch
    /dev/lv_sapbkp 5.00 5.00 1% 40 1% /oracle/PIP/sapbackup
    /dev/lv_sapchk 5.00 5.00 1% 80 1% /oracle/PIP/sapcheck
    /dev/lv_data1 200.00 86.26 57% 30 1% /oracle/PIP/sapdata1
    /dev/lv_data2 200.00 84.92 58% 26 1% /oracle/PIP/sapdata2
    /dev/lv_data3 200.00 84.92 58% 26 1% /oracle/PIP/sapdata3
    /dev/lv_data4 200.00 84.92 58% 26 1% /oracle/PIP/sapdata4
    /dev/lv_data5 200.00 84.92 58% 26 1% /oracle/PIP/sapdata5
    /dev/lv_data6 200.00 84.92 58% 26 1% /oracle/PIP/sapdata6
    /dev/lv_data7 200.00 84.92 58% 26 1% /oracle/PIP/sapdata7
    /dev/lv_data8 200.00 84.93 58% 26 1% /oracle/PIP/sapdata8
    /dev/lv_saporg 20.00 20.00 1% 7 1% /oracle/PIP/sapreorg
    /dev/saptrance 5.00 4.92 2% 588 1% /oracle/PIP/saptrace
    /dev/lv_inventry 2.00 1.99 1% 55 1% /oracle/oraInventory
    /dev/lv_102_64 10.00 5.05 50% 11044 1% /oracle/stage/102_64
    CI
    /dev/hd4 4.00 1.99 51% 14429 3% /
    /dev/hd2 8.00 5.01 38% 57680 5% /usr
    /dev/hd9var 4.00 3.38 16% 10936 2% /var
    /dev/hd3 4.00 3.82 5% 1362 1% /tmp
    /dev/fwdump 1.00 1.00 1% 18 1% /var/adm/ras/platform
    /dev/hd1 1.00 1.00 1% 55 1% /home
    /dev/hd11admin 0.25 0.25 1% 5 1% /admin
    /proc - - - - - /proc
    /dev/hd10opt 1.00 0.58 42% 9024 7% /opt
    /dev/livedump 0.25 0.25 1% 8 1% /var/adm/ras/livedump
    /dev/lv_oracle 2.00 2.00 1% 9 1% /oracle
    /dev/lv_ora_pip 2.00 2.00 1% 52 1% /oracle/PIP
    /dev/lv_usr_sap 10.00 10.00 1% 17 1% /usr/sap
    /dev/lv_client 2.00 1.86 8% 16 1% /oracle/client
    /dev/lv_smnt_pip 10.00 2.20 78% 114142 18% /sapmnt/PIP
    /dev/lv_sap_pip 10.00 8.10 19% 1577 1% /usr/sap/PIP
    /dev/lv_sap_cms 5.00 5.00 1% 8 1% /usr/sap/ccms
    root@pagedb:/ $ su - orapip
    pagedb:orapip 1> echo $ORACLE_HOME
    /oracle/PIP/102_64
    I have upgraded successfully in my DEV and QAS.
    So can I go with the same procedure that I used in the non-cluster environment?
    Thanks

  • OUI failed to Select Cluster node on AIX- 9i RAC with HACMP

    Dear
    It seems the HACMP cluster is working fine.
    rlogin and rcp are also working.
    But the Oracle installer fails to pop up the "Cluster Node Selection" window.
    [p650_cdr1][root]/> lssrc -a | grep -E "ES|svcs"
    clcomdES clcomdES 35888 active
    topsvcs topsvcs 34634 active
    grpsvcs grpsvcs 23258 active
    emsvcs emsvcs 26588 active
    emaixos emsvcs 25356 active
    clstrmgrES cluster 26090 active
    clsmuxpdES cluster 37556 active
    clinfoES cluster 26118 active
    grpglsm grpsvcs inoperative
    [p650_cdr1][root]/> lssrc -g cluster
    Subsystem Group PID Status
    clstrmgrES cluster 26090 active
    clsmuxpdES cluster 37556 active
    clinfoES cluster 26118 active
    [p650_cdr1][root]/>
    Would you please let me know what to do now?
    Oracle : 9.2.0.1
    $ oslevel -r
    5200-04
    $
    HACMP :5.2
    Regards
    Faruque

    The problem is resolved. A patch (IY73937) for the cluster was required.
    Thanks and regards
    Faruque

  • Cannot install CRS on AIX 5.3 ML09 with HACMP 5.4.1

    We have a two-node HACMP cluster, it's handling the networking and storage. (raw LV's) It's a fairly simple setup. I followed the instructions in document 404474.1 "Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4" to ensure it's compatible.
    Full OS and software Versions:
    AIX 5.3 ML9 (53-09_SP1_5300-09-01-0847)
    HACMP 5.4.1.4
    Oracle 10gR2 (10.2.0.1 from the install media, DBA will be updating to .3 or .4 after install)
    We've run the cluvfy script. It does complain about some missing patches; however, I've verified that none apply to our system, i.e. some of the patches apply to a previous AIX ML, etc. Cluvfy does, however, detect the presence of the cluster and successfully collects the networking info from both nodes.
    So, the cluster is running, we run rootpre.sh successfully on both nodes. It properly detects the presence of the cluster and successfully makes several changes to the system.
    Next, we fire up the installer. (runInstaller) We provide it with the details for the various directories and what-not, everything is successful until we get to the "Specify Cluster Configuration" screen. It's blank. For some reason, it's not talking to the HACMP and can't acquire any of the cluster configuration data. There are no errors in any logs. We ran the installer through truss at the request of Oracle support, but it didn't seem to show us anything useful.
    Has anyone experienced this before? Is there something we need to do in order for the installer to communicate with HACMP?
    Thanks!
    Adam

    Dear Kushan,
    Few hints from my side
    1) For ECC 6.0 EHP4 -> First, ECC 6.0 will be installed on the NW 7.01 stack. Later you will need to use the EHPI tool to upgrade to EHP4.
    2) For SCM 7.0 plan the resource allocation for Livecache and Optimizer server. Ensure that they have ample resources for operation.
    Regards,
    Deepak Kori

  • How to implement a Copy or Create with Reference scenario

    For business objects, you might want to implement a Copy or Create with Reference scenario. The following procedure describes the UI configuration that you need if you want to place a copy button (in our example on an OWL) that starts a quick activity floorplan (on the same BO = Copy, or on a different BO = Create with Reference). Prerequisite in the target BO: the target BO requires a BO element SourceBOID and a Copy action that reads the source BO by SourceBOID via query and copies the elements from the source to the target BO.
    The UI configuration in the target floor plan is:
    1. Open the QAF floor plan of the target BO (target floor plan).
    2. In the DataModel view of the target floor plan, select the Root entry and select Add Data Field from the context menu. Rename the created data element to OBN_OriginBOID.
    3. Choose the Controller tab, select INPORTS and choose ADD INPORT from the context menu. A new in-port is created. Rename the in-port, for example to Copy.
    4. In the in-port maintenance form, activate the check box OBN INPORT.
    5. Select the namespace of your solution and the target business object.
    6. In the input field SELECT OPERATION enter Copy. A new select operation is created. The combination of business object name (including namespace), business object node and operation identifies the in-port and therefore the related floor plan as navigation target.
    7. Select the port type package /SAP_BYD_UI/SystemPortTypes.PTP.uicomponent.
    8. In the PARAMETERS section of the form, click the ADD button. Maintain the binding of the created parameter to /Root/OBN_OriginBOID. Based on this configuration, the system will transfer the parameter of the in-port to the element in the data model when the OBN is executed.
    9. In the Properties view, select the drop-down list box of the property EVENTS • ONFIRE. Scroll down and select the entry … NEW EVENT HANDLER …. The system starts the maintenance window for event handlers. Rename the event handler to CopyIn.
    10. In the OPERATIONS table of the maintenance window for event handlers, select type: BUSINESS OBJECT OPERATION. In the form below the table select the value CREATE for the input field BUSINESS OBJECT OPERATION TYPE. This operation will create a BO instance in the backend when the OBN is executed.
    11. In the OPERATIONS table, create a new operation of type: DATAOPERATION.  In the configuration of the data operation, select the operation type ASSIGN, source expression /Root/OBN_OriginBOID and target expression /Root/<BO>/OriginBOID.
    12. Create a third operation of type: BUSINESS OBJECT ACTION. Select the Copy action of the target business object and click the BIND button. Note: This action enforces another roundtrip to the backend. The Copy action must be implemented so that it will read the origin BO and copy selected data from the origin to the newly created object.
    13. Test the changes in the preview. If no error message is issued, save and activate the floor plan.
    The following procedure describes the configuration in the source floor plan (e.g. an OWL floor plan):
    1. Open the Source BO OWL floor plan (source floor plan).
    2. In the Designer view, place the cursor on the toolbar area and select ADD • APPLICATION-SPECIFIC BUTTON • MY BUTTON from the context menu. Rename the new button to Copy.
    3. In the Properties view, select the drop-down list box of the property MENU INFORMATION • NAVIGATION. The system launches the maintenance window for OBN configuration.
    4. Select the in-port of the target floor plan by selecting the target business object (with namespace and name), the target business object node, and the target operation Copy.
    5. Choose the navigation style NEWWINDOW.
    6. Close the OBN configuration maintenance window by clicking the OK button. The system creates the OBN configuration, an out-port that is used by the OBN configuration, and an event handler that uses the out-port and that is assigned to the button (see Properties view, EVENTS • ONFIRE).
    7. Go to the Controller view and rename the OBN configuration to Copy, the new out port to Copy and the new event handler to CopyOut.
    8. Check that the event handler CopyOut fires the out-port Copy.
    9. In the Parameters section of the out-port maintenance form, click the ADD button. Maintain the binding of the created parameter as /Root/<BO>/<BO>ID. Based on this configuration, the system will transfer the identifier of the selected source BO to the out-port data structure when the OBN is executed.
    10. In the Operations table, select type: FIREOUTPORT. In the form below the table select the out-port CopyOut.
    11. Test the changes in the preview. If no error message is issued, save and activate the floor plan.

    Hi Dries-
    There are no pre-packaged solutions with BADIs since they are, by definition, custom development.  If that's the path you need to go down then consider the following high level alternatives:
    Incorporate custom code into the BPC Write Back BADI. You can restrict the execution of the BADI using filters on the BADI definition, so that the BADI execution only occurs when a Data Manager package is called, and only for some defined combination of applications/appsets. Utilize the standard copy/move functions delivered in Data Manager. When the BADI is called, interrogate each record being processed (table CT_ARRAY) and determine whether the record has a value you want to process (i.e. save to the target application). Skip any record that has a zero value (the sketch after this reply illustrates the idea).
    Another alternative is to develop the BADI as custom logic. Data Manager parameters can be picked up in Script Logic and the values can be sent to the BADI by adding parameters. Please see an example of parameter use in the "How To" document for Destination App at:
    [EPM How To Guides|https://wiki.sdn.sap.com/wiki/display/BPX/Enterprise%20Performance%20Management%20%28EPM%29%20How-to%20Guides]  > "How-to Destination App"
    Regards,
    Sheldon
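    Purely to illustrate the skip-zero-records filtering described above (the real BADI works in ABAP on CT_ARRAY; the record type and field names here are made up):
    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: keep records with a non-zero signed value, drop the rest,
    // mirroring the "skip any record that has a zero value" rule in the write-back logic.
    public class WriteBackFilter {
        static class Record {
            final String account;
            final BigDecimal signedData;
            Record(String account, BigDecimal signedData) {
                this.account = account;
                this.signedData = signedData;
            }
        }

        static List<Record> keepNonZero(List<Record> input) {
            List<Record> out = new ArrayList<Record>();
            for (Record r : input) {
                if (r.signedData.compareTo(BigDecimal.ZERO) != 0) {
                    out.add(r);
                }
            }
            return out;
        }
    }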

  • Implementing SQL Server Reporting Services with a Java EE Application

    Hi All,
    I need to find some tutorial on
    "Implementing SQL Server Reporting Services with a Java EE Application"
    for my j2EE application.
    So far I have searched a lot of sites but have not found anything related to this topic.
    I am using Apache Axis along with SQL Server Report Manager for creating the SQL Server reports.
    What I have done so far:
    Created the web service with the help of Report Manager.
    Now I want to connect to it from my J2EE application and retrieve some data from that web service.
    I have got stuck on the bold lines in the code below.
    code:
    public CatalogItem[] getData(String res, String searchStr) throws RemoteException {
        ReportingService2005Soap port = null;
        ReportingService2005Locator loc = new ReportingService2005Locator();
        // Retrieve a port from the service locator
        try {
            port = loc.getReportingService2005Soap(new java.net.URL(res));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (ServiceException e) {
            e.printStackTrace();
        }
        // Pass the Windows credentials to the underlying Axis stub
        org.apache.axis.client.Stub stub = (org.apache.axis.client.Stub) port;
        stub.setUsername("localhost\\Administrator");
        stub.setPassword("servWIN@2007");
        // Build the single search condition (match on the item name)
        SearchCondition condition = new SearchCondition();
        condition.setCondition(ConditionEnum.Contains);
        condition.setName("Name");
        if (searchStr != null)
            condition.setValue(searchStr);
        else
            condition.setValue("");
        // Create an array of SearchConditions which will contain our single search condition
        SearchCondition[] conditions = new SearchCondition[1];
        conditions[0] = condition;
        // Call the Web service and capture the returned items
        CatalogItem[] returnedItems = port.findItems("foldername", BooleanOperatorEnum.Or, conditions);
        return returnedItems;
    }
    While executing the findItems() method I got the following exception:
    System.Web.Services.Protocols.SoapException: The item '/reportingservices' cannot be found. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ItemNotFoundException: The item '/reportingservices' cannot be found.
    at Microsoft.ReportingServices.Library.RSService.FindItems(String folder, String operation, SearchCondition[] properties)
    at Microsoft.ReportingServices.WebServer.ReportingService2005Impl.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions, CatalogItem[]& Items)
    If anybody has any idea, please help me.
    I need it urgently.
    Regards.

    The above example is from an MSDN virtual lab that teaches how to connect to Reporting Services from a J2EE application.
    I have done the same thing, but I am not able to render the report programmatically. If anybody knows how, please let me know the solution.

  • How  to Implement a Chained Hash Table with Linked Lists

    I'm migrating from C/C++ to Java, and my task is to implement a chained hash table with linked lists. My problem is putting the strings (in this case names), hashed into the table using the division method (H(k) = k mod N), into a linked list that handles the collisions. My table has an interface implemented (public boolean insert(), public boolean findItem(), public void remove()). Any help is appreciated. Thanks to everyone in advance.

    OK. you have your hash table. What you want it to do is keep key/value pairs in linked lists, so that when there is a collision, you add the key/value pair to the linked list rather than searching for free space on the table.
    This means that whenever you add an item, you hash it out, check to see if there is already a linked list and if not, create one and put it in that slot. Then in either case you add the key/value pair to the linked list if the key is not already on the linked list. If it is there you have to decide whether your requirements are to give an error or allow duplicate keys or just keep or modify one or the other (old/new).
    When you are searching for a key, you hash it out once again and check to see if there is a linked list at that slot. If there is one, look for the key and if it's there, return it and if not, or if there was no linked list at that slot, return an error.
    You aren't clear on whether you can simply use the provided linked-list implementations in the Java Collections or whether you have to cobble the linked list yourself. In any case, it's up to you.
    Is this what you're asking?
    Doug
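    For what it's worth, a minimal sketch of the chaining approach Doug describes, using java.util.LinkedList and the division method; the method names follow the interface mentioned in the question, but the exact signatures and the string-only keys are assumptions:
    import java.util.LinkedList;

    // Minimal chained hash table using the division method H(k) = k mod N.
    public class ChainedHashTable {
        private final LinkedList<String>[] buckets;
        private final int capacity;

        @SuppressWarnings("unchecked")
        public ChainedHashTable(int capacity) {
            this.capacity = capacity;
            this.buckets = new LinkedList[capacity];
            for (int i = 0; i < capacity; i++) {
                buckets[i] = new LinkedList<String>();
            }
        }

        // Division method: non-negative hash code mod table size.
        private int index(String key) {
            return (key.hashCode() & 0x7fffffff) % capacity;
        }

        // Returns false if the key is already present (no duplicates kept).
        public boolean insert(String key) {
            LinkedList<String> chain = buckets[index(key)];
            if (chain.contains(key)) {
                return false;
            }
            chain.add(key);
            return true;
        }

        public boolean findItem(String key) {
            return buckets[index(key)].contains(key);
        }

        public void remove(String key) {
            buckets[index(key)].remove(key);
        }
    }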

  • MCTS 70-466 Implementing Data Models and Reports with Microsoft SQL Server 2012

    I am searching for a training kit for Exam 70-466 (Implementing Data Models and Reports with Microsoft SQL Server 2012), but I think it is not published yet. I was expecting its release in Jan or Feb 2014. Can anyone tell me its release date or any place where I can find this book?
    Thanks
     

    Hi Azhar lqbal Gondal,
    According to your description, since the issue regards training and certification, I suggest you post the question in the Learning forums at http://social.technet.microsoft.com/Forums/en-US/home?category=learning. That is the appropriate place and more experts will assist you. If you have a specific technical question about Microsoft SQL Server, you can visit and post your question on the SQL Server Forum.
    There is some detail about Exam 70-466 (Implementing Data Models and Reports with Microsoft SQL Server 2012); you can review the following articles.
    Exam content can be found here:
    http://www.microsoft.com/learning/en-us/exam-70-466.aspx
    http://borntolearn.mslearn.net/certification/database/w/wiki/525.466-implementing-data-models-and-reports-with-microsoft-sql-server-2012.aspx#fbid=Mn-t6aRhs-H
    Regards,
    Sofiya Li
    TechNet Community Support

  • How to implement a client side map with ObjectImage control?

    We need to implement a client-side map with an ADF Faces ObjectImage control. In the code below, the JSF GraphicImage control does support a client-side image map using the usemap property. However, it appears that the ADF Faces ObjectImage control does not support a client-side map. Is there some way of implementing this functionality in an ObjectImage control?
    <h:graphicImage url="/images/map-usa.gif"
    usemap="#m_mapusa"
    binding="#{backing_map.graphicImage2}"
    id="graphicImage2"
    style="border-style:none;"/>
    <af:objectImage source="/images/map-usa.gif"
    binding="#{backing_map.objectImage2}"
    id="objectImage2" />
    We could use the GraphicImage control, except that mixing a JSF GraphicImage control in the same table with a variety of ADF Faces controls causes a problem: when a user clicks on the GraphicImage, the browser window scrolls down to center the GraphicImage control, and the user then needs to scroll back up to see the rest of the page. If an ObjectImage control is used with an onClick action, the page does not scroll, which is what we want. So if we can figure out how to add a client-side map to an ObjectImage control, we would get the desired results.
    An alternative might be to use a server-side map with the ObjectImage control. But our question here is how to implement the existing client-side image map in a backing bean. As the following map code shows, not all image map areas are rectangles; some are polygons.
    <area id="_state_05" href="#"
    shape="rect"
    coords="681,38,702,50"
    target="_self" value="VT" alt="Vermont"
    onclick="javascript:getDtl(this);"/>
    <area id="_state_06" href="#"
    shape="poly"
    coords="221,442,209,436,209,418,191,403,155,382,116,367,101,370,98,364,
    122,355,158,367,203,388,212,394,242,427"
    target="_self" value="HI" alt="Hawaii"
    onclick="javascript:getDtl(this);"/>

    Hi,
    Any news about that issue? We are also interested in any solution.
    Thanks
    Math

  • Error initializing LiveCycle ES3 on AIX, Websphere 7 with DB2 9.5

    Hello,
    I'm having a problem during configuration of LiveCycle ES3 on AIX 6.1 with WebSphere 7.0.0.25 and DB2 9.5 on the same machine.
    I'm getting the following error:
    com.adobe.livecycle.lcm.core.LCMException[ALC-LCM-000-000]: Failed on step 'Invoking component bootstrapper' for task 'Bootstrapping DocumentServiceContainer'
    ALC-TTN-103-000: Bootstrapping request failed on server.  Message from server:
    ALC-TTN-011-031: Bootstrapping failed for platform component [DocumentServiceContainer].  The wrapped exception's message
    reads:
    See nested exception; nested exception is: com.adobe.pof.schema.ObjectTypeNotFoundException: Object Type: dsc.proxy_permissions not
    found.
    Check application server logs for details.Bootstrapping Error: ALC-TTN-103-0000
            at com.adobe.livecycle.lcm.headless.HeadlessLCMImpl.bootstrapPlatform(HeadlessLCMImpl.java:400)
            at com.adobe.livecycle.lcm.headless.HeadlessLCMImpl.bootstrapPlatform(HeadlessLCMImpl.java:282)
            at com.adobe.livecycle.lcm.cli.InitializeLiveCycleCLI.executeCommandLineImpl(InitializeLiveCycleCLI.java:81)
            at com.adobe.livecycle.lcm.cli.LCMCLI.execute(LCMCLI.java:299)
            at com.adobe.livecycle.lcm.cli.LCMCLI.main(LCMCLI.java:344)
    Caused by: com.adobe.livecycle.bootstrap.BootstrapException: ALC-TTN-103-000: Bootstrapping request failed on server.  Message from server:
    ALC-TTN-011-031: Bootstrapping failed for platform component [DocumentServiceContainer].  The wrapped exception's message
    reads:
    See nested exception; nested exception is: com.adobe.pof.schema.ObjectTypeNotFoundException: Object Type: dsc.proxy_permissions not
    found.
    Check application server logs for details.
            at com.adobe.livecycle.bootstrap.client.BootstrapRequestClient.analyzeResponse(BootstrapRequestClient.java:152)
            at com.adobe.livecycle.bootstrap.client.BootstrapRequestClient.bootstrap(BootstrapRequestClient.java:63)
            at com.adobe.livecycle.bootstrap.client.BootstrapManager.executeStep(BootstrapManager.java:203)
            at com.adobe.livecycle.lcm.headless.HeadlessLCMImpl.bootstrapPlatform(HeadlessLCMImpl.java:393)
            ... 4 more
    I already searched within this forum, but I did not find a topic where the solution was helpful for me.
    So I wonder if you have any idea how to solve my problem.
    The server type on the WAS is Cluster.
    Thanks and best regards,
    Coopar8502

    > MOS-01109  Needed space on mountpoint /usr is 100000 KB, but got only 41736 KB.
    > /dev/hd2        1984.00     40.76   98%    42811    63% /usr
    The system says it needs about 100 MB (100000 KB) free on /usr, but only about 41 MB is available.
    Markus

  • Need help: how to implement sun niagara encryption chip with weblogic

    Hi all, I have an issue with SSL performance on a T2000 server, which is why I am trying to use the encryption chip provided by Sun's T1 technology, but I couldn't find an implementation document for this technology with WebLogic Server.
    I'm using WebLogic 9.2 MP3 with Sun Solaris 10. I would really appreciate it if anybody could help me with this problem.
    Thanks,
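    Not a WebLogic-specific answer, but one way to confirm from Java that the T1 crypto units are reachable at all is to load the SunPKCS11 provider against the Solaris Cryptographic Framework and list what it offers. A rough sketch follows; the config file name and its contents are assumptions (on Solaris 10 the framework library is usually /usr/lib/libpkcs11.so), and wiring the provider into WebLogic's SSL stack is a separate step:
    import java.security.Provider;
    import java.security.Security;

    // Load the SunPKCS11 provider against the Solaris PKCS#11 library and list the
    // cipher/signature algorithms it exposes; JCE/JSSE work routed through this
    // provider can then be offloaded to the hardware.
    public class Pkcs11Check {
        public static void main(String[] args) throws Exception {
            // pkcs11.cfg is a hypothetical config file containing, for example:
            //   name = Solaris
            //   library = /usr/lib/libpkcs11.so
            Provider p = new sun.security.pkcs11.SunPKCS11("pkcs11.cfg");
            Security.insertProviderAt(p, 1);
            for (Provider.Service s : p.getServices()) {
                if ("Cipher".equals(s.getType()) || "Signature".equals(s.getType())) {
                    System.out.println(s.getType() + " " + s.getAlgorithm());
                }
            }
        }
    }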

    Hello,
    How are you obtaining the EntityManager and EntityManagerFactory? If it is injection, you should verify that the second transaction scope gets its own EM/EMF injected rather than being passed the one from the first one.
    Best Regards,
    Chris

  • Installation Of Solman 7.1 aix Oracle failed with Java Error

    Dear All,
    We are doing a SOLMAN 7.1 SR1 installation on AIX with Oracle.
    Now the installation has stopped at the 19th phase, "Import ABAP", as shown in the attached screenshot.
    OS : AIX 6100-07-02-1150
    Database:  Oracle 11.2.0.3
    Java Version currently maintained:  java version "1.4.2"
    When I checked the log file import_monitor.java.log (/tmp/sapinst_instdir/SOLMAN71/SYSTEM/ORA/CENTRAL/AS), I saw:
    [root@solmantrg: /tmp/sapinst_instdir/SOLMAN71/SYSTEM/ORA/CENTRAL/AS] cat import_monitor.java.log
    java version "1.6.0_45"
    Java(TM) SE Runtime Environment (build 6.1.051)
    SAP Java Server VM (build 6.1.051 23.5-b02, May 30 2013 05:04:21 - 61_REL - optU - aix ppc64 - 6 - bas2:197575 (mixed mode))
    Required system resources are missing or not available:
      Import directory '/cddumps/Solman_Export_Cds/51042607_1/DATA_UNITS/EXP1' does not exist;
      Import directory '/cddumps/Solman_Export_Cds/51042607_1/DATA_UNITS/EXP2' does not exist;
      Import directory '/cddumps/Solman_Export_Cds/51042607_2/DATA_UNITS/EXP3' does not exist;
      Import directory '/cddumps/Solman_Export_Cds/51042607_2/DATA_UNITS/EXP4' does not exist.
    [root@solmantrg: /tmp/sapinst_instdir/SOLMAN71/SYSTEM/ORA/CENTRAL/AS]
    I also checked all the above import directories [
    '/cddumps/Solman_Export_Cds/51042607_1/DATA_UNITS/EXP1'
    '/cddumps/Solman_Export_Cds/51042607_1/DATA_UNITS/EXP2'
    '/cddumps/Solman_Export_Cds/51042607_2/DATA_UNITS/EXP3'
    '/cddumps/Solman_Export_Cds/51042607_2/DATA_UNITS/EXP4'
    ] and all the above import directories are present in my system.
    Then, from the above log, I compared the Java version of our system:
    [root@solmantrg: /tmp/sapinst_instdir/SOLMAN71/SYSTEM/ORA/CENTRAL/AS] java -version
    java version "1.4.2"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2)
    Classic VM (build 1.4.2, J2RE 1.4.2 IBM AIX 5L for PowerPC (64 bit JVM) build caix64142ifx-20110630 (SR13 FP10) (JIT enabled: jitc))
    [root@solmantrg: /tmp/sapinst_instdir/SOLMAN71/SYSTEM/ORA/CENTRAL/AS] which java
    /usr/java14_64/jre/bin/java
    Kindly clarify the queries below.
    1) Is the above Solman installation error due to the Java version? If so, do we need to install Java version "1.6.0_45" on the Solman server to proceed with the installation?
    2) Can't we proceed with the Solman installation using the current Java version (1.4.2) of our system?
    3) What are the actual prerequisites to be maintained for Java? I have downloaded the installation guide from SMP,
    but the exact Java version required to proceed with the Solman 7.1 SR1 installation is not mentioned anywhere in the guide.
    4) If we install Java 1.6, do we need to start the Solman installation from scratch? Is any reboot required after installing Java 1.6?
    5) Is the "Import directory does not exist" error shown because of the inconsistent Java versions on our system?
    6) Please tell us the cause of the error.
    Kindly assist so we can proceed with the installation.
    Regards,
    Gayathri.K

    Hello Gayathri
    The import_monitor.java.log says the JRE used is 1.6 - java version "1.6.0_45"
    There must be multiple JREs installed on the system.
    Run the below command and check the output
    ls -lad /usr/java*
    A few things I would try:
    Is '/cddumps/Solman_Export_Cds/51042607_1/DATA_UNITS/EXP1' an NFS mount? If yes, use a local file system.
    Create a directory called /SMINSTALL and move everything from /cddumps/Solman_Export_Cds into it.
    It should look like /SMINSTALL/51042607_1/DATA_UNITS/EXP1
    Start the installation.
    Try to use Java 1.4 or 1.5 and see if that helps.
    You need to set the environment variable JAVA_HOME before starting the installation.
    export JAVA_HOME=/usr/java14_64/jre
    Are there any files/directories under 51042607_*/DATA_UNITS/EXP* ?
    Regards
    RB

  • How to implement Oracle user/role security with Access front end?

    Hi,
    We have successfully migrated our Access database tables to Oracle 10g using SQL developer. We've recreated all the users and roles(i.e., access groups) in Oracle and granted rights to tables.
    In the Access front end database, in the Database window we have saved linked Oracle tables which replaced the Access tables. The forms, reports, queries run fine with the linked Oracle tables. All the linked table use one ODBC DSN to the Oracle database with the same Oracle user id.
    We need to be able to authenticate users into the Oracle database and re-link the tables based on their own unique user id. By doing so we can allow users to use the standard Oracle user id/role and system privileges to control select, update, etc. rights to the database.
    I've been able to use VB code within Access to log on to the database with a unique id, but I have not been able to find out how to re-link the tables to the unique user id using VB. There should be some way to relink tables dynamically, based on the user's login to the Access front end.
    I don't know a great deal about Access projects, but I do know that SQL Server allows you to log into your Access project and link tables dynamically.
    Can someone give me some assistance or point me in the right direction?
    Thanks in advance,
    Larry

    We had one of our programmers here come up with a VB code solution for re-linking tables within Access. However, the relinking takes 3-4 minutes for 100+ tables.
    In an effort to help you understand the situation better, I will attempt to elaborate on the problem:
    We have an Access 2003 application which currently has a front end using Access(forms, reports, queries, & VB code) and a MS Access 2003 backend.
    We have migrated the backend tables to Oracle. However, we still have a need to maintain the front end in Access, since we have over 60 forms, 40 reports and 200+ queries in Access. It's easy to understand: we have a significant investment in the front end (obviously, the plan is to migrate the front end too at some future date).
    In order to utilize the existing front end, we have to validate and modify the current front-end connections to the new Oracle backend. One of the features of Access is that you can "link" tables and save the link for runtime. Each Access table can have its own link, which is a separate ODBC/JET connection. As such, each separate link has its own userid/database information.
    The other issue with using the Access front end is that Access utilizes a workgroup file to implement user and group security. The workgroup file contains all the users and which groups the users belong to in Access. Then, within Access, you allow users access to objects (tables, queries, etc.) by their userid and/or group. When users open an Access database with Access security enabled, they are required to log into Access. The login is authenticated by the workgroup file. Once logged into Access, users have rights to Access objects based on the rights granted to their userid and the groups they belong to. The problem here is that when you remove the linked Access tables and replace them with linked Oracle tables, Access has no knowledge of the Oracle table rights granted to users; nor would you expect it to.
    The dilemma is the disconnect between Access security and the fact that Oracle utilizes a similar but much more sophisticated security model. It creates users and roles (which are similar to Access groups), and again this is independent of Access security.
    Our solution was to still use the Access workgroup file security along with the Oracle security model. By using the Access userid and then creating a similar Oracle userid with similar table rights to those granted in Access, you can apply security within Access and also within the Oracle database.
    For example, a user BOB logs into Access via the workgroup file; using VB code, Access then establishes an Oracle connection, logging into Oracle with the same unique userid BOB.
    After connecting and validating user BOB in Oracle, the Access tables are relinked to Oracle using BOB's userid and table rights.
    This Oracle userid has been granted table rights specific for this userid.This allows the user BOB to use the Access application and still be authenticated into the Oracle database.
    The problem with this solution is that the relinking of the saved Access tables takes 3-7 minutes for about 100+ tables. This is not acceptable for users each time they log into the application.
    Our current alternative is to use one Oracle userid to log in each user, and use Access form restrictions/security to allow or prevent users from updating/viewing data. Obviously, this is not the optimal solution with respect to security, but it at least allows us to control access to the data (via the forms) with one logon required for each user and a quick startup time for the application.
    I understand SQL server does a better job in integration, but we use Oracle which is what I am trying to work with.
    Larry

  • HELP!!! Listener problem on AIX 4.1 with oracle 7.2.2.0

    When I try to do "lsnrctl start" (or stop) the server gives me the error below:
    Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=ora7))
    TNS-12224: TNS:no listener
    TNS-12541: TNS:no listener
    TNS-12560: TNS:protocol adapter error
    TNS-00511: No listener
    IBM/AIX RISC System/6000 Error: 2: No such file or directory
    Connecting to (ADDRESS=(COMMUNITY=TCP.world)(PROTOCOL=TCP)(Host=BILANCIO)(Port=1521))
    TNS-12545: Connect failed because target host or object does not exist
    TNS-12560: TNS:protocol adapter error
    TNS-00515: Connect failed because target host or object does not exist
    IBM/AIX RISC System/6000 Error: 79: Connection refused
    Connecting to (ADDRESS=(COMMUNITY=TCP.world)(PROTOCOL=TCP)(Host=BILANCIO)(Port=1526))
    TNS-12545: Connect failed because target host or object does not exist
    TNS-12560: TNS:protocol adapter error
    TNS-00515: Connect failed because target host or object does not exist
    IBM/AIX RISC System/6000 Error: 79: Connection refused
    On Metalink, the solutions are client side, but I have this problem on the server side.
    I tried changing the host name to the IP address, but the problem persists.
    I tried rebooting the server (because I thought the problem was the Oracle instance), but the db works correctly.
    What can I do?
    Thank you so much!
    pedro

    Hello Pedro,
    Please can you tell me if you resolved this issue you were having, and how you resolved it? I am having the same issue now.
    We did a restore to a different box and now cannot connect to the database, and we are having the same issues you described.
    Your help will be appreciated,
    Regards
    Avishkar Bandu
