XI Implementation Architecture

Hi All,
For a project, we have multiple instances of SAP supporting different parts of the business.
I would like to get feedback on experiences of implementing the different XI architectures:
multi-tier
point-to-point
hub-spoke
bus
Any feedback would be appreciated.

Quick answers:
XI supports all of these architecture strategies:
multi-tier: Yes, this can be done, for instance via the portal or any other web-based application. XI connects the legacy applications to the front end, in this case the portal or whichever application is interested in the data.
p-t-p: native web services, plain HTTP, and SOAP over HTTP are all supported by XI (a minimal client sketch follows below).
hub-spoke: This can be implemented in XI by using the JMS adapter. As a matter of fact, XI is essentially a sophisticated MOM product.
bus: The whole XI concept is based on an ESB (Enterprise Service Bus) framework.
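For the point-to-point case, a sender can post an XML payload directly to the Integration Engine's plain HTTP adapter. The following is only a minimal Java sketch; the host, port, service, namespace, and interface values are placeholders, and the adapter_plain URL pattern and authentication details should be checked against your own XI configuration.
    // Illustrative only: push one XML message to XI over plain HTTP.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class XiPlainHttpClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical XI host and plain HTTP adapter parameters -- adjust to your landscape.
            String endpoint = "http://xihost:8000/sap/xi/adapter_plain"
                    + "?service=LegacySender"
                    + "&namespace=urn:example:orders"
                    + "&interface=OrderCreate_Out"
                    + "&QOS=EO";                      // Exactly Once

            String payload = "<OrderCreate><OrderId>4711</OrderId></OrderCreate>";

            HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
            // A real call would also need credentials (basic authentication is omitted here).

            try (OutputStream out = con.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("XI responded with HTTP " + con.getResponseCode());
        }
    }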
Cheers, Roberto

Similar Messages

  • EBS R12.1.2 Implementation - Architecture Doubt

    Hi guys,
    I have some questions about an architecture a client wants to implement.
    The customer has 12.1.2 on two nodes: one for the database (11.1.0.7) and the other for all the application services.
    They want to add another application node (with all the services) to receive the users/requests from another location. So basically they want two apps nodes (each with all the services), one for the New York users and one for the Los Angeles users, both working against the same DB node. They don't have the hardware to enable load balancing.
    Can this implementation work? Is it possible to have two web/forms nodes without load balancing?
    I know PCP will work; I don't know about the web/forms tier.
    Any answer would be helpful.
    Thanks!

    Hi,
    "Can this implementation work? Is it possible to have two web/forms nodes without load balancing?"
    Without load balancing, I believe you cannot restrict the users from region 1 to accessing the first node and the users from region 2 to accessing the second node (or vice versa). Moreover, having two forms/web nodes accessing the same database node while remaining independent of each other is not supported (the ICX_PARAMETERS table should have a single entry, either Node 1 or Node 2, for the application URL), so you need to have a load balancer implemented -- please log an SR to confirm this with Oracle Support.
    Note: 727171.1 - Implementing Load Balancing On Oracle E-Business Suite - Documentation For Specific Load Balancer Hardware
    Thanks,
    Hussein
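    As a quick sanity check of the single-entry constraint Hussein mentions, the row count of ICX_PARAMETERS can be read programmatically. This is only an illustrative JDBC sketch (the JDBC URL and APPS credentials are placeholders); a DBA would normally just run the same query from SQL*Plus.
      // Illustrative only: verify ICX_PARAMETERS holds exactly one row.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class IcxParametersCheck {
          public static void main(String[] args) throws Exception {
              // Placeholder connection details for the EBS database node.
              try (Connection con = DriverManager.getConnection(
                       "jdbc:oracle:thin:@dbnode:1521/EBSDB", "apps", "apps_password");
                   Statement st = con.createStatement();
                   ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM icx_parameters")) {
                  rs.next();
                  // EBS expects exactly one row here; two independent web entry points
                  // would each need this single application URL to point at themselves,
                  // which is why a shared load-balancer URL is required.
                  System.out.println("ICX_PARAMETERS rows: " + rs.getInt(1));
              }
          }
      }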

  • MII Implementation Architecture

    Hi,
    Due to service maintenance costs for each server deployed, I've been asked to try to limit the number of servers used for a multi-site MII roll-out. The MII system requirements are for operator/management production reports, with a view to system integration at a later stage. The current architecture options on the table are:
    1. Local - one instance per production site (Typical and my usual approach)
    2. Regional - one instance serving four or five Production Sites within close proximity (50 kms)
    3. Central - one instance serving all Production Sites Globally.
    While I've been able to convince the necessary parties that option 3 is not an option, I'm finding it difficult to build a convincing case for option 1 over option 2 (other than that this is the official/preferred way - money talks, I'm afraid).
    My immediate reluctance for the regional approach is because:
    1. Increased communication overhead will impact on performance (esp. if interactive screen)
    2. Increased risk of communication failure to source production systems (located on each site).
    Point 1 is easy to test and measure, but Point 2 is what I'm having difficulty quantifying for this evaluation. This will be a 12.x installation, so Query Data Buffering will be available (Tag and SQL), but I haven't used it extensively within a production environment, so I'm not sure it's a recommended route to rely on. I'm also of the view that it's better to avoid the problem than to "fix" it. Also, while the buffering is great for an integration/transactional environment, it doesn't help much with an operator screen/report - from the perspective of the operator waiting for data.
    Does anyone have any experience/views on the Regional Approach, in particular my concerns on the communication failure, or am I being over paranoid?
    Thanks.

    Hi, Lawrence.  Here's my view, for what it's worth...
    Since you're paying a license for each site anyway, it isn't a "license-based" cost decision - it's largely a question of the cost of administering multiple MII instances/servers and related hardware.  In the 11.X era, this cost was reasonably low.  With 12.X, it has increased a bit with the more frequent need for NW patches and management (or so I've been told by a few customers who I trust greatly).
    A few key considerations are performance/responsiveness, availability, and overall application manageability.  As I recall, the networking infrastructure in S.A. can be a challenge in some remote locations, with limited bandwidth ISDN or DSL connections.  If there will be a lot of "trending" views by the users, mostly against data local to their site, you'll be wasting an enormous amount of network bandwidth (and response time) shipping data up to the regional or central server and then all the way back to the user.  Also, there is always the question of availability, and the likelihood of a local server on a local network being down versus a central/regional server with intermittent outages is important to consider.
    One of the "hidden features" of MII that offers a good compromise solution is the "Virtual Server" (a special type of connector, not something like VMware).  This approach allows you to have MII systems at each site handling communications to historians/databases, but also regional or central servers that can utilize these data connections remotely.  Customers have benchmarked performance and generally found that accessing a historian from a regional server, for example, is far more efficient and faster if you use a Virtual Server connection to the historian than versus connecting to it directly from the regional server.  The reason is often due to the binary protocol that MII uses being more efficient/lean than the vendors underlying protocols.  Of course, you may find different results, but it is something to consider.
    Similarly, you might want to consider application segmentation/partitioning, whereby you could create very ad-hoc "engineering" applications on the local MII server at each site, and do the more "corporate oriented" dashboards, reports, and ERP integration activities on the regional or central servers.  This way you can get the best of both worlds.

  • Web service and EJB enterprise in JBoss

    Is Java a compiled language?
    Actually, Java is a compiled/interpreted language. See the links below. This is the best classification for the Java language, in my opinion. Read [_this thread_|http://forums.sun.com/thread.jspa?threadID=5320643&start=0&tstart=0] and give your opinion, too! You are very welcome in this interesting discussion. The more I participate in this forum, the more I learn. The more you participate the more you learn, too! Thank you very much for this forum, Sun!
    [_CLDC HotSpot Implementation Architecture Guide Chapter 10_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/VFP.html]
    +The 1.1.3 release of CLDC HotSpot Implementation included limited VFP support. This feature was supported only when running in interpreted mode. In this release, full vector floating point support is provided when the virtual machine is running in compiled mode.+
    [_Java Virtual Machines_|http://java.sun.com/j2se/1.4.2/docs/guide/vm/index.html]
    +Adaptive compiler - Applications are launched using a standard interpreter, but the code is then analyzed as it runs to detect performance bottlenecks, or "hot spots". The Java HotSpot VMs compile those performance-critical portions of the code for a boost in performance, while avoiding unnecessary compilation of seldom-used code (most of the program). The Java HotSpot VMs also use the adaptive compiler to decide, on the fly, how best to optimize compiled code with techniques such as in-lining. The runtime analysis performed by the compiler allows it to eliminate guesswork in determining which optimizations will yield the largest performance benefit.+
    [_CLDC HotSpot Implementation Architecture Guide Chapter 4_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/DynamicCompiler.html]
    +Two different compilers are contained in the CLDC HotSpot Implementation virtual machine: an adaptive, just-in-time (JIT) compiler and an ahead-of-time compiler. The JIT compiler is an adaptive compiler, because it uses data gathered at runtime to decide which methods to compile. Only the methods that execute most frequently are compiled. The other methods are interpreted by the virtual machine.+
    [_Java Tuning White Paper_|http://java.sun.com/performance/reference/whitepapers/tuning.html]
    +One of the reasons that it's challenging to measure Java performance is that it changes over time. At startup, the JVM typically spends some time "warming up". Depending on the JVM implementation, it may spend some time in interpreted mode while it is profiled to find the 'hot' methods. When a method gets sufficiently hot, it may be compiled and optimized into native code.+
    [_Frequently Asked Questions About the Java HotSpot VM_|http://java.sun.com/docs/hotspot/HotSpotFAQ.html]
    +Remember how HotSpot works. It starts by running your program with an interpreter. When it discovers that some method is "hot" -- that is, executed a lot, either because it is called a lot or because it contains loops that loop a lot -- it sends that method off to be compiled. After that one of two things will happen, either the next time the method is called the compiled version will be invoked (instead of the interpreted version) or the currently long running loop will be replaced, while still running, with the compiled method. The latter is known as "on stack replacement", or OSR.+
    [_Java Technology Fundamentals Newsletter Index - Making Sense of the Java Classes & Tools: Collection Interfaces, What's New in the Java SE 6 Platform Beta 2, and More_|http://java.sun.com/mailers/newsletters/fundamentals/2006/July06.html]
    +Java: A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture neutral, portable, high-performance, multithreaded, dynamic language.+
    [_Introduction to scripting in Java, Part 1_|http://www.javaworld.com/javaworld/jw-07-2007/jw-07-awscripting1.html?page=2]
    +Many of today's interpreted languages are not interpreted purely. Rather, they use a hybrid compiler-interpreter approach, as shown in Figure 1.3.+
    +In this model, the source code is first compiled to some intermediate code (such as Java bytecode), which is then interpreted. This intermediate code is usually designed to be very compact (it has been compressed and optimized). Also, this language is not tied to any specific machine. It is designed for some kind of virtual machine, which could be implemented in software. Basically, the virtual machine represents some kind of processor, whereas this intermediate code (bytecode) could be seen as a machine language for this processor.+
    +This hybrid approach is a compromise between pure interpreted and compiled languages, due to the following characteristics:+
    Because the bytecode is optimized and compact, interpreting overhead is minimized compared with purely interpreted languages.
    The platform independence of interpreted languages is inherited from purely interpreted languages because the intermediate code could be executed on any host with a suitable virtual machine.
    Lately, just-in-time compiler technology has been introduced, which allows developers to compile bytecode to machine-specific code to gain performance similar to compiled languages. I mention this technology throughout the book, where applicable.
    [_Compiled versus interpreted languages_|http://publib.boulder.ibm.com/infocenter/zoslnctr/v1r7/index.jsp?topic=/com.ibm.zappldev.doc/zappldev_85.html]
    Assembler, COBOL, PL/I, C/C++ are all translated by running the source code through a compiler. This results in very efficient code that can be executed any number of times. The overhead for the translation is incurred just once, when the source is compiled; thereafter, it need only be loaded and executed.
    Interpreted languages, in contrast, must be parsed, interpreted, and executed each time the program is run, thereby greatly adding to the cost of running the program. For this reason, interpreted programs are usually less efficient than compiled programs.
    +Some programming languages, such as REXX and Java, can be either interpreted or compiled.+
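    To watch the hybrid model in action yourself, a tiny program with one hot method is enough. The sketch below is plain Java; the -XX:+PrintCompilation flag it mentions is a standard HotSpot option that logs methods as they are JIT-compiled, so you can see the interpreter hand the hot method over to the compiler.
      // Run with: java -XX:+PrintCompilation HotLoop
      // The interpreter runs sum() first; after enough calls HotSpot compiles it
      // to native code, and a line for HotLoop::sum appears in the compilation log.
      public class HotLoop {
          static long sum(int n) {
              long total = 0;
              for (int i = 0; i < n; i++) {
                  total += i;
              }
              return total;
          }

          public static void main(String[] args) {
              long result = 0;
              for (int i = 0; i < 20_000; i++) {   // call sum() often enough for it to become "hot"
                  result += sum(1_000);
              }
              System.out.println(result);
          }
      }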

  • I can't understand this

    Why do you have a ";" right after the condition?
    And why do you put "else" followed by another bare condition? Maybe you meant "else if"?
    Try this:
      if (a.length() == 1) {  // I always open and close { } to make my code more readable and avoid errors
        a = "0" + a;
      } else if (a.length() == 3) {
        a = a.substring(1, 3);
      }
    Hope this helps,
    Zerjio

  • ORA-12154 Connection error from HFM to Oracle Database

    Hi,
    I am trying to configure Hyperion HFM but cannot write to the HFM database.
    The implementation architecture:
    Hyperion 11.1.2.2 (with all the required patches for HFM, FDM, Shared Services, Workspace and Oracle Application Development)
    Server 1:
    Windows Server 2008 x64
    Installed products: Foundation (EPMA, CalcManager), BI, HFM web components and ADM driver
    Configured products: Foundation(EPMA, CalcManager), BI.
    Database Client: 11gR2 x64
    Server 2:
    Windows Server 2008 x64
    Installed products: HFM, FDQM
    Configured Products: FDQM, HFM
    Database Client: 11gR2 x32, 11gR2 x64 (x32 version installed first)
    Server 3:
    Database: Oracle 11.2.0.2
    All the products from server 1 are working fine, and FDQM (server 2) is also working fine, but when I try to do any action related to the HFM database the system fails.
    I have tested the connection in these scenarios:
    1. SQL Developer: successful! I can create tables, views, etc. Double-checking the user privileges shows it has all that are required.
    2. tnsping: successful!
    3. HFMApplicationCopy utility: successful, both using a UDL file and entering the connection parameters directly.
    4. EPM System Configurator: the configurator successfully validates the database connection information, but does not create the tables on the database. No errors in the configtool log.
    5. EPM Diagnostic Tool: fails with this error message:
    ------------STARTING VALIDATION SCRIPTS----------
    LOGGING IN HFM....
    CREATING APPLICATION....
    ERROR: Unable to CreateApplicationCAS
    Number (dec) : -2147215936
    Number (hex) : &H800415C0
    Description  : <?xml version="1.0"?>
    +<EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>1356</Line><Ver>11.1.2.2.300.3774</Ver><PSec><Param><server_name></Param></PSec></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>936</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>4096</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>+
    Source       : Hyperion.HFMErrorHandler.1
    ERROR: while Application created
    7. HFM Classic application creation: fails with the following error:
    Error*11*<user_name+>*10/19/2012 08:30:52*CHsxServer.cpp*Line 90*<?xml version="1.0"?>+
    +<EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>+
    8. EPMA Application deployment: fails with same message.
    Please help me with some insights on this problem, I have tried everything but nothing works.
    Regards

    Hi,
    I have solved one of my problems, the one that kept HFM from connecting to the Oracle database.
    I just changed the TNSNAMES.ORA, like this:
    Initial tnsnames.ora
      PRUEBA.WORLD =
        (DESCRIPTION_LIST =
          (DESCRIPTION =
            (LOAD_BALANCE = ON)
            (FAILOVER = ON)
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = <service_name>)
            )
          )
        )
    Modified tnsnames.ora
      PRUEBA.WORLD =
        (DESCRIPTION =
          (LOAD_BALANCE = ON)
          (FAILOVER = ON)
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
          )
          (CONNECT_DATA =
            (SERVICE_NAME = <service_name>)
          )
        )
    I just deleted the line "(DESCRIPTION_LIST =" and its corresponding closing parenthesis. I did this because in the configuration utility log I saw this line:
      TNS parsing: Entry: DESCRIPTION_LIST [[Address: Protocol:(TCP) Host:(<server_name>) Port:(1521) SID:(<service_name>)]]
    So the applications were trying to connect to the connection descriptor DESCRIPTION_LIST, and the driver could not recognize DESCRIPTION_LIST as a valid one.
    There is a lot going on behind the scenes when you work with Oracle Database as the repository, and maybe there is some other way to address this issue, but it worked for me; I hope it can help you too.
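    As a quick way to confirm the fix outside of the EPM tools, the same alias can be resolved from a small standalone test. The snippet below is only an illustrative Java/JDBC sketch: the TNS_ADMIN directory and credentials are placeholders, and it assumes a reasonably recent Oracle JDBC thin driver that honors the oracle.net.tns_admin property.
      // Illustrative only: resolve the same TNS alias that HFM uses.
      import java.sql.Connection;
      import java.sql.DriverManager;

      public class TnsAliasCheck {
          public static void main(String[] args) throws Exception {
              // Point the thin driver at the directory holding the fixed tnsnames.ora (placeholder path).
              System.setProperty("oracle.net.tns_admin",
                      "C:\\oracle\\product\\11.2.0\\client_1\\network\\admin");

              // Same alias as in tnsnames.ora; a successful connection confirms
              // the alias now resolves from this client.
              try (Connection con = DriverManager.getConnection(
                      "jdbc:oracle:thin:@PRUEBA.WORLD", "hfm_user", "hfm_password")) {
                  System.out.println("Connected: " + !con.isClosed());
              }
          }
      }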

  • My company is looking for help

    My group is looking for a Security Architect with strong knowledge of Identity manager. we are a very distinguished group in NYC and need someone to help us put it together. Can anyone help? Thanks

    Feel free to give me a call. We would be happy to speak to you.
    As a quick summary, our real differentiator is that we have a repeatable, scalable implementation architecture and installer that sits on top of your IDM solution. Not only can it reduce your coding time and risk greatly, it is also trainable and scalable. This way your future phases will require less work than your first phase (the way it should be), while still conforming and "plugging in" to a standard architecture. This, of course, maintains the consistency of the application as it matures.
    We have implemented this architecture at many, many clients nationwide, from higher education to retail to defense, and would be happy to discuss with you ways in which we may be able to partner and assist you.
    Feel free to contact me. I'd be happy to share with you more about it.
    Dana Reed
    [email protected]

  • Installation sequence using PS7 and PS7 with SRA

    Our implementation architecture has three network zones, i.e., Internet zone, SSN, and data zone. We have two Solaris machines. Each machine has one global and three local zones. Every machine zone has an iDM product installation. We did the installation in the following sequence:
    1. DS on M1Z1
    2. AM on M1Z2
    3. IM on M1Z3
    Everything works here as expected.
    We have received PS7 and PS7 with SRA support bits from Sun.
    Can you please let us know which bits to use to complete the following installation:
    4. PS on M2Z1
    5. SRAG on M2Z2
    6. rewrite on M2Z3
    Do you have any documentation on installing SRAG and the rewriter proxy on different machines, when those machines do not have a portal server installation?

    Portal server also requires DS and AM. So, I assume that you are going to point to the instances on M1 when you install portal.
    You will also need to install DS on the SRA machine. This directory server is only needed during installation; once things are up and running, you can disable it.
    Start by installing Portal and make sure you install the SRA Core package. You must select this when you install portal the first time because the installer will not let you add it later. This is a known bug.
    Then, install the SRA Gateway component following these directions:
    http://docs.sun.com/app/docs/doc/819-3027/6n59bv1ne#hic
    These instructions are not entirely correct. Specifically, install everything in "Configure Now" mode.
    I don't know about installing the Rewriter Proxy on a separate machine; I have not done this yet, although it is a supported configuration.
    HTH,
    Jim

  • Secure OSB10g with owsm 10g

    Hi,
    I have a customer who has some flows exposed as web services via proxy services on OSB 10g. He would like to implement authentication and authorisation; what is the best architecture to do this? He is thinking of using OWSM 10g but doesn't know what the best implementation architecture would be.
    He is also asking this question: is OWSM 10g compatible with OSB 10g or not?
    Thanks for your help.

    OSB 10g is compatible with OWSM (10.1.3.x and later, and 11.1.1). Please refer to the following links for more details:
    http://docs.oracle.com/cd/E13159_01/osb/docs10gr3/security/owsm.html
    http://docs.oracle.com/cd/E13159_01/osb/docs10gr3/interopmatrix/matrix.html (Refer to Platform Interoperability section)
    Hope this helps.
    Thanks,
    Patrick

  • Is java a compiled language - part 2

    yamm_v6.6_6 said
    Again, I think you are confusing the compilation process and the interpretation process with the meaning of "compiled language" and "interpreted language", mr. jschell. The compilation process and the interpretation process have definitions very well established in computer science, as you said, and I agree with you. There are no controversies in those definitions. On the other hand, the meaning of "compiled language" and "interpreted language" might vary, because in this case there are controversies.
    No.
    Compilers are one of the most well-studied and well-defined aspects of computer science, more so than any other part of computer science.
    And the Dragon book is now, and has been for at least 20 years, the recognized authoritative source for that part of computer science.
    And there is nothing in there that supports your conclusion, and nothing that supports your term definitions.
    A language in and of itself is never compiled nor interpreted. That book specifically defines the process under which a language must be processed to become either compiled and/or interpreted.
    If you wish to disagree with that, then cite something from that book, or first demonstrate that that book is no longer the authoritative source.
    Finally, note that in terms of computer science none of the Java documents (specifications or otherwise) are suitable authoritative sources. The Java specifications, although adequate, are far from precise in their adherence to definitions in computer science, or even in the definition of Java itself.

    Wow! I was just googling, and then look at what I found! This thread! Thanks, mr jschell, for your replies. I really appreciate them.
    jschell wrote:
    Finally, note that in terms of computer science none of the Java documents (specifications or otherwise) are suitable authoritative sources. The Java specifications, although adequate, are far from precise in their adherence to definitions in computer science, or even in the definition of Java itself.
    Well, you want authoritative sources, right? How about these:
    [_CLDC HotSpot Implementation Architecture Guide Chapter 10_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/VFP.html]
    +The 1.1.3 release of CLDC HotSpot Implementation included limited VFP support. This feature was supported only when running in interpreted mode. In this release, full vector floating point support is provided when the virtual machine is running in compiled mode.+
    [_Java Virtual Machines_|http://java.sun.com/j2se/1.4.2/docs/guide/vm/index.html]
    +Adaptive compiler - Applications are launched using a standard interpreter, but the code is then analyzed as it runs to detect performance bottlenecks, or "hot spots". The Java HotSpot VMs compile those performance-critical portions of the code for a boost in performance, while avoiding unnecessary compilation of seldom-used code (most of the program). The Java HotSpot VMs also use the adaptive compiler to decide, on the fly, how best to optimize compiled code with techniques such as in-lining. The runtime analysis performed by the compiler allows it to eliminate guesswork in determining which optimizations will yield the largest performance benefit.+
    [_CLDC HotSpot Implementation Architecture Guide Chapter 4_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/DynamicCompiler.html]
    +Two different compilers are contained in the CLDC HotSpot Implementation virtual machine: an adaptive, just-in-time (JIT) compiler and an ahead-of-time compiler. The JIT compiler is an adaptive compiler, because it uses data gathered at runtime to decide which methods to compile. Only the methods that execute most frequently are compiled. The other methods are interpreted by the virtual machine.+
    [_Java Tuning White Paper_|http://java.sun.com/performance/reference/whitepapers/tuning.html]
    +One of the reasons that it's challenging to measure Java performance is that it changes over time. At startup, the JVM typically spends some time "warming up". Depending on the JVM implementation, it may spend some time in interpreted mode while it is profiled to find the 'hot' methods. When a method gets sufficiently hot, it may be compiled and optimized into native code.+
    [_Frequently Asked Questions About the Java HotSpot VM_|http://java.sun.com/docs/hotspot/HotSpotFAQ.html]
    +Remember how HotSpot works. It starts by running your program with an interpreter. When it discovers that some method is "hot" -- that is, executed a lot, either because it is called a lot or because it contains loops that loop a lot -- it sends that method off to be compiled. After that one of two things will happen, either the next time the method is called the compiled version will be invoked (instead of the interpreted version) or the currently long running loop will be replaced, while still running, with the compiled method. The latter is known as "on stack replacement", or OSR.+
    [_Java Technology Fundamentals Newsletter Index - Making Sense of the Java Classes & Tools: Collection Interfaces, What's New in the Java SE 6 Platform Beta 2, and More_|http://java.sun.com/mailers/newsletters/fundamentals/2006/July06.html]
    +Java: A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture neutral, portable, high-performance, multithreaded, dynamic language.+

  • Can Windows 8 Taskbar contain program groups

    I suppose I am posing this question in part because I really want the Start button to be back in Windows 8, but I see a feature here that would be nice and would satisfy me enough that I would no longer care about a Start button. I've always liked the taskbar more than the Start button, but it is missing my favorite feature from the Start Menu. Or maybe it isn't, and that's what this question is about:
    How do I add an icon to the taskbar that will pull up a drop-down of all the Microsoft Office tools (as opposed to putting each one on the taskbar)?
    R, J

    RJ, We see two (2) important 'currents' in what you propose as User TASKBAR facilitations.
    implement a program (or workplace application) group; and,
    build a new toolbar.
    Clearly, we are underlining advanced [?] existing functionality for User. Are we looking for something more?
    Architecture says always but draws out another [sigh] critical flaw in the toolbar manipulation. Everyone's Taskbar jams everything 'hard left', or for Notification Area and for new User Taskbar toolbars, 'hard right' and then 'hard left'. We can casually
    put things wherever we want them on our desktops.
    Taskbar and Start Screen line up icons like a cheap cell phone gimmick! Why?
    As a web author, constantly teasing CSS3 phrasic bullets and JS escapes from rigid text alignment, one would speculate that perhaps Ronnie envisions a grand extension of the Taskbar-Start Screen to implement an architecture of the very same display implementation:
    that is, lose the Start Screen and somehow 'plonk' that new 'curiosity' called the Xbox Frame into a simple right-click Taskbar popup? Float the Start Screen with opacity 0.8 over the top half of the desktop with a draggable bottom border, or something like
    that. The Taskbar needs a toolset to implement/manage such integration. Yes?
    CURRENT IMPLEMENTATION: right click taskbar and select Properties. Fiddle and fuddle settings to position two or more User toolbars, without any breakable objects close at hand. Architecture with
    a digital bent.
    Windows 8 Service Pack 1, a.k.a. "Blue" [appropriately] is due sometime around July this year. Casual placement of Taskbar icons and toolbars? We shall see...
    Regards as always, Mark
    [ no offense, peoples: note colon's position here and comment occidentally to
    [email protected], if you dare ]
    Oh as I was young and easy in the mercy of his means,
    Time held me green and dying
    Though I sang in my chains like the sea.
    What is
    Riverleaf

  • Vignette Collaboration not yet ready

    Hello Experts,
    My problem is next:
    I successfully installed the ANCILE uPerform 4.50 server. I use a two-server implementation architecture.
    Yesterday I successfully opened the page when I entered:
    http://[CollabServerName]:[collaboration port number]/gm/workplace.
    This morning I turned on the servers (yesterday they were off), and when I entered:
    http://[CollabServerName]:[collaboration port number]/gm/workplace.
    it returned the message:
    “Vignette Collaboration not yet ready to receive command request”
    The services Common Apache Solr, Vignette Collaboration and Ancile uPerform monitoring service have status “Started”
    What could be the problem?
    Thank you!
    Sincerely, Vitaliy!

    Hi Vitaliy,
    You should check the collab.props configuration file to make sure the domain name is correct.
    Also try running the Configuration Wizard to ensure the host name is properly updated.
    Check that IIS and the website are bound to the correct server IP address.
    Hope this helps,
    Matthew

  • RDIMM memory in the S20

    I am looking to upgrade the RAM in my machine (4105-57U). From what I read everywhere, my options are limited to UDIMM ECC memory. However, this spec sheet states that RDIMM memory could be used as well:
    Memory
    Type DDR3 Unbuffered SDRAM
    ECC Supported
    Maximum Capability : 1600MHz, UDIMM : PC3-10600 (1333MHz), RDIMM : PC3-8500 (1066MHz)
    Max DIMM Sizes : 4GB
    Maximum system memory : 24GB
    I am wondering if I can use registered ECC memory in this machine?

    Maximum memory capacities can be tricky.  It depends on several things:
    -maximum DIMM density available at the time of launch
    -maximum chipset capability
    -architecture of the system (i.e. how many slots provided)
    When the S20 was originally launched, 8GB DIMMs were generally the highest capacity available, which leads to a 48GB max.  Since then, DIMM density has increased and larger DIMMs have become available.  Many times (unfortunately), the original launch documentation doesn't get updated when these changes happen over time, though.
    Also, sometimes people get mixed up and start stating the max memory support for the chipset, when the max support for the motherboard can be something completely different due to the implemented architecture.  For example, the Nehalem/Westmere class CPUs I believe can support up to 3 DIMMs per channel, but the S20 is designed with 2DPC support, so its overall max memory capacity would be lower than what the CPU could physically support.
    S20 was qualified with ECC UDIMM support only.  This support is really determined by the MRC, which is a section of code integrated into BIOS that is provided by Intel.  We generally don't manipulate this code in any way and tend to follow Intel guidelines for this sort of thing.  Intel's documentation has always been that RDIMM has not been validated in 1P Nehalem/Westmere (Thurley) platforms.
    If Intel's MRC doesn't block it, then it might indeed work.  I've never really checked.  However I'm not going to state in any way that it's supported or that you should try it out as it simply has never been tested.

  • Ask the Expert: Plan, Design, and Implement Mobile Remote Access, the Cisco Collaboration Edge Architecture

    Welcome to the Cisco® Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about planning, designing, and implementing mobile remote access (Cisco Collaboration Edge Architecture) with Cisco subject matter experts Aashish Jolly and Abhijit Anand.
    Cisco Collaboration Edge Architecture is an architecture that provides VPN-less access of Cisco Unified Communications resources to Cisco Jabber® users. This discussion is dedicated to addressing questions about design best practices while implementing mobile remote access.
    For more information, refer to the Unified Communications Mobile and Remote Access via Cisco VCS deployment guide. 
    Aashish Jolly is a network consulting engineer who is currently serving as the Cisco Unified Communications consultant for the ExxonMobil Global account. Earlier at Cisco, he was part of the Cisco Technical Assistance Center (TAC), where he helped Cisco partners with installation, configuring, and troubleshooting Cisco Unified Communications products such as Cisco Unified Communications Manager and Manager Express, Cisco Unity® solutions, Cisco Unified Border Element, voice gateways and gatekeepers, and more. He has been associated with Cisco Unified Communications for more than seven years. He holds a bachelor of technology degree as well as Cisco CCIE® Voice (#18500), CCNP® Voice, and CCNA® certifications and VMware VCP5 and Red Hat RHCE certifications.
    Abhijit Singh Anand is a network consulting engineer with the Cisco Advanced Services field delivery team in New Delhi. His current role involves designing, implementing, and optimizing large-scale collaboration solutions for enterprise and defense customers. He has also been an engineer at the Cisco TAC. Having worked on multiple technologies including wireless and LAN switching, he has been associated with Cisco Unified Communications technologies since 2006. He holds a master’s degree in computer applications and multiple certifications, including CCIE Voice (#19590), RHCE, and CWSP and CWNP.
    Remember to use the rating system to let Aashish and Abhijit know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation on the Cisco Support Community Collaboration, Voice and Video page, in the Jabber Clients subcommunity, shortly after the event. This event lasts through June 20, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Marcelo,
       Yes, there are some requirements for certificates in Expressway.
    Expressway Core (Exp-C)
    - Can be signed by either External or Internal CA
    - Better to use a cluster name even if you start with 1 peer in Exp-C cluster. In the future, if more peers are added, changes would be minimal.
    - Better to use FQDN of cluster as CN of certificate, this way the traversal zone configuration on Expressway-E won't require any change even if new peers are added to Exp-C cluster.
    - If CUCM is mixed mode, include security profile names (in FQDN format) as Subject Alternate Names
    - The Chat Node Aliases that are configured on the IM and Presence servers. They will be required only for Unified Communications XMPP federation deployments that intend to use both TLS and group chat. (Note that Unified Communications XMPP federation will be supported in a future Expressway release). The Expressway-C automatically includes the chat node aliases in the CSR, providing it has discovered a set of IM&P servers.
    - For TLS between CUCM, IM&P, and Exp-C:
      + If using self-signed certificates on CUCM and IM/P: load the Cisco Tomcat, cup, and cup-xmpp certificates from IM&P on Exp-C, and load the callmanager and Cisco Tomcat certificates from CUCM on Exp-C.
      + If using internal CA-signed certificates on CUCM and IM/P: load the root CA certificates on Exp-C.
      + Load the CA certificate under tomcat-trust, cup-trust, and cup-xmpp-trust on IM&P.
      + Load the CA certificate under tomcat-trust and callmanager-trust on CUCM.
    Expressway Edge (Exp-E)
    - Signed by External CA
    - Configured Unified Communications domain as Subject Alternate Name
    - If using a cluster, select FQDN of this peer as CN and FQDN of Cluster + this peer as Subject Alternate Name.
    - If XMPP federation is being deployed, enter the same Chat Node Aliases as entered in Exp-C.
    For more details, please refer to the Certificate Creation Guide for Cisco Expressway x8.1.1
    http://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/expressway/config_guide/X8-1/Cisco-Expressway-Certificate-Creation-and-Use-Deployment-Guide-X8-1.pdf
    - Aashish
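    If you want to double-check the Subject Alternative Name entries on a certificate before uploading it to Expressway, they can be read with plain JDK APIs. The snippet below is only an illustrative sketch; the file name is a placeholder for the PEM/DER certificate you plan to upload.
      // Illustrative only: print the subject and SAN entries of a certificate file.
      import java.io.FileInputStream;
      import java.security.cert.CertificateFactory;
      import java.security.cert.X509Certificate;

      public class ShowSans {
          public static void main(String[] args) throws Exception {
              // Placeholder path to the certificate intended for Expressway-C or -E.
              try (FileInputStream in = new FileInputStream("expressway-c.pem")) {
                  X509Certificate cert = (X509Certificate)
                      CertificateFactory.getInstance("X.509").generateCertificate(in);
                  System.out.println("Subject: " + cert.getSubjectX500Principal());
                  // Each entry is a [type, value] pair; type 2 = dNSName.
                  if (cert.getSubjectAlternativeNames() != null) {
                      cert.getSubjectAlternativeNames()
                          .forEach(san -> System.out.println("SAN: " + san));
                  }
              }
          }
      }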

  • How to implement mvc model in designing game architecture

    I have a problem implementing the MVC architecture in my game design. Could you suggest how I should do it?
    I have created 3 packages, viz: model, view, and controller.
    I am confused about which package should include the canvas, which class should implement Runnable, and what the model package must actually include.
    I would also like to know whether the dynamic, active background generated during gameplay must be treated as part of the model or not.

    Hi
    Here is a good article about this: http://www-128.ibm.com/developerworks/library/wi-arch6/?ca=drs-wi3704
    Mihai
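    The article covers the theory; as a concrete starting point, here is one common way to split the packages. It is only a minimal sketch with hypothetical class names, and it assumes a plain AWT canvas for illustration (a MIDP GameCanvas would follow the same split): the model holds pure game state (including the scrolling background position), the view owns the canvas and only draws, and the controller implements Runnable and drives the loop.
      // model/GameModel.java -- pure state, no drawing, no threading
      public class GameModel {
          private int playerX;
          private int backgroundOffset;     // the "dynamic background" is just state here

          public void update() {            // advance the simulation by one tick
              backgroundOffset = (backgroundOffset + 1) % 800;
              playerX += 1;
          }
          public int getPlayerX()          { return playerX; }
          public int getBackgroundOffset() { return backgroundOffset; }
      }

      // view/GameCanvas.java -- owns the canvas, only reads the model and paints it
      public class GameCanvas extends java.awt.Canvas {
          private final GameModel model;
          public GameCanvas(GameModel model) { this.model = model; }

          @Override
          public void paint(java.awt.Graphics g) {
              g.drawString("background offset: " + model.getBackgroundOffset(), 10, 20);
              g.fillRect(model.getPlayerX(), 50, 10, 10);
          }
      }

      // controller/GameLoop.java -- implements Runnable and drives model + view
      public class GameLoop implements Runnable {
          private final GameModel model;
          private final GameCanvas canvas;
          public GameLoop(GameModel model, GameCanvas canvas) {
              this.model = model;
              this.canvas = canvas;
          }

          @Override
          public void run() {
              while (true) {
                  model.update();     // 1. advance game state
                  canvas.repaint();   // 2. ask the view to redraw
                  try { Thread.sleep(16); } catch (InterruptedException e) { return; }  // ~60 fps
              }
          }
      }
    With this split, the background is model state like everything else; the view just reads it each frame, which keeps the rendering code and the simulation independent of each other.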
