MII Implementation Architecture

Hi,
Due to the service maintenance cost of each server deployed, I've been asked to try to limit the number of servers used for a multi-site MII rollout.  The MII system requirements are for Operator/Management Production Reports, with a view to system integration at a later stage.  The current architecture options on the table are:
1. Local - one instance per production site (typical, and my usual approach)
2. Regional - one instance serving four or five Production Sites in close proximity (roughly 50 km)
3. Central - one instance serving all Production Sites Globally.
While I've successfully been able to convince the necessary parties that option 3 is not an option, I'm finding it difficult to build a convincing case for option 1 over option 2 (other than that it's the official/preferred way - money talks, I'm afraid).
My immediate reluctance for the regional approach is because:
1. Increased communication overhead will impact performance (especially for interactive screens).
2. Increased risk of communication failure to the source production systems (located at each site).
Point 1 is easy to test and measure, but Point 2 is what I'm having difficulty quantifying for this evaluation.  This will be a 12.x installation, so Query Data Buffering will be available (Tag and SQL), but I haven't used it extensively in a production environment, so I'm not sure it's a recommended route to rely on.  I'm also of the view that it's better to avoid the problem than to "fix" it.  Also, while the buffering is great for an integration/transactional environment, it doesn't help much with an operator screen/report - from the perspective of the operator waiting for data.
Does anyone have any experience/views on the Regional Approach, in particular my concerns on the communication failure, or am I being over paranoid?
Thanks.

Hi, Lawrence.  Here's my view, for what it's worth...
Since you're paying a license for each site anyway, it isn't a "license-based" cost decision - it's largely a question of the cost of administering multiple MII instances/servers and related hardware.  In the 11.X era, this cost was reasonably low.  With 12.X, it has increased a bit with the more frequent need for NW patches and management (or so I've been told by a few customers who I trust greatly).
A few key considerations are performance/responsiveness, availability, and overall application manageability.  As I recall, the networking infrastructure in S.A. can be a challenge in some remote locations, with limited bandwidth ISDN or DSL connections.  If there will be a lot of "trending" views by the users, mostly against data local to their site, you'll be wasting an enormous amount of network bandwidth (and response time) shipping data up to the regional or central server and then all the way back to the user.  Also, there is always the question of availability, and the likelihood of a local server on a local network being down versus a central/regional server with intermittent outages is important to consider.
One of the "hidden features" of MII that offers a good compromise is the "Virtual Server" (a special type of connector, not something like VMware).  This approach allows you to have MII systems at each site handling communications to historians/databases, while regional or central servers utilize these data connections remotely.  Customers have benchmarked performance and generally found that accessing a historian from a regional server, for example, is far more efficient and faster through a Virtual Server connection than connecting to the historian directly from the regional server.  The reason is often that the binary protocol MII uses is leaner than the vendor's underlying protocols.  Of course, you may find different results, but it is something to consider.
Similarly, you might want to consider application segmentation/partitioning, whereby you could create very ad-hoc "engineering" applications on the local MII server at each site, and do the more "corporate oriented" dashboards, reports, and ERP integration activities on the regional or central servers.  This way you can get the best of both worlds.

Similar Messages

  • Scope for MII Implementation

    Hi,
    What must be covered in the scope of an MII implementation? Does a template exist for this?
    Thanks,
    Raveen

    Raveen,
    I am not sure what you are talking about in terms of scope templates. The MII Best Practice guide covers typical architecture and the install guides on Service Marketplace discuss installation and configuration. Beyond installing NW and MII, the scope of MII applications really depends on data source connections and the output of MII (data broker vs. dashboards for example).
    Regards,
    Kevin

  • MII implementation timelines

    Hi All,
    I want to know the average number of months and the number of resources required to implement SAP MII for a pharma company with a multi-plant architecture. Also, the number of applications to integrate is around 10 to 12.
    Many thanks in advance..
    Regards,
    Pooja S.
    Edited by: PoojaShah on Sep 30, 2009 7:07 AM

    Pooja,
    You need to provide some more information about your requirements before anyone can estimate the effort and resources needed:
    - What is your system landscape (local/central MII servers, one or more MII servers)?
    - How experienced are your MII developers (beginner, experienced)?
    - How many people are involved in the project (small or large team, many interface owners)?
    - How many interfaces do you need (external DBs, one or more SAP systems)?
    - What kind of applications are requested (only routing, number of GUI pages, reporting)?
    Without knowing those factors it is hard to give a reasonable estimation.
    Michael

  • EBS R12.1.2 Implementation  - Architecture Doubt

    Hi guys,
    I have some questions about an architecture a client wants to implement.
    The customer has 12.1.2 on two nodes. One for the database (11.1.0.7) and the other for all the Application services.
    They want to add another Application node (with all the services) to serve another location's users/requests. So basically they want two apps nodes (each with all the services): one for the New York users and the other for the Los Angeles users, both working against the same DB node. They don't have the hardware to enable load balancing.
    Can this implementation work? Is it possible to have two web/forms nodes without load balancing?
    I know PCP will work, but I don't know about the web/forms tier.
    Any answer would be helpful.
    Thanks!

    Hi,
    "Can this implementation work? Is it possible to have two web/forms nodes without load balancing?"
    Without load balancing, I believe you cannot restrict the users from region 1 to the first node and the users from region 2 to the second node (or vice versa). Moreover, having two forms/web nodes accessing the same database node independently of each other is not supported (the ICX_PARAMETERS table should have a single entry, either Node 1 or Node 2, for the application URL), so you need to have a load balancer implemented -- please log an SR to confirm this with Oracle Support.
    Note: 727171.1 - Implementing Load Balancing On Oracle E-Business Suite - Documentation For Specific Load Balancer Hardware
    Thanks,
    Hussein

  • XI Implementation Architecture

    Hi All,
    For a project, we have multiple instances of SAP supporting different parts of the business.
    Would like to get feedback on experiences of implementing different architecture of XI:
    multi-tier
    point-to-point
    hub-spoke
    bus
    Would appreciate feedback.

    Quick answers;
    XI supports all of these architecture strategies:
    multi-tier: Yes this can be done; for instance via the portal or any other web based application. XI connects legacy applications to the front-end in this case the portal or any other application interested in this data.
    p-t-p: native Web services, plain HTTP or SOAP over HTTP is also supported by XI.
    hub-spoke: This can be implemented in XI by using the JMS adapter. As a matter of fact XI is kind of a sophisticated MOM product.
    bus: The whole XI concept is based on an ESB (Enterprise Service Bus) framework.
    Cheers, Roberto
    Message was edited by: Roberto Viana

  • MII implementation of WECo rule. Zone A

    Dear Forum,
    I would like to configure the SPC chart in MII to raise an alarm when a data point falls beyond 3 standard deviations from the center line. If I understand correctly, the Zone A alarm is activated when a data point falls in Zone A or beyond (that is, between 2 and 3 standard deviations, and beyond).
    Is there any way to alarm only on points beyond Zone A, or is my understanding of the rule wrong?
    I'm using MII 11.5
    Thanks in advance for the help,
    Jose Luis

    Hi Jose,
    I'm not an SPC expert by far, but from what I've seen, it looks like the areas of standard deviation fall between the Control Limits.  Perhaps you could use the Control Limit Alarm, which alarms on a single point outside of the Control Limits.
    Zone A is the region between two and three standard deviations from the centerline, so I don't see that the Zone A alarm would cover only points outside of 3 standard deviations.
    Kind Regards,
    Diana Hoppe
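
    A check like the one Jose describes can also be sketched outside of MII. The following is a minimal Java illustration (the class and method names are my own, not an MII API) that flags only the points beyond three standard deviations of a given center line:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: flag only points outside centerLine +/- 3*sigma,
    // i.e. beyond Zone A, given a known center line and sigma.
    public class SpcCheck {

        static List<Integer> beyondThreeSigma(double[] points,
                                              double centerLine,
                                              double sigma) {
            List<Integer> alarms = new ArrayList<>();
            for (int i = 0; i < points.length; i++) {
                // Distance from the center line, compared against 3 sigma.
                if (Math.abs(points[i] - centerLine) > 3 * sigma) {
                    alarms.add(i);
                }
            }
            return alarms;
        }

        public static void main(String[] args) {
            double[] data = {10.1, 9.8, 13.5, 10.0, 6.2};
            // With center line 10 and sigma 1, indices 2 and 4 are
            // beyond 3 sigma; points inside Zone A are not flagged.
            System.out.println(beyondThreeSigma(data, 10.0, 1.0)); // [2, 4]
        }
    }
    ```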

  • Running MII on a Wintel virtual environment + hybrid architecture questions

    Hi, I have two MII Technical Architecture questions (MII 12.0.4).
    Question1:  Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
    Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris.  Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants that require more horsepower.  While we have a preference for Solaris UNIX-based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel.  Does anybody know of any caveats or watch-outs around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • SAP MII - Introduction

    Hi Team,
    I am new to the MII topic. What is MII? Is MII an add-on to the R/3 or ECC server?
    How do we implement MII in our existing R/3 or ECC system?
    As a PP consultant, how can I contribute to an MII implementation?
    Is MII a separate application server?
    Request for your valuable input.
    Thanks
    psk.

    Hi psk,
    Please google SAP MII and review some of the information available.  If you then have specific questions, come back and post them. There is a lot of information available already to review from both SAP and from successful partner implementations.
    Regards, Mike (moderator)

  • SAP standard roles for Mii inside of objects?

    Hi,
    It is our practice to rename SAP standard roles we plan to use "as is" to our company's naming convention.  I am being told by an Mii implementer that Mii uses the standard role names in objects and that by changing these names to our convention, I will create "complications" in their implementation process.  I find this hard to believe, it would be a departure from what (little) I know about SAP and how they handle authorizations and roles.  It also seems to be very limiting when it comes to customization in the future.
    Is this true?  Does Mii name standard roles inside of objects? (These "objects" were not clearly defined to me and I plan on calling a meeting so they may show me examples.)
    Anyone else on Mii have this issue?

    As far as I know, in Mii a user typically needs at least one of these roles:
    SAP_XMII_User
    SAP_XMII_Developer
    SAP_XMII_Administrator
    You can of course add additional roles with the authorization the different users require using your own naming convention.
    I think this is what the Mii implementer is talking about.
    Good luck!

  • Using MII in a telnet environment

    Hi,
    I want to use MII for various local applications. Most of the time my users will have an internet web browser on a workstation or PDA, but I also have some factories where the mobile equipment is still telnet-based, and I don't want to replace it because of MII!
    Does anybody have experience with an application, compatible with MII, that can emulate MII web pages on a telnet character-based VT100 screen? I'm thinking of z/scope from Cybele http://www.cybelesoft.com/en/cybcontacts.htm.
    Regards,
    Aymeric de l'Hermuziere - SAP France

    Aymeric,
    Additionally, you can drop the file and then use this action to kick off a process in the Unix environment to retrieve it:
    File Runner Custom Action - The custom action block referenced in this document allows you to execute a file on the MII server and pass command-line arguments to it. You have the option to run the file synchronously or asynchronously relative to the MII application.
    Please let me know if this is something you could usefully leverage; we can use your feedback to get it rolled into the standard MII product. I have yet to receive feedback on this action, so it remains SDN-only.
    Sam
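
    The general pattern the File Runner action describes - launching an external file with command-line arguments, either waiting for it (synchronous) or not (asynchronous) - can be sketched in plain Java with ProcessBuilder. This is only an illustration of the pattern, not the MII action block itself; the class and method names are my own:

    ```java
    import java.io.IOException;

    // Sketch of launching an external file with command-line arguments.
    public class FileRunnerSketch {

        // Synchronous variant: start the process and block until it exits.
        static int runSync(String command, String... args)
                throws IOException, InterruptedException {
            String[] cmd = new String[args.length + 1];
            cmd[0] = command;
            System.arraycopy(args, 0, cmd, 1, args.length);
            Process p = new ProcessBuilder(cmd)
                    .inheritIO()      // forward the child's stdout/stderr
                    .start();
            return p.waitFor();       // wait for the exit code
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical example: run a command with one argument.
            // An asynchronous run would simply skip the waitFor() call.
            int exit = runSync("/bin/echo", "hello");
            System.out.println("exit=" + exit);
        }
    }
    ```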

  • User Management Strategy

    Hi everyone,
    I would like to discuss with you about User Management Strategy for multi-site MII implementations. What is the best architecture for the UME instances when you have MII users both on the corporate level and the shop floor level?
    Consider we don't have a central MII server.
    Regards,
    Henry

    User management can cause some difficulties when mixing disconnected-operation support with distributed MII servers while still wanting to use LDAP from corporate.  We have all used the phrase 'when SAP is unavailable', but what about 'when LDAP is unavailable'?  The application may be buffered, but the user logins would be the issue.
    Aside from some form of federated/replicated LDAP, I think the only option would be to maintain some essential backup local users in the UME.  I would imagine this has been encountered with Enterprise Portal, or any other NW Java apps, in the past, but the situation of a distributed NW server (plant- or region-based) may be a bit different.  The configuration of a solution would be done inside the UME, but the best practices in this regard are probably what you're after.
    I hope that some customers with more clear strategies in this area can share their insight in this thread.

  • Web service and EJB enterprise in JBoss

    Is Java a compiled language?
    Actually, Java is a compiled/interpreted language. See the links below. This is the best classification for the Java language, in my opinion. Read [_this thread_|http://forums.sun.com/thread.jspa?threadID=5320643&start=0&tstart=0] and give your opinion, too! You are very welcome in this interesting discussion. The more I participate in this forum, the more I learn. The more you participate the more you learn, too! Thank you very much for this forum, Sun!
    [_CLDC HotSpot Implementation Architecture Guide Chapter 10_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/VFP.html]
    +The 1.1.3 release of CLDC HotSpot Implementation included limited VFP support. This feature was supported only when running in interpreted mode. In this release, full vector floating point support is provided when the virtual machine is running in compiled mode.+
    [_Java Virtual Machines_|http://java.sun.com/j2se/1.4.2/docs/guide/vm/index.html]
    +Adaptive compiler - Applications are launched using a standard interpreter, but the code is then analyzed as it runs to detect performance bottlenecks, or "hot spots". The Java HotSpot VMs compile those performance-critical portions of the code for a boost in performance, while avoiding unnecessary compilation of seldom-used code (most of the program). The Java HotSpot VMs also use the adaptive compiler to decide, on the fly, how best to optimize compiled code with techniques such as in-lining. The runtime analysis performed by the compiler allows it to eliminate guesswork in determining which optimizations will yield the largest performance benefit.+
    [_CLDC HotSpot Implementation Architecture Guide Chapter 4_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/DynamicCompiler.html]
    +Two different compilers are contained in the CLDC HotSpot Implementation virtual machine: an adaptive, just-in-time (JIT) compiler and an ahead-of-time compiler. The JIT compiler is an adaptive compiler, because it uses data gathered at runtime to decide which methods to compile. Only the methods that execute most frequently are compiled. The other methods are interpreted by the virtual machine.+
    [_Java Tuning White Paper_|http://java.sun.com/performance/reference/whitepapers/tuning.html]
    +One of the reasons that it's challenging to measure Java performance is that it changes over time. At startup, the JVM typically spends some time "warming up". Depending on the JVM implementation, it may spend some time in interpreted mode while it is profiled to find the 'hot' methods. When a method gets sufficiently hot, it may be compiled and optimized into native code.+
    [_Frequently Asked Questions About the Java HotSpot VM_|http://java.sun.com/docs/hotspot/HotSpotFAQ.html]
    +Remember how HotSpot works. It starts by running your program with an interpreter. When it discovers that some method is "hot" -- that is, executed a lot, either because it is called a lot or because it contains loops that loop a lot -- it sends that method off to be compiled. After that one of two things will happen, either the next time the method is called the compiled version will be invoked (instead of the interpreted version) or the currently long running loop will be replaced, while still running, with the compiled method. The latter is known as "on stack replacement", or OSR.+
    [_Java Technology Fundamentals Newsletter Index - Making Sense of the Java Classes & Tools: Collection Interfaces, What's New in the Java SE 6 Platform Beta 2, and More_|http://java.sun.com/mailers/newsletters/fundamentals/2006/July06.html]
    +Java: A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture neutral, portable, high-performance, multithreaded, dynamic language.+
    [_Introduction to scripting in Java, Part 1_|http://www.javaworld.com/javaworld/jw-07-2007/jw-07-awscripting1.html?page=2]
    +Many of today's interpreted languages are not interpreted purely. Rather, they use a hybrid compiler-interpreter approach, as shown in Figure 1.3.+
    +In this model, the source code is first compiled to some intermediate code (such as Java bytecode), which is then interpreted. This intermediate code is usually designed to be very compact (it has been compressed and optimized). Also, this language is not tied to any specific machine. It is designed for some kind of virtual machine, which could be implemented in software. Basically, the virtual machine represents some kind of processor, whereas this intermediate code (bytecode) could be seen as a machine language for this processor.+
    +This hybrid approach is a compromise between pure interpreted and compiled languages, due to the following characteristics:+
    Because the bytecode is optimized and compact, interpreting overhead is minimized compared with purely interpreted languages.
    The platform independence of interpreted languages is inherited from purely interpreted languages because the intermediate code could be executed on any host with a suitable virtual machine.
    Lately, just-in-time compiler technology has been introduced, which allows developers to compile bytecode to machine-specific code to gain performance similar to compiled languages. I mention this technology throughout the book, where applicable.
    [_Compiled versus interpreted languages_|http://publib.boulder.ibm.com/infocenter/zoslnctr/v1r7/index.jsp?topic=/com.ibm.zappldev.doc/zappldev_85.html]
    Assembler, COBOL, PL/I, C/C++ are all translated by running the source code through a compiler. This results in very efficient code that can be executed any number of times. The overhead for the translation is incurred just once, when the source is compiled; thereafter, it need only be loaded and executed.
    Interpreted languages, in contrast, must be parsed, interpreted, and executed each time the program is run, thereby greatly adding to the cost of running the program. For this reason, interpreted programs are usually less efficient than compiled programs.
    +Some programming languages, such as REXX and Java, can be either interpreted or compiled.+
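
    The HotSpot behavior quoted above can be observed directly. Here is a minimal Java sketch (class and method names are my own): a method that becomes "hot" after many invocations, which the JVM will typically JIT-compile. Running it with -XX:+PrintCompilation shows the compilation events (the exact output varies by JVM and version):

    ```java
    // Run with: java -XX:+PrintCompilation HotLoop
    public class HotLoop {

        // A simple method that becomes "hot" when called many times.
        static long sumTo(long n) {
            long total = 0;
            for (long i = 1; i <= n; i++) {
                total += i;   // loop body the JIT will optimize
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            // Call repeatedly so the invocation counter crosses the
            // JIT compilation threshold; early calls run interpreted.
            for (int i = 0; i < 20_000; i++) {
                result = sumTo(1_000);
            }
            System.out.println(result); // 500500
        }
    }
    ```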


  • I can't understand this

    Is Java a compiled language?
    Actually, Java is a compiled/interpreted language. See the links below. This is the best classification for the Java language, in my opinion. Read [_this thread_|http://forums.sun.com/thread.jspa?threadID=5320643&start=0&tstart=0] and give your opinion, too! You are very welcome in this interesting discussion. The more I participate in this forum, the more I learn. The more you participate the more you learn, too! Thank you very much for this forum, Sun!
    [_CLDC HotSpot Implementation Architecture Guide Chapter 10_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/VFP.html]
    +The 1.1.3 release of CLDC HotSpot Implementation included limited VFP support. This feature was supported only when running in interpreted mode. In this release, full vector floating point support is provided when the virtual machine is running in compiled mode.+
    [_Java Virtual Machines_|http://java.sun.com/j2se/1.4.2/docs/guide/vm/index.html]
    +Adaptive compiler - Applications are launched using a standard interpreter, but the code is then analyzed as it runs to detect performance bottlenecks, or "hot spots". The Java HotSpot VMs compile those performance-critical portions of the code for a boost in performance, while avoiding unnecessary compilation of seldom-used code (most of the program). The Java HotSpot VMs also usesthe adaptive compiler to decide, on the fly, how best to optimize compiled code with techniques such as in-lining. The runtime analysis performed by the compiler allows it to eliminate guesswork in determining which optimizations will yield the largest performance benefit.+
    [_CLDC HotSpot Implementation Architecture Guide Chapter 4_|http://java.sun.com/javame/reference/docs/cldc-hi-2.0-web/doc/architecture/html/DynamicCompiler.html]
    +Two different compilers are contained in the CLDC HotSpot Implementation virtual machine: an adaptive, just-in-time (JIT) compiler and an ahead-of-time compiler. The JIT compiler is an adaptive compiler, because it uses data gathered at runtime to decide which methods to compile. Only the methods that execute most frequently are compiled. The other methods are interpreted by the virtual machine.+
    [_Java Tuning White Paper_|http://java.sun.com/performance/reference/whitepapers/tuning.html]
    +One of the reasons that it's challenging to measure Java performance is that it changes over time. At startup, the JVM typically spends some time "warming up". Depending on the JVM implementation, it may spend some time in interpreted mode while it is profiled to find the 'hot' methods. When a method gets sufficiently hot, it may be compiled and optimized into native code.+
    [_Frequently Asked Questions About the Java HotSpot VM_|http://java.sun.com/docs/hotspot/HotSpotFAQ.html]
    +Remember how HotSpot works. It starts by running your program with an interpreter. When it discovers that some method is "hot" -- that is, executed a lot, either because it is called a lot or because it contains loops that loop a lot -- it sends that method off to be compiled. After that one of two things will happen, either the next time the method is called the compiled version will be invoked (instead of the interpreted version) or the currently long running loop will be replaced, while still running, with the compiled method. The latter is known as "on stack replacement", or OSR.+
    [_Java Technology Fundamentals Newsletter Index - Making Sense of the Java Classes & Tools: Collection Interfaces, What's New in the Java SE 6 Platform Beta 2, and More_|http://java.sun.com/mailers/newsletters/fundamentals/2006/July06.html]
    +Java: A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture neutral, portable, high- performance, multithreaded, dynamic language.+
    [_Introduction to scripting in Java, Part 1_|http://www.javaworld.com/javaworld/jw-07-2007/jw-07-awscripting1.html?page=2]
    Many of today's interpreted languages are not interpreted purely. Rather, they use a hybrid compiler-interpreter approach, as shown in Figure 1.3.
    In this model, the source code is first compiled to some intermediate code (such as Java bytecode), which is then interpreted. This intermediate code is usually designed to be very compact (it has been compressed and optimized). Also, this language is not tied to any specific machine. It is designed for some kind of virtual machine, which could be implemented in software. Basically, the virtual machine represents some kind of processor, whereas this intermediate code (bytecode) could be seen as a machine language for this processor.
    This hybrid approach is a compromise between pure interpreted and compiled languages, due to the following characteristics:
    Because the bytecode is optimized and compact, interpreting overhead is minimized compared with purely interpreted languages.
    The platform independence of interpreted languages is inherited from purely interpreted languages because the intermediate code could be executed on any host with a suitable virtual machine.
    Lately, just-in-time compiler technology has been introduced, which allows developers to compile bytecode to machine-specific code to gain performance similar to compiled languages. I mention this technology throughout the book, where applicable.
    [_Compiled versus interpreted languages_|http://publib.boulder.ibm.com/infocenter/zoslnctr/v1r7/index.jsp?topic=/com.ibm.zappldev.doc/zappldev_85.html]
    Assembler, COBOL, PL/I, and C/C++ are all translated by running the source code through a compiler. This results in very efficient code that can be executed any number of times. The overhead for the translation is incurred just once, when the source is compiled; thereafter, it need only be loaded and executed.
    Interpreted languages, in contrast, must be parsed, interpreted, and executed each time the program is run, thereby greatly adding to the cost of running the program. For this reason, interpreted programs are usually less efficient than compiled programs.
    Some programming languages, such as REXX and Java, can be either interpreted or compiled.
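    The warm-up effect the HotSpot FAQ describes is easy to observe for yourself. Here is a minimal, illustrative sketch (the class name, method, and iteration counts are mine, not from any of the quoted sources) that times the same work once "cold" and once after the method has had a chance to become hot:

    ```java
    public class WarmupDemo {
        // A deliberately "hot" method: called many times so the JIT may compile it.
        static long sumOfSquares(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += (long) i * i;
            }
            return total;
        }

        // Time a single invocation in nanoseconds.
        static long timeOnce(int n) {
            long start = System.nanoTime();
            sumOfSquares(n);
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            long cold = timeOnce(1_000_000);       // likely still interpreted
            for (int i = 0; i < 10_000; i++) {     // warm up: make the method "hot"
                sumOfSquares(1_000);
            }
            long warm = timeOnce(1_000_000);       // likely JIT-compiled by now
            System.out.println("cold(ns)=" + cold + " warm(ns)=" + warm);
        }
    }
    ```

    On most HotSpot builds the second measurement comes out noticeably lower, though the exact numbers vary by JVM and hardware, which is precisely the measurement difficulty the first quote is getting at.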

    Why is there a ";" right after the condition? And why do you follow "else" with another bare condition? You probably want "else if". Try this:
      if (a.length() == 1) {  // I always open and close { } to make my code more readable and avoid errors
        a = "0" + a;
      } else if (a.length() == 3) {
        a = a.substring(1, 3);
      }
    Hope this helps
    Zerjio
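    For what it's worth, those two branches look like they normalize a numeric string to exactly two characters (pad a single digit with a leading zero; keep the last two characters of a three-character value). A self-contained version of that idea might look like this -- the class and method names here are mine, not from the original post:

    ```java
    public class TwoDigits {
        // Normalize a numeric string to two characters: "7" -> "07", "123" -> "23".
        static String toTwoDigits(String a) {
            if (a.length() == 1) {
                a = "0" + a;
            } else if (a.length() == 3) {
                a = a.substring(1, 3);  // characters at index 1 and 2
            }
            return a;
        }

        public static void main(String[] args) {
            System.out.println(toTwoDigits("7"));    // prints "07"
            System.out.println(toTwoDigits("123"));  // prints "23"
            System.out.println(toTwoDigits("45"));   // prints "45" (unchanged)
        }
    }
    ```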

  • ORA-12154 Connection error from HFM to Oracle Database

    Hi,
    I am trying to configure Hyperion HFM but cannot write to the HFM database.
    The implementation architecture:
    Hyperion 11.1.2.2 (with all the required patches for HFM, FDM, Shared Services, Workspace and Oracle Application Development)
    Server 1:
    Windows Server 2008 x64
    Installed products: Foundation (EPMA, CalcManager), BI, HFM web components and ADM driver
    Configured products: Foundation(EPMA, CalcManager), BI.
    Database Client: 11gR2 x64
    Server 2:
    Windows Server 2008 x64
    Installed products: HFM, FDQM
    Configured Products: FDQM, HFM
    Database Client: 11gR2 x32, 11gR2 x64 (x32 version installed first)
    Server 3:
    Database: Oracle 11.2.0.2
    All the products from server 1 are working fine, and FDQM (server 2) is also working fine, but when I try to do any action related to the HFM database the system fails.
    I have tested the connection in these scenarios:
    1. SQL Developer: successful! I can create tables, views, etc. Double-checking the user privileges, it has all that are required.
    2. tnsping: successful!
    3. HFMApplicationCopy utility: successful, both using a UDL file and writing the connection parameters directly.
    4. EPM System Configurator: the configurator successfully validates the database connection information, but does not create the tables on the database. No errors in the configtool log.
    5. EPM Diagnostic Tool: fails with this error message:
    ------------STARTING VALIDATION SCRIPTS----------
    LOGGING IN HFM....
    CREATING APPLICATION....
    ERROR: Unable to CreateApplicationCAS
    Number (dec) : -2147215936
    Number (hex) : &H800415C0
    Description  : <?xml version="1.0"?>
    <EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>1356</Line><Ver>11.1.2.2.300.3774</Ver><PSec><Param><server_name></Param></PSec></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>936</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>4096</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>
    Source       : Hyperion.HFMErrorHandler.1
    ERROR: while Application created
    7. HFM Classic application creation: fails with the following error:
    Error*11*<user_name>*10/19/2012 08:30:52*CHsxServer.cpp*Line 90*<?xml version="1.0"?>
    <EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>
    8. EPMA Application deployment: fails with same message.
    Please help me with some insights on this problem, I have tried everything but nothing works.
    Regards
    Edited by: Otein on 19-oct-2012 14:04

    Hi,
    I have solved one of my problems, the one that kept HFM from connecting to the Oracle database.
    I just changed TNSNAMES.ORA, like this:
    Initial tnsnames.ora
    PRUEBA.WORLD =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (LOAD_BALANACE = ON)
          (FAILOVER = ON)
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
          )
          (CONNECT_DATA =
            (SERVICE_NAME = <service_name>)
          )
        )
      )
    Modified tnsnames.ora
    PRUEBA.WORLD =
      (DESCRIPTION =
        (LOAD_BALANACE = ON)
        (FAILOVER = ON)
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = <service_name>)
        )
      )
    I just deleted the line "(DESCRIPTION_LIST =" and its corresponding closing parenthesis. I did this because in the configuration utility log I saw this line:
    TNS parsing: Entry: DESCRIPTION_LIST [[Address: Protocol:(TCP) Host:(<server_name>) Port:(1521) SID:(<service_name>)]]
    So, if the applications were trying to connect using the connection descriptor DESCRIPTION_LIST, the driver could not recognize DESCRIPTION_LIST as a valid one.
    There is a lot going on behind the scenes when you work with Oracle Database as the repository; maybe there is some other way to address this issue, but it worked for me. Hope it can help you too.
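    After hand-editing an entry like this, it can help to sanity-check that the entry's top-level keyword is the one the driver expects and that the parentheses still balance. A rough, hypothetical sketch of such a check (this is not an Oracle tool; the class and method names are mine):

    ```java
    public class TnsCheck {
        // Return the first top-level keyword inside an entry body such as
        // "(DESCRIPTION = ...)" -- i.e. the identifier right after the first '('.
        static String topLevelKeyword(String entryBody) {
            int open = entryBody.indexOf('(');
            if (open < 0) return "";
            StringBuilder word = new StringBuilder();
            for (int i = open + 1; i < entryBody.length(); i++) {
                char c = entryBody.charAt(i);
                if (Character.isLetterOrDigit(c) || c == '_') {
                    word.append(c);
                } else {
                    break;
                }
            }
            return word.toString();
        }

        // Check that every '(' has a matching ')' and none closes too early.
        static boolean parensBalanced(String s) {
            int depth = 0;
            for (char c : s.toCharArray()) {
                if (c == '(') depth++;
                else if (c == ')') depth--;
                if (depth < 0) return false;
            }
            return depth == 0;
        }

        public static void main(String[] args) {
            String entry = "(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db1)(PORT = 1521))"
                         + "(CONNECT_DATA = (SERVICE_NAME = PRUEBA)))";
            System.out.println(topLevelKeyword(entry));  // prints "DESCRIPTION"
            System.out.println(parensBalanced(entry));   // prints "true"
        }
    }
    ```

    In the case above, the same check on the original entry would report DESCRIPTION_LIST as the top-level keyword, which matches what the configuration utility log showed.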

  • My company is looking for help

    My group is looking for a Security Architect with strong knowledge of Identity Manager. We are a very distinguished group in NYC and need someone to help us put it together. Can anyone help? Thanks

    Feel Free to give me a call. We would be happy to speak to you.
    As a quick summary, our real differentiator is that we have a repeatable, scalable implementation architecture and installer that sits on top of your IDM solution. Not only can it greatly reduce your coding time and risk, it is also trainable and scalable. This way your future phases will require less work than your first phase (the way it should be), while still conforming and "plugging in" to a standard architecture. This, of course, maintains the consistency of the application as it matures.
    We have implemented this architecture at many clients nationwide, from higher education to retail to defense, and would be happy to discuss ways in which we may be able to partner with and assist you.
    Feel free to contact me. I'd be happy to share with you more about it.
    Dana Reed
    [email protected]
