10.1.3 Production deployments

We're doing a little internal intelligence gathering, trying to figure out how many customers have already deployed, or will deploy by the end of March 2008, BPEL or ESB applications on 10.1.3 or later. Please send me an email with the customer name, and indicate whether it is ESB, BPEL, or both. Thanks very much.
dave.berry AT oracle DOT com

Thanks all. Keep 'em coming.

Similar Messages

  • Production Deployment Topologies

    Hi all,
    I've been reading up on Coherence deployment topologies and would like some assistance on the best practice for deployment into a Production environment.
    In production deployments, what are the recommended best practices?
    I understand that:
    * Coherence cache clusters should be configured to run on separate multicast/unicast addresses to avoid impacting any other applications.
    However:
    * Should Coherence cache servers be deployed in their own JVMs, separate from the applications that use them?
    * Or should the Coherence cache servers be configured to use the same JVM as the applications?
    * In a multi-container environment, where many containers may host many different applications, what is the best deployment topology?
    * If Coherence cache servers and applications are separated into different JVMs, how should they be configured to communicate with each other (e.g., Extend TCP proxy)?
    Any help would be appreciated.

    Hello,
    I suggest taking a look at this document (especially towards the bottom):
    http://coherence.oracle.com/display/COH34UG/Best+Practices
    In general we do recommend separate JVMs to host cache servers. As you mention, you have the option of having cache client JVMs either join the cluster or connect to a proxy using Extend TCP. Here are the pros and cons of each approach:
    Cluster Membership
    Pros: less network "hops" per operation, highest performance
    Cons: for best results, requires clients to be "near" servers, preferably in the same subnet/switch; poorly tuned GC on clients can affect cluster
    Extend
    Pros: allows for more flexible network topology since it uses TCP (i.e. clients can be in a separate network or separated from storage nodes through a firewall), poorly tuned GC will not have as adverse an effect
    Cons: requires more configuration, more "hops" per operation (although affinity with a properly tuned near cache can make this moot)
    Thanks,
    Patrick
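    For illustration, the server side of the Extend approach Patrick describes might look like the sketch below. This is an assumption-laden example, not from the thread: the element names follow the Coherence 3.x cache configuration schema, and the address and port are placeholders.
    <caching-schemes>
      <proxy-scheme>
        <service-name>ExtendTcpProxyService</service-name>
        <acceptor-config>
          <tcp-acceptor>
            <local-address>
              <!-- placeholder host/port where Extend clients connect -->
              <address>192.168.1.10</address>
              <port>9099</port>
            </local-address>
          </tcp-acceptor>
        </acceptor-config>
        <autostart>true</autostart>
      </proxy-scheme>
    </caching-schemes>
    Clients would then use a matching <remote-cache-scheme> pointing at the same host and port; as Patrick notes, a properly tuned near cache in front of it can recover much of the read performance lost to the extra hop.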

  • Oracle Forms 11gR2 - Cannot deploy locally from within Forms

    I cannot deploy locally from within Forms. The server is up and running, and I can deploy the form by putting the correct URL in the address line of the browser window. When I try to deploy from within Forms, it comes up with some crazy URL that differs every time.
    This is what the URL should be and this works from the browser:
    http://machinename:7002/forms/frmservlet?form=WRD608ADMIN_11g.fmx&userid=&otherparams=useSDI=yes
    Here is one of the URLs it came up with when I try to run it from within Forms:
    http://localhost:60231/lysVL2VjqT33znjfvLwanktVRxTIc6dEwVeRNXXRmhYU2qjf
    Localhost is always there, but the rest varies.
    In Forms, I have the Preferences, Runtime set to:
    http://machinename:7002/forms/frmservlet
    where machinename is my PC; it is the same in this address as in the URL above that works directly from the browser.
    So what am I missing?
    Thank you in advance.

    Generally speaking, manually editing any of the configuration files that are managed by the WLS Console or EM is discouraged. In this case, default.env is managed by EM, so changes to the file should be made through EM. If, however, you want to alter the file manually, the following is likely the best way to accomplish it:
    1.  Stop the WLS Admin Server and Node Manager
    2.  Locate the proper file you wish to edit. By "proper" I mean that there are several copies of most config files. Most of the config files found in the Oracle Home are actually template files and are not used at runtime; altering these will not give you the change you want. The default.env you want would be here (assuming Windows):
    C:\Oracle\Middleware\user_projects\domains\ClassicDomain\config\fmwconfig\servers\WLS_FORMS\applications\formsapp_11.1.2\config
    If you are using a "Development" installation type, the above path will reflect AdminServer instead of WLS_FORMS.  Remember that Development installations are not for multi-user purposes.  Production deployments require the "Deployment" installation type, which can also include the Builders.
    Do NOT make any changes yet.
    3.  Once you find the correct file, create a backup copy.  Then open the file for edit (not the backup).
    4.  Make the desired changes and save.
    5.  Restart Node Manager and Admin Server if you plan to use them.
    For more information about using EM to manage your configuration, refer to the product documentation:
    http://docs.oracle.com/cd/E38115_01/doc.111210/e24477/configure.htm#CHDCCGHI
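    For orientation, default.env is a plain name=value file. The excerpt below is a hypothetical sketch: ORACLE_HOME and FORMS_PATH are standard entries, but the values shown are invented, so check your own file before copying anything.
    # hypothetical excerpt from default.env -- values are placeholders
    ORACLE_HOME=C:\Oracle\Middleware\Oracle_FRHome1
    FORMS_PATH=C:\Oracle\Middleware\Oracle_FRHome1\forms;C:\myforms
    Editing through EM writes the same entries, but keeps the managed copy and the runtime copy in sync, which is why it is the recommended route.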

  • 11.1.2.1.0 or 11.1.1.6.0 (and how to migrate)

    1. I am confused about which version of JDeveloper/ADF and WebLogic is the right one. OTN at http://www.oracle.com/technetwork/developer-tools/jdev/downloads/index.html offers 11.1.2.1.0 as the primary choice.
    But is version 11.1.2.1.0 meant for deployment on a production WebLogic server, or just as a predecessor of 12c to play with?
    Why am I asking this? Because I hear from Oracle people who do production deployments of WebLogic Server that it causes a lot of trouble, and that only version 11.1.1.6.0 is suitable for production right now.
    2. What will happen in the future? Will 11.1.2.1.0 and 11.1.1.6.0 evolve into 12c?
    I don't know what to advise my customer.
    3. I started a project in 11.1.2.1.0. How can it be converted to 11.1.1.6.0?
    Sašo

    Answer 1 - I have heard from customers going live successfully on 11.1.2.x, so it is definitely meant for production use, though only with ADF applications; SOA and WebCenter are not supported on it.
    Answer 2 - On roadmaps, these posts will help (a good read, though a little old!):
    What's after JDeveloper 11?
    Choosing JDeveloper 11g R1 or JDeveloper 11g R2
    Roadmap for future releases by Oracle, plus compatible WLS versions for the 11.1.1.6 release:
    https://blogs.oracle.com/onesizedoesntfitall/entry/adf_runtimes_vs_wls_versions
    Answer 3 - I am not really sure whether it is possible.

  • Possible to get data from a partly optimized/stripped core file?

    Hello,
    This may not be possible, but I figured it was worth asking about.
    I've got a C/C++ GUI application, compiled with Solaris Studio 12.3, that is experiencing an infrequent crash when compiled for production and running on production boxes. This is on Solaris 10 for x86, running in 64-bit mode. Most of the app is in libraries which are statically linked.
    I am working on trying to replicate the issue in a development environment, but have not had luck yet. In any case, it would be interesting to know what kind of data can be gleaned postmortem from the core file I've got access to.
    The application is actually a small "main.c" file which is compiled and linked in debug mode with "-g" and no optimization, but this thin wrapper calls into the main logic in statically linked libraries which are optimized and not built in debug mode. (See the call stack below.)
    From the core file :
    1) For functions in the call stack that have names, can I get the value of one of the parameters?  I ask because several such functions take pointers to structs with data that should be very useful.
    2) For functions in the call stack that appear as ??????, is it possible to determine at least what .o or .a file they came from?  This could help narrow things down.
    Some basic Googling indicates that either of the above may not be trivial or even possible.  But I'm wondering if the fact that we've got a "main.c" debuggable wrapper might somehow help.
    As a related question, pstack produces sensible output, but dbx shows the error: "dbx: internal error: could not iterate over load objects -- link-maps are not initialized".  Is there some flag I need to supply to dbx?
    Thank you for any help,
    David
    Background info:
    I've been unable to replicate this on non-production deployments, but the machines do differ a bit. Eventually I will be able to borrow a production box to deploy an instrumented binary, but for now all I've got is a core file and access to source.
    The core was generated with gcore while the app was displaying a popup from its SIGABRT cleanup handler. The production build scripts do some binary stripping, but I'm not yet sure where that is done.
    Here is the (slightly cleaned up) output of pstack for the core file:
    fffffd7ffeb3244a nanosleep (fffffd7fffdfd4b0, 0)
    0000000000514485 ZWidget_ModalEventLoop () + 65
    00000000004f74a9 ZWidget_ShowPopup () + 4a9
    000000000049d2ab ???????? ()
    fffffd7ffeb2dd16 __sighndlr () + 6
    fffffd7ffeb225e2 call_user_handler () + 252
    fffffd7ffeb2280e sigacthandler (6, 0, fffffd7fffdfd640) + ee
    --- called from signal handler with signal 6 (SIGABRT) ---
    fffffd7ffeb3351a _lwp_kill () + a
    fffffd7ffead81b9 raise () + 19
    fffffd7ffeab6b4e abort () + 5e
    000000000052c3bc ZUtil_Query () + 3c
    000000000059b66e ZUtil_QueryString () + 3e
    00000000004a1e2a ???????? ()
    00000000004a0879 ???????? ()
    000000000058b303 ???????? ()
    000000000052d517 ZUtil_Set () + 767
    00000000004f4805 ZUtil_DBSet () + 35
    00000000005094b5 ZWidget_ProcessCallback () + 465
    0000000000516814 ???????? ()
    fffffd7fff242424 XtCallCallbackList () + 114
    fffffd7ffef84d2e ActivateCommon () + 126
    fffffd7ffef84b72 Activate () + 1e
    fffffd7fff244efa HandleActions () + 14a
    fffffd7fff24b1b7 HandleComplexState () + 177
    fffffd7fff243a9e _XtTranslateEvent () + 4e
    fffffd7fff24382a XtDispatchEventToWidget () + 2ea
    fffffd7fff2430ee _XtDefaultDispatcher () + 15e
    fffffd7fff242db6 XtDispatchEvent () + 106
    00000000005142df ZWidget_ProcessEvent () + ff
    0000000000514099 ZWidget_ProcessEvents () + 19
    00000000005ac67a ZEventLoop_ProcessEvents () + 5a
    00000000005ac528 ZEventLoop_Execute () + 48
    000000000049d133 Main () + c93
    000000000049bdf9 main () + 9
    000000000049bc7b ???????? ()

    Thanks for reporting this problem.
    >1) For functions in the call stack that have names, can I get the value of one of the parameters?  I ask because several such functions take pointers to structs with data that should be very useful.
    Use compiler option -preserve_argvalues={none|simple|complete} to preserve incoming argument values. Note that this feature was introduced in Oracle Solaris Studio 12.4.
    You may also be interested in a new option in Oracle Solaris Studio 12.4 which provides much finer-grained control over debug information, which allows you to choose how much information is provided and to reduce the amount of disk space needed for the executable. Dev Tip: How to Get Finer-Grained Control of Debugging Information.
    >2) For functions in the call stack that appear as ??????, is it possible to determine at least what .o or .a file they came from?  This could help narrow things down.
    The following two commands may help:
    where -l                      # include the library name with each function name
    whereis -a <addr-of-?????>    # print the location of an address expression
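    For context, a session using those commands might look like the sketch below (the paths and the frame address are placeholders, not values from this thread):
    $ dbx ./myapp ./core          # load the executable together with its core file
    (dbx) where -l                # stack trace, each frame tagged with its load object
    (dbx) whereis -a 0x49d2ab     # map an anonymous frame address to an .o/.a/library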
    >As a related question, pstack produces sensible output, but dbx shows the error: "dbx: internal error: could not iterate over load objects -- link-maps are not initialized".  Is there some flag I need to supply to dbx?
    This may be caused by a corefile mismatch. See the dbx online help ("help core mismatch") for suggestions.
    Hope this helps.

  • Is the EJB 3.0 specification finalized?

    Hi,
    We have a new Java EE project that we are planning to build with EJB 3.0, but I don't know whether that specification is finalized. We are also looking at application servers; we work with WebLogic, but it does not yet have full support for EJB 3.0. I don't know whether using EJB 2.1 would be better than EJB 3.0, given that there isn't an application server with full EJB 3.0 support.

    Yes, the EJB 3.0 specification went final along with the release of the Java EE 5 platform last May. Here's the JCP page:
    http://www.jcp.org/en/jsr/detail?id=220
    Sun has a complete implementation of Java EE 5 available for free that can also be used for production deployments:
    http://java.sun.com/javaee/downloads/index.jsp
    At this time, there are only two other compatible Java EE 5 products: TmaxSoft and SAP. A number of other licensees are in the process of getting their products certified. You can also see the latest list of certified implementations on our compatibility page:
    http://java.sun.com/javaee/overview/compatibility.jsp
    --ken
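    To give a sense of what the finalized programming model looks like: an EJB 3.0 session bean is just an annotated POJO, with no home interface or deployment descriptor required. A minimal sketch with invented names (both types shown together for brevity; in a real project each public type goes in its own source file):
    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    public interface Greeter {
        String greet(String name);
    }

    @Stateless
    public class GreeterBean implements Greeter {
        // The container handles pooling, transactions and remoting.
        public String greet(String name) {
            return "Hello, " + name;
        }
    }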

  • NFS vs ISCSI for Storage Repositories

    Does anyone have any good guidance on using NFS vs. iSCSI for larger production deployments of OVM 3?
    My testing has been pretty positive with NFS, but other than the documented "it's not as fast as block storage" and the fact that there are no instant clones (no OCFS2), has anyone else weighed the two for OVM? If so, what did you choose, and why?
    Currently we are testing NFS that's presented from a Solaris HA cluster servicing a ZFS pool (basically mimicking the ZFS 73xx and 74xx appliances), but I don't know how the same setup would perform if the ZFS pool grew to 10TB of running virtual disk images.
    Any feedback?
    Thanks
    Dave

    Dave wrote:
    > Would you personally recommend against using one giant NFS mount to store VM disk images?
    I don't recommend against it; it's just most often the slowest possible storage solution in comparison to other mechanisms. NFS cannot take advantage of any of the OCFS2 reflinking, so guests must be fully copied from the template, which is time-consuming. Loop-mounting a disk image on NFS is less efficient than loop-mounting it via iSCSI or directly in the guest. FC-SAN is usually the most efficient storage, but bonded 10Gbps interfaces for NFS or iSCSI may now be faster. If you have dual 8Gbps FC HBAs vs. dual 1Gbps NICs for NFS/iSCSI, the FC SAN will win.
    Essentially, you have to evaluate what your critical success factors are and then make storage decisions based on that. As you have a majority of Windows guests, you need to present the block devices via Oracle VM, so you need to use either virtual disk images (which are the slowest, but easiest to manage) or FC/iSCSI LUNs presented to the guest (which are much faster, but more difficult to manage).
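    For concreteness, the two presentation styles reduce to something like the following commands on an Oracle VM server (a sketch only; the filer address, export path, repository path and target portal are invented):
    # NFS repository: one big mount holding virtual disk images
    mount -t nfs filer.example.com:/export/ovmrepo /OVS/Repositories/repo1
    # iSCSI: discover LUNs on a target portal and log in to present them as block devices
    iscsiadm -m discovery -t sendtargets -p 192.168.10.5
    iscsiadm -m node --login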

  • Diagnostic doesn't work properly

    Hi all,
    we have a problem with diagnostics. Both locally and remotely, the diagnostic tool seems to write the diagnostic information only when "it wants" (sometimes it does, sometimes it doesn't).
    Here are our configuration files:
    ServiceDefinition.csdef
    <ServiceDefinition name="Cryptobrand_compress" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-01.2.3">
      <WebRole name="Compression" vmsize="ExtraSmall">
        <Sites>
          <Site name="Web">
            <Bindings>
              <Binding name="Endpoint1" endpointName="Endpoint1" />
            </Bindings>
          </Site>
        </Sites>
        <Endpoints>
          <InputEndpoint name="Endpoint1" protocol="http" port="80" />
        </Endpoints>
        <Imports>
          <Import moduleName="Diagnostics" />
        </Imports>
        <LocalResources>
          <LocalStorage name="Compression.svclog" sizeInMB="1000" cleanOnRoleRecycle="false" />
        </LocalResources>
      </WebRole>
    </ServiceDefinition>
    ServiceConfiguration.Local.cscfg
    <ServiceConfiguration serviceName="Cryptobrand_compress" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="4" osVersion="*" schemaVersion="2014-01.2.3">
      <Role name="Compression">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>
    ServiceConfiguration.Cloud.cscfg
    <ServiceConfiguration serviceName="Cryptobrand_compress" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="4" osVersion="*" schemaVersion="2014-01.2.3">
      <Role name="Compression">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=betatwo;AccountKey=OUR_ACCOUNT_KEY==" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>
    Web.config
    <configuration>
      <!--  To collect diagnostic traces, uncomment the section below or merge with existing system.diagnostics section.
            To persist the traces to storage, update the DiagnosticsConnectionString setting with your storage credentials.
            To avoid performance degradation, remember to disable tracing on production deployments.  -->
      <system.diagnostics>     
        <sharedListeners>
          <add name="AzureLocalStorage" type="Compression.AzureLocalStorageTraceListener, Compression"/>
        </sharedListeners>
        <sources>
          <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
            <listeners>
              <add name="AzureLocalStorage"/>
            </listeners>
          </source>
          <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
            <listeners>
              <add name="AzureLocalStorage"/>
            </listeners>
          </source>
        </sources> 
        <trace>
          <listeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
              name="AzureDiagnostics">
              <filter type="" />
            </add>
          </listeners>
        </trace>
      </system.diagnostics>
      <system.web>
        <compilation debug="true" targetFramework="4.5.1" />
      </system.web>
      <system.serviceModel>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- To avoid disclosing metadata information, set the value below to false before deployment -->
              <serviceMetadata httpGetEnabled="true"/>
              <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
              <serviceDebug includeExceptionDetailInFaults="false"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
      </system.serviceModel>
      <system.webServer>
        <modules runAllManagedModulesForAllRequests="true"/>
        <!--
            To browse web app root directory during debugging, set the value below to true.
            Set to false before deployment to avoid disclosing web app folder information.
        -->
        <directoryBrowse enabled="true"/>
      </system.webServer>
    </configuration>
    Note: if we try to "View/Update Diagnostic Data" from Server Explorer, we receive the following exception:
    Could not retrieve the current diagnostic configuration or this role instance.
    Thank you for your support.
    Attilio Gelosa

    Hi Attilio,
    Thanks for posting!
    From your description, I suggest you try these approaches:
    1. Enable and configure the Diagnostics setting on the role; your project will then create a new file, diagnostics.wadcfg.
    2. Alternatively, you could add the configuration in code, like this:
    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    CloudStorageAccount cloudStorageAccount =
        CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));
    // Transfer the collected performance counters to storage every minute
    config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    // Add the perf counters to sample
    config.PerformanceCounters.DataSources.Add(
        new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
    config.PerformanceCounters.DataSources.Add(
        new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Memory\Available MBytes",
            SampleRate = TimeSpan.FromSeconds(30)
        });
    DiagnosticMonitor diagMonitor = DiagnosticMonitor.Start(
        "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    You could add this code to your OnStart() method.
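    For completeness, a minimal sketch of where that code lives, assuming the standard web role template (the class name is whatever your project generated):
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // ... diagnostic monitor configuration from the snippet above ...
            return base.OnStart();
        }
    }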
    Please see these tutorials:
    http://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    http://channel9.msdn.com/Events/windowsazure/Windows-AzureConf-2013/Debugging-and-Monitoring-Windows-Azure-Cloud-Services
    http://msdn.microsoft.com/en-us/magazine/ff714589.aspx
    Regards,
    Will

  • RE: (forte-users) Sv: (forte-users) The Death of Forte

    This is what I got today:
    Statement of Direction
    Sun Microsystems, Inc.
    Forté 4GL(tm) Product (formerly the Forté Application Environment)
    Product Context
    · Forté 4GL is an award-winning, proven product with many unique advantages for building enterprise business systems that are distributed, that involve the integration of existing business systems as well as new functionality, and that target heterogeneous runtime environments.
    · Forté 4GL is recognized by Gartner Group as the most successful Enterprise Application Development Tool.
    · The Sun Microsystems, Inc. (SMI) development tools group (formerly Forté Software, Inc.) has a strong internal commitment to Forté 4GL. Forté Fusion is written with, and is currently being enhanced with, Forté 4GL.
    · The SMI development tools group intends to actively enhance and promote Forté 4GL for the indefinite future. The best opportunity for attracting new customers is to leverage the ability of Forté 4GL to easily build powerful shared business services (server components) that can be accessed by non-Forté clients (e.g., browsers, Java clients) and that can easily integrate with new and existing business systems.
    · The product enhancement plan calls for continuing to issue incremental releases approximately twice a year. To speed the release of new functionality, new features will be included with "preview status." This means that the overall release can support production deployments, but that the features marked "preview" are certified for development and demos.
    · The planned contents of the next two releases are indicated below. Users should not expect any features other than those on the list. The contents of subsequent releases will be determined approximately a year in advance.
    · SMI has retained the Forté field sales organization as an independent unit whose primary product offerings are Forté 4GL and Forté Fusion. Continued volume sales of Forté 4GL remain the foundation of our business plan.
    Mid-Year Release
    · Tentatively labeled "release 3.5", to be distributed as a free product enhancement for customers under maintenance
    · Scheduled for Summer 2000
    · Defining features:
      - Introspection (reflection): the ability for an object to describe itself at runtime
      - Improved integration with applications developed using Forté-for-Java Community Edition(tm) (formerly NetBeans)
      - Platform support improvements to track important operating system and database vendor activity
    · Target features:
      - Display system enhancements (e.g., Motif 2 support, line arrowheads, window refresh control, editable outline fields)
      - Dynamic library loading
      - Improved CORBA/IIOP support
      - Improved XML and XSLT class support
      - JMQ support
    End-Year Release
    · Tentatively labeled "release 3.6", to be distributed as a free product enhancement for customers under maintenance
    · Scheduled for year end 2000
    · Defining features:
      - Any release 3.5 target features that were not included in 3.5
      - Generation of EJB interfaces for R3 service objects
      - Platform support improvements to track important operating system and database vendor activity
    · Target features:
      - COBOL record handling as part of the OS390 transaction adapter
      - Improved runtime security
      - Interface classes for access to Netscape Server 4.0 and possibly other web servers
    Longer Term Product Directions
    1. TOOL code to Java code migration. Neither release 3.5 nor 3.6 will contain an automated solution in this area. Technical differences between TOOL and Java make a 100% automated conversion all but impossible. A workable solution is likely to involve a combination of tools and services.
    2. Common repository between the 4GL and Java products. The recently devised Java Tools Strategy has necessitated a change in the technology base for our Java products to make them compatible with both the iPlanet Application Server and the Forté for Java Community Edition. This, in turn, has complicated our original vision of a common repository to the point that we will not embark on this project. Instead, we have elevated interoperability to a short-term priority. In addition, we plan to migrate the Fusion process definition tools to Java, thereby enabling Fusion definitions to be stored in a common repository with Java code and components.
    3. Other long-term enhancements will be determined by additional customer and market feedback. A major criterion for new functionality will be enhancing the revenue-generating ability of the product, thereby fostering its long-term health in the marketplace.
    As our products continue to evolve, the features and specifications described in this document are subject to change without notice. Sun Microsystems cannot guarantee the completion of any future products or product features mentioned in this Statement of Direction. By signing below, the receiving Company agrees that it has not relied on, is not relying on, and will not rely on the potential availability of any future Sun product, functionality or feature in making any purchases from Sun.
    Executed by the Receiving Company:
    Signature: ________________________  Name: ___________________________ (Please Print)  Title: ____________________________  Date: ____________________________
    Executed by Sun Microsystems, Inc.:
    Signature: ________________________  Name: ___________________________ (Please Print)  Title: ____________________________  Date: ____________________________

  • Deploying bpm 11g project sar file using ant task

    I am trying to deploy a BPM project using the ant task file. The status I get is "[deployComposite] ---->Deploying composite success." However, when I check the deployments, they are not there. If I try to deploy using JDeveloper, it works correctly. I need to get this to work for production deployments. Any suggestions?
    C:\Oracle\Middleware\Oracle_SOA1\bin>ant -f ant-sca-deploy.xml -DserverURL=http:
    //10.140.183.71:7001 -DsarLocation=N:\RuleBasedProjectInitiate\deploy\RequestPro
    ject.ear -Doverwrite=true -Duser=weblogic
    Buildfile: C:\Oracle\Middleware\Oracle_SOA1\bin\ant-sca-deploy.xml
    [echo] oracle.home = C:\Oracle\Middleware\Oracle_SOA1\bin/..
    deploy:
    [input] skipping input as property serverURL has already been set.
    [input] skipping input as property sarLocation has already been set.
    [deployComposite] created temp dir =C:\DOCUME~1\azeltov\LOCALS~1\Temp\deploy_cli
    ent_1279894885343
    [deployComposite] Creating HTTP connection to host:10.140.183.71, port:7001
    [deployComposite] Enter username and password for realm 'default' on host 10.140
    .183.71:7001
    [deployComposite] Authentication Scheme: Basic
    [deployComposite] Username:
    weblogic
    [deployComposite] Password:
    [deployComposite] Received HTTP response from the server, response code=200
    [deployComposite] clean up temp dir: C:\DOCUME~1\azeltov\LOCALS~1\Temp\deploy_cl
    ient_1279894885343
    [deployComposite] ---->Deploying composite success.
    BUILD SUCCESSFUL
    Total time: 4 seconds
    C:\Oracle\Middleware\Oracle_SOA1\bin>

    You can always deploy the ADF web apps from the Application (top menu) deploy option; just make sure you're deploying the EAR profile for the project. Deploying the web projects from the composite deployment wizard can be convenient, but I think it's often the case that you deploy them (the composite and the forms) separately (e.g., when you make a series of changes to the composite without needing to redeploy the UI projects).
    The bottom line is that you don't have to delete the projects to be able to modify and redeploy them.
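    Two things worth double-checking in the original invocation, offered as assumptions rather than a confirmed diagnosis: ant-sca-deploy normally targets the SOA managed server (often port 8001) rather than the admin port, and the SAR is normally the sca_<project>_rev<n>.jar produced under the project's deploy directory, not an .ear. Under those assumptions, the command would look like:
    ant -f ant-sca-deploy.xml ^
        -DserverURL=http://10.140.183.71:8001 ^
        -DsarLocation=N:\RuleBasedProjectInitiate\deploy\sca_RequestProject_rev1.0.jar ^
        -Doverwrite=true -Duser=weblogic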

  • Are multiple database calls really significant with a network call for a web API?

    At one of my employers, we worked on a REST API (but this also applies to SOAP). The client, which is the application UI, makes calls over the web (a LAN in typical production deployments) to the API, and the API makes calls to the database.
    One theme that recurs in our discussions is performance: some people on the team believe that you should not have multiple database calls (usually reads) from a single API call; you should optimize so that each API call makes exactly one database call.
    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (on the order of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (e.g., SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    TL;DR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    To be clear, I'm talking about orders of magnitude -- I know that it depends on specifics (machine hardware, choice of API and DB, etc.). If I have a call that takes O(milliseconds), does optimizing away DB calls that take an order of magnitude less actually matter? Or is there more to the problem than that?
    Edit: for posterity, I think it's quite ridiculous to claim that we need to improve performance by combining database calls under these circumstances, especially given the lack of profiling. However, it's not my decision whether we do this or not; I want to know the rationale behind thinking this is a correct way of optimizing web API calls.

    > But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (e.g., SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    The Logic
    In theory, you are correct. However, there are a few flaws with this rationale:
    From what you stated, it's unclear whether you actually tested or profiled your app. In other words, do you actually know that the network transfers from the app to the API are the slowest component? Because that is intuitive, it is easy to assume it is. However, when discussing performance, you should never assume. At my employer, I am the performance lead. When I first joined, people kept talking about CDNs, replication, etc., based on intuition about what the bottlenecks must be. It turned out that our biggest performance problems were poorly performing database queries.
    You are saying that because databases are good at retrieving data, the database is necessarily running at peak performance, is being used optimally, and there is nothing that can be done to improve it. In other words, "databases are designed to be fast, so I should never have to worry about them." That is another dangerous line of thinking. It's like saying a car is meant to move quickly, so I don't need to change the oil.
    This way of thinking also assumes a single process at a time, or put another way, no concurrency; it assumes that one request cannot influence another request's performance. But resources are shared: disk I/O, network bandwidth, connection pools, memory, CPU cycles, etc. Therefore, reducing one database call's use of a shared resource can keep it from slowing down other requests. When I first joined my current employer, management believed that tuning a 3-second database query was a waste of time. 3 seconds is so little, why waste time on it? Wouldn't we be better off with a CDN or compression or something else? But if I can make a 3-second query run in 1 second, say by adding an index, that is 2/3 less blocking, 2/3 less time spent occupying a thread, and, more importantly, less data read from disk, which means less data flushed out of the in-RAM cache.
    The Theory
    There is a common misconception that software performance is simply about speed.
    From a purely speed perspective, you are right. A system is only as fast as its slowest component. If you have profiled your code and found that the Internet is the slowest component, then everything else is obviously not the slowest part.
    However, given the above, I hope you can see how resource contention, lack of indexing, poorly written code, etc. can create surprising differences in performance.
    The Assumptions
    One last thing. You mentioned that a database call should be cheap compared to a network call from the app to the API. But you also mentioned that the app and API servers are in the same LAN. Therefore, aren't both of them comparable as network calls? In other words, why are you assuming that the API transfer is orders of magnitude slower than the database transfer when both have the same available bandwidth? Of course the protocols and data structures are different, I get that, but I dispute the assumption that they are orders of magnitude apart.
    Where it gets murky
    This whole question is about "multiple" versus "single" database calls, but it's unclear how many counts as "multiple." Because of what I said above, as a general rule of thumb I recommend making as few database calls as necessary. But that is only a rule of thumb.
    Here is why:
    Databases are great at reading data. They are storage engines. However, your business logic lives in your application. If you make a rule that every API call results in exactly one database call, then your business logic may end up in the database. Maybe that is OK; a lot of systems do that. But some don't. It's about flexibility.
    Sometimes, to achieve good decoupling, you want two database calls kept separate. For example, perhaps every HTTP request is routed through a generic security filter which validates against the DB that the user has the right access rights. If they do, it proceeds to execute the appropriate function for that URL. That function may then interact with the database itself.
    Calling the database in a loop. This is why I asked how many counts as "multiple." In the example above, you would have 2 database calls; 2 is fine, 3 may be fine, N is not fine. If you call the database in a loop, you have made performance linear: the run time grows with the size of the loop's input. So categorically saying that the API network time is the slowest completely overlooks anomalies like 1% of your traffic taking a long time due to a not-yet-discovered loop that calls the database 10,000 times (see the sketch after this list).
    Sometimes there are things your app is better at, like some complex calculations. You may need to read some data from the database, do some calculations, and then, based on the results, pass a parameter to a second database call (maybe to write some results). If you combine those into a single call (like a stored procedure) just for the sake of only calling the database once, you have forced yourself to use the database for something the app server might be better at.
    Load balancing: you have 1 database (presumably) and multiple load-balanced application servers. Therefore, the more work the app does and the less the database does, the easier it is to scale, because it's generally easier to add an app server than to set up database replication. Based on the previous bullet point, it may make sense to run a SQL query, do all the calculations in the application (which is distributed across multiple servers), and then write the results when finished. This can give better throughput, even if the overall transaction time is the same.
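    To make the loop point concrete, here is a minimal JDBC sketch of the N+1 pattern and its set-based alternative. The schema (orders, order_lines) and all identifiers are invented for illustration:
    import java.sql.*;
    import java.util.*;

    class OrderQueries {
        // Anti-pattern: one query for the ids, then one query per id.
        // Round-trips grow linearly with the number of orders.
        static List<String> skusNPlusOne(Connection con, long customerId) throws SQLException {
            List<Long> ids = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id FROM orders WHERE customer_id = ?")) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) ids.add(rs.getLong(1));
                }
            }
            List<String> skus = new ArrayList<>();
            for (long id : ids) {                       // N extra round-trips
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT sku FROM order_lines WHERE order_id = ?")) {
                    ps.setLong(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) skus.add(rs.getString(1));
                    }
                }
            }
            return skus;
        }

        // Set-based alternative: a constant number of round-trips.
        static List<String> skusJoin(Connection con, long customerId) throws SQLException {
            List<String> skus = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT l.sku FROM orders o JOIN order_lines l ON l.order_id = o.id"
                            + " WHERE o.customer_id = ?")) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) skus.add(rs.getString(1));
                }
            }
            return skus;
        }
    }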
    TL;DR
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    Yes, but only to a certain extent. You should try to minimize the number of database calls when practical, but don't combine calls which have nothing to do with each other just for the sake of combining them. Also, avoid calling the database in a loop at all
    costs.

  • Server Migration

    I'm having a problem setting up migration on my Managed Servers on WebLogic 9.2 MP2. First, my problem.
    When I try to manually migrate ManagedServer1 from MachineA to MachineB:
    Node Manager shuts down ManagedServer1
    Node Manager removes the interface with the floating IP (so far so good)
    (here is where the problem starts)
    Node Manager brings the interface back up on MachineA
    Node Manager starts ManagedServer1 on MachineA (all of this is supposed to happen on MachineB)
    This doesn't help me if it migrates it to the same machine!!!
    A quick overview of how things are set up:
    1) I have two machines, MachineA and MachineB running Solaris 10
    2) I have the admin server running on MachineA listening on the static IP of 139, and ManagedServer1 listening on the floating IP of 169 (FYI, I don't want to give out the IPs, so I made the numbers up)
    3) I have ManagedServer2 running on MachineB with the floating IP of 170.
    4) I created a weblogic user that has ssh trust config between the two machines and has the proper authority to create and bring down an interface.
    5) Created 2 Unix machines, Type SSH with the shell command ssh -l legalint -o PasswordAuthentication=no -p %P %H /opt/bea/weblogic92/common/bin/wlscontrol.sh -d %D -n /legalint/mtdom/ndmgr -c -f startManagedWebLogic.sh -s %S %C
    6) Network Time Protocol config is set
    7) The lease database schema is created, with a corresponding data source, and has two tables: ACTIVE and ACTIVE_MT
    8) Server migration properties are configured: the cluster data source is set (my directions tell me to leave the candidate machines blank), managed server auto-migration is set to true, primary and backup machines are set, and auto-restart is set to true.
    Any help is appreciated.
    Tom.

    The best way to deal with certificates and FQDNs in ZESM is to use commercially issued certificates or ones from the company's internal CA. Don't use the product's built-in self-signed certificates for production deployments under any circumstances. The reason is that the internal ZESM certificate has its private key marked as non-exportable, so you will not be able to export it from one server and move it to another.
    For the server FQDN, create an A or alias record in DNS, something like "zesm.company.com", and point it at the actual ZESM server. That way, when you need to move all users off a server, all you need to do is import the commercial certificate and point the DNS record at the new server's IP address.
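    As an illustration of that record, a BIND-style zone entry might look like this (the target host name is an invented placeholder; only the zesm.company.com alias comes from the reply):
    ; stable service name that clients are configured against
    zesm.company.com.    IN  CNAME  zesmserver01.company.com.
    Clients keep talking to zesm.company.com; a migration then only changes where the record points.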
    I did a couple of deployments in that way, with no issues.
    Hope this helps.
    >>>
    From: mauro vaccari<[email protected]>
    To:novell.support.zenworks.endpoint-security-management
    Date: 1/19/2010 7:36 AM
    Subject: Server Migration
    Hi,
    planning for the imminent release of ZESM 4.1, we would like to reinstall the server because it has undergone many installations.
    For security reasons, and to avoid stopping production, we want to install ZESM 4.1 on a new server with the same IP and name as the current one, and then try it in a test VLAN.
    My question is:
    when we move the new server into production (or before the ZESM installation), can we migrate the IIS certificate from the old server to the new one, so that the clients are not affected?
    Thank you for the help
    mauro_vaccari
    mauro_vaccari's Profile: http://forums.novell.com/member.php?userid=16902
    View this thread: http://forums.novell.com/showthread.php?t=398639

  • HTML POST events in JTextPane

    Hi all
    I've got a real problem here:
    I've implemented a website which is to be used both from a browser directly and from an embedded browser within our application.
    I finished the standalone browser version and assumed that the embedded one would be as easy as pointing a JEditorPane at the site.
    But how wrong I was :-0
    I have used a JTextPane and pointed it at the URL of the site, and added a hyperlink listener to handle link clicks. At first I had some problems with CSS, but I managed to overcome those by using the methods of HTMLEditorKit. My big problem now is that the JTextPane doesn't seem to provide support for SUBMIT, POST events, etc., and my site relies heavily on these.
    Does anyone know of a component that supports this? Or some code which can be plugged in to a JEditorPane to make it work?
    I found some information on this here:
    http://forum.java.sun.com/thread.jspa?forumID=257&threadID=414212
    The user nfalck very helpfully demonstrates how to gain access to the Java representations of the OPTION, SUBMIT, etc. elements within the Document object, and suggests that you can manually collect them and pass them to the URL.
    I am under severe time pressure though, so I wonder if anyone knows a way to achieve this using existing code / components.
    Thanks in advance for any help.
    Cheers,
    Paul

    Thanks.
    I've seen that thread already, actually. I did try the JDIC library and got it working well, but I hit a few native crashes in its DLLs. I cannot risk that kind of thing happening in production deployments...
    From looking around, it seems that the POST stuff worked in JDK 1.4 but not in 1.5; I think that explains why it works for some users and not for others. That's annoying, as I need to embed this browser in two different applications, one of which uses 1.4 and one 1.5!
    The last link on that page is dead, but I discovered that it points to a component called CalPane. I can't find out whether this is a viable browser component to use; it doesn't look like it.
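    For what it's worth, on Java 5 and later the HTMLEditorKit can be told to hand form submissions to your HyperlinkListener instead of submitting them itself, which may be the missing piece for the 1.5 application. A minimal sketch (the class and what you do with the event are up to you; this just prints it):
    import javax.swing.JEditorPane;
    import javax.swing.event.HyperlinkEvent;
    import javax.swing.event.HyperlinkListener;
    import javax.swing.text.html.FormSubmitEvent;
    import javax.swing.text.html.HTMLEditorKit;

    public class FormPane {
        public static JEditorPane create() {
            JEditorPane pane = new JEditorPane();
            pane.setEditable(false);
            HTMLEditorKit kit = new HTMLEditorKit();
            kit.setAutoFormSubmission(false); // deliver submissions as FormSubmitEvents
            pane.setEditorKit(kit);
            pane.addHyperlinkListener(new HyperlinkListener() {
                public void hyperlinkUpdate(HyperlinkEvent e) {
                    if (e instanceof FormSubmitEvent) {
                        FormSubmitEvent fse = (FormSubmitEvent) e;
                        // getMethod() is GET or POST; getData() holds the
                        // URL-encoded form data, to be posted by your own code.
                        System.out.println(fse.getMethod() + " " + fse.getURL()
                                + " data=" + fse.getData());
                    }
                }
            });
            return pane;
        }
    }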

  • OIM installation with Oracle db Standard Edition

    The 11g versions of OIM and OAM work with Oracle Database 11g Enterprise Edition. Will these components also work with Oracle Database 11g Standard Edition?
    If so, I am planning to use it for prototype purposes only, where I can use a low-end server.

    Thanks Thiago,
    It looks like Enterprise Edition is required only for OAAM; the rest of the components, like OAM, OIM, OID and OVD, will work with Standard Edition. Is my assumption correct?
    Thanks!
    Kabi
    The following is an extract from the certification matrix:
    - The Oracle databases listed in this column are supported on all configurations (including RAC) and platforms that the database team supports. Check Certify for details.
    - Oracle recommends using the latest Oracle DB PSUs. For the latest recommended patch information, refer to https://support.oracle.com/
    - For OAAM, Oracle recommends Oracle Database Enterprise Edition for production deployments.

  • Error Creating HFM 9.3.1 Application

    I am having a problem with Workspace. Each time I try to open an application that is already registered in Shared Services, I receive this error: "Please make sure the application you are trying to launch is registered with Shared Services." I also receive the error "config.error.required credentials" while using WebLogic 9.2.
    Any help would be greatly appreciated.
    Note: My OS environment is Window 2000 Advanced Server and MSSQL 2000

    In fact, it would not be wise to get rid of WebLogic completely, especially if you have already paid for the licenses. Therefore, to resolve this case (as it is already apparent that this was the issue here), one should use Apache Tomcat for deploying Shared Services and keep WebLogic for deploying other components like Reporting and Analysis.
    Oracle-Hyperion ships Apache Tomcat for test and development purposes, as suggested in the documentation. For production deployments, especially where a great many users will be using the system, WebLogic, WebSphere or Oracle Application Server is recommended. However, Shared Services is a system module that will never be accessed by that many users. For this reason, the workaround of using Tomcat for deploying Shared Services is valid.
    One final remark on the use of Apache Tomcat. In pre-Oracle releases, Hyperion products were recommended to be deployed on WebLogic or WebSphere. However, those products need additional licenses, so Hyperion shipped the "free" Apache Tomcat, apparently without warranty, as was implied by the recommendations. In 9.3.1, where Oracle Application Server was added (for free), it is included as an option for application deployment in all products but Shared Services, where the free option remains Apache Tomcat. It can be assumed that Oracle Application Server will be supported for the complete Hyperion product suite in the versions to follow.
    KN
