Re: Multiple Environments Configured as Service Objects

In the registry, under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Forte
Environment Manager 3.0.F.2\Parameters, there are three values. One of
these is "Command line". You can add specific values there. For example,
you might want to specify the name server address there with the "-fns"
flag. You will also need to specify a different node name for each
environment manager, using the "-fnd" flag.
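For illustration, the value might look like this (the host, port, and node name here are hypothetical examples):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Forte Environment Manager 3.0.F.2\Parameters
  "Command line" = "-fns myhost:5000 -fnd envmgr1"
```

Each environment manager service would get its own values (e.g. a different port in -fns and a different node name in -fnd).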
Don
At 04:35 PM 3/11/98 +0100, Ampollini Michele wrote:
Hello everybody.
We're trying to configure an NT server with multiple environments.
The environment managers need to be configured as NT services, and we
use the srvcinst utility to install them. Does anybody know how to pass
additional environment settings to server partitions, keeping them
differentiated among the different environments? We cannot use the system
environment variable settings, because they are shared among the different
environments. We cannot use
mycomputer\hkey_local_machine\software\fortesoftwareinc\forte\3.0.f.2
for the same reason.
We're also experiencing an error when trying to start a partition,
because the server partitions don't seem to recognize the
FORTE_NS_ADDRESS environment variable.
Has anybody ever experienced an error like that one?
Thank you very much.
Mik & Frank
Ds Data Systems Parma
============================================
Don Nelson
Regional Consulting Manager - Rocky Mountain Region
Forte Software, Inc.
Denver, CO
Phone: 303-265-7709
Corporate voice mail: 510-986-3810
aka: [email protected]
============================================
"Until you learn to stalk and overrun, you can't devour anyone" - Hobbes

Another way that I have done this is to use the SRVANY tool included with
the NT resource kit to run a .BAT file as an NT service. The .BAT file sets
the appropriate environment variables and then starts the node manager. The
benefit of this approach is that any FTEXECs started by the node manager
inherit this environment.
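A minimal sketch of such a .BAT file (every path, address, and name below is a hypothetical example, not a Forte default):

```
@echo off
rem start_env1.bat -- launched by SRVANY as an NT service (hypothetical names and paths)
set FORTE_ROOT=D:\forte
set FORTE_NS_ADDRESS=myhost:5000
rem Start the node manager; any FTEXECs it spawns inherit the variables set above
D:\forte\install\bin\nodemgr.exe
```

One copy of the file per environment, each with its own variable values, gives each environment manager a distinct configuration.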
Kevin Klein
Millennium Partners, Inc.
Milwaukee, Wisconsin, USA
Mankind, when left to themselves, are unfit for their own government.
-- George Washington
-----Original Message-----
From: Ampollini Michele <[email protected]>
To: '[email protected]' <[email protected]>
Date: Wednesday, March 11, 1998 11:04 AM
Subject: Multiple Environments Configured as Service Objects
Hello everybody.
We're trying to configure an NT server with multiple environments.
The environment managers need to be configured as NT services, and we
use the srvcinst utility to install them. Does anybody know how to pass
additional environment settings to server partitions, keeping them
differentiated among the different environments? We cannot use the system
environment variable settings, because they are shared among the different
environments. We cannot use
mycomputer\hkey_local_machine\software\fortesoftwareinc\forte\3.0.f.2
for the same reason.
We're also experiencing an error when trying to start a partition,
because the server partitions don't seem to recognize the
FORTE_NS_ADDRESS environment variable.
Has anybody ever experienced an error like that one?
Thank you very much.
Mik & Frank
Ds Data Systems Parma

Similar Messages

  • Best environment configuration practice

    We are deploying three different web applications which use BDB XML. They do not share data at all. We currently have all three apps configured to use the same environment, but with different containers. We are running into problems where, if one app goes down, it can take the entire environment down.
    Is a 1:1 application-to-environment configuration a best practice, or is sharing one environment the best practice?
    Thanks.

    Hi,
    This is the normal recovery process. If process 2 was in the middle of something, there is potential corruption in the database. So when process 2 rejoins the environment with DB_RECOVERY, it will set the panic bit and start recovery. Process 1 detects that and gets out of the environment. After process 2 finishes recovery, process 1 can rejoin. Since we are a library, we have to be cautious about what we are doing and assume, when some process terminates abnormally, that something could be wrong.
    You can put different containers into different environments. In addition, the DB_REGISTER and DB_FAILCHK flags can help reduce the number of occurrences of such panic events. It's worth following the reference guide documentation starting here:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/transapp_fail.html
    Thanks,
    Rucong Zhao
    Oracle Berkeley DB XML

  • Coherence with berkeley db environment configuration problem in weblogic

    Hi
    I am new to Coherence, and I developed a web application. In my app, Coherence is the cache and Berkeley DB is the backend store. I configured coherence-config.xml correctly per the instructions on the Oracle site. The problem is that when I try to put my data in the cache, I get an exception like this:
    java.lang.NoSuchMethodError: com/sleepycat/je/EnvironmentConfig.setAllowCreate(Z)V
         at com.tangosol.io.bdb.DatabaseFactory$EnvironmentHolder.configure(DatabaseFactory.java:544)
         at com.tangosol.io.bdb.DatabaseFactory$EnvironmentHolder.(DatabaseFactory.java:262)
         at com.tangosol.io.bdb.DatabaseFactory.instantiateEnvironment(DatabaseFactory.java:157)
         at com.tangosol.io.bdb.DatabaseFactory.(DatabaseFactory.java:59)
         at com.tangosol.io.bdb.DatabaseFactoryManager.ensureFactory(DatabaseFactoryManager.java:74)
         at com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager.createBinaryStore(BerkeleyDBBinaryStoreManager.java:176)
         at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateExternalBackingMap(DefaultConfigurableCacheFactory.java:2620)
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureBackingMap(DefaultConfigurableCacheFactory.java:1449)
         at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:3904)
         at com.tangosol.coherence.component.util.CacheHandler.instantiateBackingMap(CacheHandler.CDB:7)
         at com.tangosol.coherence.component.util.CacheHandler.setCacheName(CacheHandler.CDB:35)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.instantiateCacheHandler(ReplicatedCache.CDB:16)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.ensureCache(ReplicatedCache.CDB:152)
         at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
         at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:33)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:875)
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:1223)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:290)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:735)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:712)
         at com.coherence.cachestore.ReadFromFile.putValuesInCache(ReadFromFile.java:149)
         at com.coherence.cachestore.ReadFromFile.doPost(ReadFromFile.java:78)
         at com.coherence.cachestore.ReadFromFile.doGet(ReadFromFile.java:43)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:183)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3717)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    I googled it, but I only found answers about setting the environment in a Java class (and even that did not succeed). I don't know how to do it for Coherence. How will Coherence pick up the Berkeley DB configuration when I load data into the cache? Please help me. If you know the answer, please show the code for configuring the Berkeley DB environment for Coherence, tell me where I need to change/create it, and what I have to do so that Coherence will open the Berkeley DB environment and store the data on the local disk. I am using Coherence 3.6 with WebLogic 10.3.5.
    Edited by: 875786 on Dec 2, 2011 4:37 AM
    Edited by: 875786 on Dec 2, 2011 4:39 AM

    Hi, thank you very much. It works fine with JE 3.3. I have one remaining doubt: as per my application configuration, it stores the data on the local disk using Berkeley DB. When I restart the server, the cache data is gone, but the stored data is still available on disk. Is there any configuration or technique available for preloading the data from disk into the cache? (I am using the replicated cache scheme.) If yes, please provide the full details and sample code. Thanks in advance.

  • Environment configuration for Hot Backups

    Hi all,
    1. I am trying to create a hot backup tool based on the read-only Environment strategy (discussed in a previous thread: http://forums.oracle.com/forums/message.jspa?messageID=3674008#3674008).
    Now, leaving aside the EnvironmentConfig.setReadOnly(true), I have found quite a few possible other configuration options in the EnvironmentParams class and I'm wondering if there are some that I should be using.
    Here are a couple of examples:
    - ENV_RECOVERY
    - ENV_RUN_INCOMPRESSOR
    - ENV_RUN_CHECKPOINTER
    - ENV_RUN_CLEANER
    Would it make sense to configure any of these?
    2. After creating a hot backup I have tried to test its state. Basically, the approach was quite simple:
    - open a read-only env on the backup
    - try to access the databases in the env
    My idea is that if the above 2 ops are succeeding then there is a very good chance that the backup is correct.
    Now, while playing with the above configuration options I have noticed that if I'm setting ENV_RECOVERY to false in this test environment, then any attempt to access the databases within results in a DatabaseNotFoundException.
    Can someone help me understand what is happening? (basically, I cannot make a connection between recovery and access to the DBs in the environment)
    Many thanks in advance,
    ./alex
    PS: I forgot to mention that I'm running quite an old version: 2.1.30
    Edited by: Alex Popescu on Aug 13, 2009 5:50 AM

    ENV_RECOVERY - suppresses running recovery at Environment creation. Internal parameter.
    ENV_RUN_INCOMPRESSOR, ENV_RUN_CHECKPOINTER, ENV_RUN_CLEANER - disable the INCompressor, Checkpointer, and Cleaner Daemon threads.
    You should not need to adjust any of these parameters for your DbBackup utility. In fact, ENV_RECOVERY is an "internal use only" parameter.
    PS: I forgot to mention that I'm running quite an old version: 2.1.30
    I'm sorry to be the bearer of bad news, but as my colleague Mark Hayes stressed in a previous post, you really need to upgrade from 2.1.30 to 3.3.latest. It is highly probable that you will eventually run into bugs with 2.x and we are unlikely to (1) be willing to diagnose them, and (2) fix them. As Mark pointed out, 2.1 is 3.5 years old and the product has had a lot of improvements in that time. We are happy to answer questions on this forum relating to the latest major release, but dealing with old and crusty code is certainly going to be well below our allowable priority level.
    Charles Lamb

  • Overwhelmed "newbie" with an environment configuration question...

    This will surely display my lack of experience when it comes to both MIDI and DAW software, but:
    Page 84 of the "Getting Started With Logic" document states:
    On the left hand you see an Object named "Physical Input". Only one of these Objects exists in the environment.
    I have multiple MIDI input devices (M-Audio Keystation 88es, Keystation 49e, and Trigger Finger). They're all connected via USB. In Audio MIDI Setup they're all recognized separately. Before I read this, I was trying to assign each device its own Physical Input, but no matter what I did, messages from all three devices showed up on the same Keyboard Object in the environment.
    How can I (hopefully) configure Logic Express to recognize them separately, because I'd like to assign specific devices to separate Audio Instrument tracks?
    Much, much thanks (and props) for the wisdom and sharing it!

    I noticed that in my audio/midi set-up utility, all three controllers have been set to a "port 1" without the option to select a different port...
    You might just have hit it on the head... you're onto something there. I only have one USB MIDI interface, and it is not supported anymore, so I use its MIDI ports instead, into my Unitor8mkii. No USB...
    Is my problem that I'm using the controllers hooked up with USB cables versus MIDI cables (say with a USB MIDI interface)?
    That might be it. I'm not 100% sure, but that might be why. You could check in the Audio Midi Setup Application (in the utilities folder that is in the main applications folder) to see if each Midi controller is setup correctly, as far as number of MIDI channels to transmit and receive on.
    And thanks for clearing up the channel splitter thing for me, if / when I get another controller, I'll be sure to try this.
    Another thought: have you tried setting up each controller to transmit on a different MIDI channel, instead of OMNI? OMNI means the device transmits on ALL MIDI channels ALL the time... this would undoubtedly cause further troubles. Try, if possible, to set each device internally to its own MIDI Xmit and Rcv channel, say 1, 2, and 3. See what happens...
    As far as the output is concerned, if you are triggering internal VIs, you don't need to concern yourself as much, except for the EVB3, which accepts different messages on chs 1,2,and 3, for different parts of the plugin. (upper,middle and foot pedals)
    Cheers

  • Building with different environment configuration

    I currently work on a Portal project and need to build for different environments, such as production and test. These different environments have their own configuration settings. For instance, I would like to benefit from this configuration in test and production, but not in development:
    http://dev2dev.bea.com/blog/gnunn/archive/2005/06/a_no_brainer_pe_1.html
    Another issue is security setup. I would like to be able to run the portal file during development, but be more restrictive in test and production.
    Has anyone handled this in a good automated way? Any codeshare project that might help me?
    Any input appreciated... Thanks
    Trond Andersen, Invenia AS, http://www.invenia.no

    Hello Jinsoo,
    Yes, it should be fine to set the cache size different on the master
    versus the replica.
    There are several configuration items that must be set the same
    across all sites, such as the size of the log file, or ack policy, etc.
    You should carefully read the pages for any configuration items that
    you want to have different across sites to see if it is allowed or not.
    Sue LoVerso
    Oracle

  • Optimal environment configuration

    Hi,
    Can you please let us know if the following configuration options are optimal for good performance? We are interested in getting the best read performance without affecting the validity of writes. The BDB version is 5.0.21 (native edition). Thanks.
    Non replicated environment
    ===========================
    envConfig.setErrorStream(System.out)
    envConfig.setErrorPrefix("BDBEnvironment")
    envConfig.setAllowCreate(true)
    envConfig.setInitializeLogging(true)
    envConfig.setInitializeCache(true)
    logFlush is invoked after creating objects in bulk
    Replicated Environment
    =====================================
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT)
    envConfig.setAllowCreate(true)
    envConfig.setRunRecovery(true)
    envConfig.setThreaded(true)
    envConfig.setInitializeReplication(true)
    envConfig.setInitializeLocking(true)
    envConfig.setInitializeLogging(true)
    envConfig.setInitializeCache(true)
    envConfig.setTransactional(true)
    envConfig.setTxnNoSync(true)
    ReplicationManagerAckPolicy=NONE
    request Min Wait Time = 20000
    Request Max Wait Time = 500000
    bulk = false
    priority = 100
    totalSites = 0
    cacheSize = 32 * 1024 * 1024
    writeTransactionTimeout = 10 * 1000 * 1000
    readTransactionTimeout = 750 * 1000
    totalThreads = 3
    heartBeatSendInterval = 5000000
    heartBeatWaitTimeout = 5 * 60 * 1000 * 1000


  • Problem with X environment configuration

    I have a desktop PC, a DELL OptiPlex GX260, with a DELL 2405FPW LCD monitor (24", optimal preset resolution 1920 x 1200). I installed Solaris 10 x86 (March 2005 version). All devices in the OptiPlex GX260 were configured successfully, including the integrated Intel 82845G graphics card. But I cannot correctly install and configure the display device (the DELL 2405FPW). As a result I cannot run the X server in runlevel 3 (CDE or GNOME); the logs say: Can't open Xserver... The question is: is it possible to correctly configure the Intel 82845G graphics card and the DELL 2405FPW display? (Of course, I tried configuring this with the kdmconfig command.)
    Any suggestions? Has anyone had this or a similar problem?

    http://docs.info.apple.com/article.html?artnum=304424

  • Runtime environment configuration and spell check applet

    We have a spell check applet that checks ASP form data for correct spelling. An error occurs where the applet is being called. I have tried to adjust the user's IE settings for Security (ActiveX) as well as several combinations of Java Sun or Microsoft VM under the Advanced tab. The user's system has 1.4.2_08. No combinations of setting changes have helped. Does anyone know of a good JRE source to help me to determine if the user's system is properly finding Java for the applet? Thanks.

    I will check the console. Does the execution status appear on a particular tab of the console?
    The web application runs the applet following clicks of the nav buttons to go between screens. On click, the JavaScript error states that an object is invalid, as if it cannot find the applet or the applet was not loaded. That is the point in the JavaScript code where a function inside the applet is being called.

  • Configure Apps for SharePoint 2013 in dev environment without DNS

    I have a SharePoint 2013 dev environment at http://spitlab/. I want to configure the app store in this environment.
    Will I be able to do it without access to a DNS server?
    I followed the two articles below:
    http://www.ashokraja.me/post/Develop-SharePoint-2013-Napa-App-In-Local-Dev-Environment-Configuring-On-Premises-without-DNS.aspx
    I am able to install third-party apps, but when I click on one, it gets redirected to sfs.in.
    Next I tried this:
    http://sharepointconnoisseur.blogspot.com/2013/07/shortcut-to-prepare-sharepoint-2013-app.html
    Same thing: I am able to install third-party apps, but when I click on one, it goes to intranet.com.
    So is it possible to install third-party apps on a dev box without DNS and try them out? If so, what steps am I missing?

    Hi,
    If you click a 3rd-party app and it is redirected to sfs.in or intranet.com, it means you have configured the app domain correctly.
    You can read the official document (the first link below) to understand what an app domain is (with DNS configured); the app domain is defined however you want (e.g. ContosoApps.com).
    Without DNS, as the two articles you cited describe, the app domain (e.g. apps.com or apps.sfs.in) is written manually into the hosts file directly. You can construct an app domain of your own; then, after you install a custom-developed app, it will have the corresponding app URL format.
    http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx
    http://www.ashokraja.me/post/Develop-SharePoint-2013-Napa-App-In-Local-Dev-Environment-Configuring-On-Premises-without-DNS.aspx
    http://sharepointconnoisseur.blogspot.jp/2013/07/shortcut-to-prepare-sharepoint-2013-app.html
    Thanks
    Daniel Yang
    TechNet Community Support
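    For illustration, the hosts-file entries might look like this (the IP address and app domain below are made-up examples):

    ```
    # C:\Windows\System32\drivers\etc\hosts
    192.168.1.10    spitlab
    192.168.1.10    app-12345abcdef.apps.contoso.com
    ```

    Because each installed app gets a unique host-name prefix on the app domain, a wildcard DNS record is normally used; without DNS, each app's generated host name has to be added to the hosts file by hand.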

  • Configuring Environment for receiving discrete multichannel midi in Logic 8

    I am using ipMidi to send multichannel MIDI via Ethernet to Logic 8 on my G5.
    The MIDI is showing up fine in Logic 8. I can run one instrument track no problem, but when I try to use more than one instrument track at a time I have the following problem: the multiple channels of data are getting combined or summed, I think, by the default environment configuration, in such a way that each instrument channel receives all the incoming MIDI channels and not just the one it is set to receive. Can someone help me with configuring the environment for multichannel MIDI? My goal is to have 8 instrument tracks in Logic respond discretely and simultaneously to 8 channels of incoming data.
    My Apple Audio MIDI Setup recognizes the ipMidi port, and so does Logic, but I have yet to figure out how to properly cable my environment so that each instrument channel only sees the MIDI channel it is set to receive.
    Thanks,
    SB

    mystrobray wrote:
    That works beautifully! Thanks very much - it required no changes to the default environment, which makes it pretty foolproof. That gets me 16 channels, but if for some crazy reason I needed more than 16 channels and had to go to a second ipMidi port of 16 channels, will the demix still demix by port as well as by channel?
    Thanks again,
    SB
    In a bit of a hurry, off to a gig, so I may not be thinking this through.
    It looks like software instruments are set up to use the merged ports, as I don't see any place to set the port for a virtual instrument; external MIDI devices yes, virtual instruments no. However, in the environment you can set up a multi-instrument for a multi-timbral softsynth/sample player, and that would give you more channels, though on a single instrument; you can select a port when using a multi-instrument.
    Also, "settings" are on a per-song basis.
    pancenter-

  • How to install and configure WebLogic 10.3 to use 64-bit JDK?

    Hi,
    Is there a preferred/supported way to configure WebLogic to use the 64-bit JRockit that comes with JRockit Mission Control?
    I can't find any official documentation about how to configure WebLogic to use a 64-bit JDK. From other forum posts I can tell we need to use the 64-bit JRockit that is bundled with JRockit Mission Control (available at [http://www.oracle.com/technology/software/products/jrockit/index.html]) but then we are kind of stuck - the configure script which let me select a JDK for a domain doesn't seem to let me re-configure an existing domain, and I can't find anything within the admin console that would let me set the JDK to use.
    I was able to rename the existing jrockit_160_05 directory to e.g. jrockit_160_05_original and then rename the jrmc directory to "jrockit_160_05", effectively taking the place of the JDK being used but this seemed like a bad way to do it. I also found via Google this page: [http://java.sodeso.nl/application-servers/bea-weblogic/how-can-i-change-the-jdk-installation-that-weblogic-uses] which basically says to modify the JAVA_HOME environment variable in the start scripts, but this seems like the same thing.
    With the plethora of products and the different ways they are bundled, combined with all the documentation about everything, I thought I would find something but I keep coming up blank. Does anyone have suggestions or pointers?
    Thanks,
    KaJun

    Make sure your environment configuration is supported here:
    http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html
    Then, you need to have the correct 64bit JDK installed. Sounds like you have done that.
    Then download and run the generic platform installer as described here with your 64bit supported JDK:
    http://download.oracle.com/docs/cd/E12839_01/doc.1111/e14142/start.htm#i1077535
    Notice how the link you post has the generic installer for all the 64bit JVM rows, but the other installer for the 32bit JVMs on the 64bit OS?

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

    Where does application configuration belong in modern Java EE applications? What best practice(s) recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
    We will be deploying to a variety of environments: Unit testing, local dev installs, cloud based infrastructure for UAT, Stress testing and Production environments. **Many of  our properties will vary with each of these environments.**
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
    Utilize expected file names (e.g. cassandra.properties) and then build multiple environment specific EARs (eg. appxyz-PROD.ear).
    Build one EAR (eg. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment variable to each config file name (eg cassandra-PROD.properties). And of course adding an environment variable (to the vm or otherwise), so that the code will know which file to pickup.
    What are the best practices people can recommend for solving this common challenge?
    Thanks.
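A minimal sketch of the second option above (one EAR containing environment-suffixed files, selected at runtime by a variable). The APP_ENV variable name, the "DEV" default, and the "base-ENV.properties" naming scheme are assumptions for illustration, not an established Java EE convention:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class EnvConfig {
    // Build the per-environment resource name, e.g. cassandra-PROD.properties.
    // The "<base>-<ENV>.properties" scheme is a hypothetical convention.
    static String fileNameFor(String base, String env) {
        return base + "-" + env + ".properties";
    }

    // Load the selected file from the classpath; the APP_ENV variable picks
    // the environment, defaulting to DEV when it is not set.
    static Properties load(String base) throws IOException {
        String env = System.getenv().getOrDefault("APP_ENV", "DEV");
        String name = fileNameFor(base, env);
        try (InputStream in = EnvConfig.class.getClassLoader().getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Missing configuration resource: " + name);
            }
            Properties props = new Properties();
            props.load(in);
            return props;
        }
    }
}
```

A CDI producer (or whatever DI mechanism is in use) could wrap load() so the rest of the application never needs to know the environment name.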

    Hi Bob,
    Sometimes, when you create a model using a local WSDL file, instead of referring to the URL mentioned in the WSDL file it refers to, say, the C:\temp folder from which you picked up that file; you can check the target address of the logical port. Because of this, when you deploy the application on the server, it tries to find the service at the C:\temp path instead of the path specified in the soap:address location in the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified as the soap:address location in the WSDL file,
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
    or you can ask your XI developer to give you the URL for the web service and the username and password for the server.
  • Lucreate - "Cannot make file systems for boot environment"

    Hello!
    I'm trying to use LiveUpgrade to upgrade one of "my" Sparc servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in Infodoc 72099 (http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1) and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 Sparc DVD ISO. I then did:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207  -m /:/dev/md/dsk/d200:ufs
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <d100> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Searching /dev for possible boot environment filesystem devices
    Updating system configuration files.
    The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <S10U6_20081207>.
    Source boot environment is <d100>.
    Creating boot environment <S10U6_20081207>.
    Creating file systems on boot environment <S10U6_20081207>.
    Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
    Mounting file systems for boot environment <S10U6_20081207>.
    Calculating required sizes of file systems for boot environment <S10U6_20081207>.
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    So the problem is:
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    Well - why's that?
    I can do a "newfs /dev/md/dsk/d200" just fine.
    When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
    /bin/nawk: can't open file /etc/lu/ICF.2
    source line number 1
    Boot environment <S10U6_20081207> deleted.
    I get this error consistently (I have run the lucreate many times now).
    lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
    Would anyone maybe have an idea about what's broken there?
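    One thing worth checking before retrying: whether the failed lucreate left stale Live Upgrade state behind. The /etc/lu/ICF.* files mentioned in the nawk error are LU's internal per-BE configuration files, and a half-created BE can leave a numbered ICF entry that a later ludelete then trips over. A minimal sketch (the helper function and its directory argument are my own, for illustration; on a real system the files live under /etc/lu):

```shell
# Hedged sketch: look for leftover Live Upgrade internal configuration
# files (ICF.*) in the LU state directory. The function and the
# overridable directory argument are illustrative only; the standard
# location on Solaris is /etc/lu.
check_stale_icf() {
    dir="${1:-/etc/lu}"
    if ls "$dir"/ICF.* >/dev/null 2>&1; then
        echo "stale ICF files in $dir:"
        ls "$dir"/ICF.*
    else
        echo "no ICF files in $dir"
    fi
}

check_stale_icf
```

If a stale ICF.* file turns up for a BE that lustatus no longer lists, that mismatch is a plausible reason the delete path errors out.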
    --($ ~)-- LC_ALL=C metastat
    d200: Mirror
        Submirror 0: d20
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d20: Submirror of d200
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s0          0     No            Okay   Yes
    d100: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d10: Submirror of d100
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d201: Mirror
        Submirror 0: d21
          State: Okay        
        Submirror 1: d11
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 2097414 blocks (1.0 GB)
    d21: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s1          0     No            Okay   Yes
    d11: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s1          0     No            Okay   Yes
    hsp001: is empty
    Device Relocation Information:
    Device   Reloc  Device ID
    c1t1d0   Yes    id1,sd@THITACHI_DK32EJ-36NC_____434N5641
    c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
    --($ ~)-- /bin/df -k | grep md
    /dev/md/dsk/d100     15490539 10772770 4562864    71%    /
    Thanks,
    Michael

    Hello.
    (sys01)root# devfsadm -Cv
    (sys01)root#
    To be on the safe side, I even rebooted after having run devfsadm.
    --($ ~)-- sudo env LC_ALL=C LANG=C lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    d100                       yes      yes    yes       no     -        
    --($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
                   boot environment name: d100
                   This boot environment is currently active.
                   This boot environment will be active on next system boot.
    Filesystem              fstype    device size Mounted on          Mount Options
    /dev/md/dsk/d100        ufs       16106660352 /                   logging
    /dev/md/dsk/d201        swap       1073875968 -                   -
    In the rebooted system, I re-did the original lucreate:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
    Copying.
    Excellent! It now works!
    Thanks a lot,
    Michael

  • File Adapter vs BPEL interaction issue on high availability environment

    Hi all,
    I would really appreciate your help with an issue I'm facing with a composite (SCA) deployed on a clustered environment configured for high availability. To help you better understand the issue, let me briefly describe what my composite does. The composite's instances are started by an Inbound File Adapter, which periodically polls a directory to check whether any file matching a well-defined naming convention is available. The adapter is not meant to read the file content but only its properties. Furthermore, the adapter automatically makes a backup copy of the file and doesn't delete it. The properties read by the adapter are provided to a BPEL process, which obtains them using the various "jca.file.xyz" properties (configurable in any BPEL receive activity) and stores them in some of its process variables. How the BPEL process uses these properties is irrelevant to the issue I'd like to bring to your attention.
    The interaction just described between the File Adapter and the BPEL process has always worked in other, non-HA environments. The problem I'm facing is that this interaction stops working when I deploy the composite in a clustered environment configured for high availability: the File Adapter succeeds in reading the file, but no BPEL process instance gets started and the composite instance gets stuck (that is, it keeps running until you manually abort it!).
    Interestingly, if I put a Mediator between the File Adapter and the BPEL process, the Mediator instance gets started, that is, the file's properties read by the adapter are passed to the Mediator, but then the composite gets stuck again because even the Mediator doesn't seem to be able to initiate the BPEL process instance.
    I think the problem lies in the way I configured either the SOA infrastructure for HA, or the File Adapter or BPEL process in my composite. To configure the adapter, I followed the instructions given here:
    http://docs.oracle.com/cd/E14571_01/integration.1111/e10231/adptr_file.htm#BABCBIAH
    but maybe I missed something. On the other hand, I didn't find anything about BPEL configuration for HA with SOA Suite 11g (all the material I found refers to SOA Suite 10g).
    I've also read in some posts that, in order to use the DB as a coordinator between the file adapters deployed on the different nodes of the cluster, the DB must be a RAC! Is that true, or is it possible to use another type of Oracle DB?
    Please let me know if any of you have already encountered (and solved :)) a problem like this!
    Thanks in advance,
    Bye!
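    For reference, the pattern the linked documentation describes for distributed inbound polling in 11g is to point the adapter's .jca descriptor at the eis/HAFileAdapter connection factory (whose controlDir and coordination data source are configured on the FileAdapter deployment in the WebLogic console) instead of the default eis/FileAdapter. A rough sketch of such a descriptor, assuming that setup: the adapter-config name, directories, and file filter below are illustrative, not taken from the original composite.

```xml
<!-- Illustrative .jca fragment for an inbound file adapter in an HA
     cluster. eis/HAFileAdapter (configured with a controlDir on shared
     storage and a data source for the coordination tables) replaces
     the default eis/FileAdapter. All names and paths are examples. -->
<adapter-config name="PollIncoming" adapter="File Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/HAFileAdapter"/>
  <endpoint-activation portType="Read_ptt" operation="Read">
    <activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
      <property name="PhysicalDirectory" value="/shared/incoming"/>
      <property name="DeleteFile" value="false"/>
      <property name="IncludeFiles" value=".*\.dat"/>
      <property name="PollingFrequency" value="60"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
```

If the composite's descriptor still references eis/FileAdapter, each node polls independently, which can produce exactly this kind of stuck-instance behavior in a cluster.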

    Hi,
    Thanks for your prompt reply. Anyway, I had already read through that documentation and tried all the settings suggested in it, without any luck! I'm thinking the problem could be related to the Oracle DB used in the clustered environment, which is not RAC, while all the documentation I read about high availability configuration always refers to a RAC DB. Does anyone know whether a RAC Oracle DB is strictly required for file adapter configuration in an HA cluster?
    Thanks, bye!
    Fabio
