Connectivity Requirements for BI-A

Hi,
Does anyone have any information on what the network connectivity requirements are to connect the BI-A appliance to the BW instance please?

Gigabit Ethernet (if that's what you would like to know).
Regards, Klaus

Similar Messages

  • IME connection requirements for ASA-SSM module

    I am looking into monitoring an IPS device at a remote site over the internet. I would like to install IME at my main site and have IME connect to the SSM module at the remote site over the internet. Can anyone offer advice on the TCP ports required (is it simply TCP 443?) and whether this is advisable? Any ideas on how much traffic this would generate over the internet connection?
    My thoughts are that I could create a static translation on the ASA for the IP address of the management interface of the SSM module and restrict access through the ASA to my main site public IP address.
    Any assistance is much appreciated.

    Hi,
    All you need to do is open up TCP/443 for the IME to be able to connect successfully. So you can have a static for the SSM's management IP address on TCP port 443 and put in an access-list entry to allow that traffic.
    I am not really sure of how much traffic this will take up but it should not be much.
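    If it helps, once the static and the access-list entry are in place, a quick sanity check that TCP/443 is actually reachable from the IME host is a plain socket test. A minimal sketch (the address below is just a placeholder for the translated SSM management IP):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class SsmReachabilityCheck {
        public static void main(String[] args) throws Exception {
            String host = "203.0.113.10"; // placeholder: public/static-NAT address of the SSM management interface
            int port = 443;               // the only port IME needs
            try (Socket socket = new Socket()) {
                // Fails quickly if the static translation or the access-list entry is wrong
                socket.connect(new InetSocketAddress(host, port), 5000);
                System.out.println("TCP " + port + " is reachable on " + host);
            }
        }
    }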
    Hope this helps. Let me know how it goes!!
    Thanks and Regards,
    Prapanch

  • FRM-10060: Database connection required for retrieving referenced objects

    Hi,
    When I open a form, I get this error. This form references a form stored in the database for some item properties. How do I find out which item is being referenced from the database? This is quite a big form with lots of blocks & canvases.
    Thanks

    After pressing OK, connect to the database.
    I think you should check the pre-form trigger code; the objects used in the pre-form trigger might be the ones referenced from the database.

  • DB connect no data found is rfc connection required

    Hi Gurus,
    I want to retrieve data from an Oracle database. I have all the tables with data in them, but through the source system (i.e. DB Connect) it does not display any contents of the tables; it says "no data found". The installation was all done perfectly fine, and I can read the table fields, but I am not able to read the data from the tables. What could be the problem?
    Is an RFC connection required for this DB Connect to retrieve the data from Oracle, or what else could the issue be? Can anyone please help me out on this? I already posted on this long back and still got no reply. The OSS notes all give information about the installation, but if no data is found they do not give any information on this. Can anyone please help me out on this?
    Thanks,
    Hem

    Hi Hem,
    You want to retrieve data from Oracle Database using BW and you are having a problem?
    Try an isolation technique to solve your problem.
    Try using third-party software, e.g. MS Excel, to retrieve data from Oracle. If MS Excel can retrieve the data, then you can conclude:
        a.) The network connection to Oracle is fine.
        b.) Oracle is up and running and is able to serve data from your specified table.
    An RFC connection is not required. RFC is only useful for SAP-to-SAP communication. DB Connect uses a different technique and a different driver to communicate: SAP provides the DB Connect engine, a similar approach to what Microsoft's OLE DB does, to make a transparent connection between various database systems (whether MSSQL, Oracle, MS Access, Sybase, etc.).
    As part of the isolation technique: are you sure you are accessing the correct table? Try to access other tables. If you can access data from other tables, then something is wrong with the table you want to access.
    Have you checked for authorization issues? Are you allowed to extract data?
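    If Excel is not to hand, the same isolation check can be scripted directly against Oracle with plain JDBC. A minimal sketch (host, SID, credentials and table name are placeholders; the Oracle JDBC driver jar must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OracleIsolationCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - use the same values configured in DB Connect
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            try (Connection con = DriverManager.getConnection(url, "db_user", "db_password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM my_table")) {
                rs.next();
                // A non-zero count proves network, listener, table access and data visibility in one go
                System.out.println("Rows visible to this user: " + rs.getLong(1));
            }
        }
    }

    If the count comes back as zero here as well, the problem is likely on the Oracle/authorization side rather than in the BW source system.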
    Hope I have helped. If I did, please grant points...
    --Jkyle

  • Requirements for Creative Suite 4

    We have an Apple iMac G4/1.0, with an OS X upgrade and an added 1GB of RAM. Do you think we have the minimum requirements for Creative Suite 4?

    Hello Sgtgold30 and welcome to discussions,
    " Do you think we have the minimum requirements for Creative Suite 4? "
    I doubt that buying, installing and using this software is going to be a pleasing experience on that particular Mac. The following " specs " are pasted from the Adobe website...
    Mac OS
    PowerPC® G5 or multicore Intel® processor
    Mac OS X v10.4.11–10.5.4
    Java™ Runtime Environment 1.5 required for Adobe Version Cue® Server
    1GB of RAM or more recommended
    10.3GB of available hard-disk space for installation; additional hard-disk space required during installation (cannot install on a volume that uses a case-sensitive file system or on flash-based storage devices)
    1,024x768 display (1,280x800 recommended) with 16-bit video card
    Some GPU-accelerated features require graphics support for Shader Model 3.0 and OpenGL 2.0
    DVD-ROM drive
    QuickTime 7.4.5 software required for multimedia features
    Broadband Internet connection required for online services
    It might just work. It might take hours and hours to work though.
    Regards.
    Ian.

  • HT4106 I have a camera connection kit for my ipad 2. Can this kit be used with my new ipad air?... If so what adapter is required please

    I have a camera connection kit for my iPad 2. Can this kit be used with an iPad Air, and if so what adapter(s) are required, please?

    You can try the 30-pin to Lightning adapter, but I'm not sure if that'll work, so if you get it to experiment, make sure you can return it. Apple does sell an SD to Lightning adapter for the newer devices, so that should be a sure bet. The main downside is that your 30-pin camera kit came with both USB and SD for 29, and now the USB adapter is 19 and the SD adapter 29, so it's more expensive.
    You may want to look into third-party adapters, but make sure you use a reputable manufacturer (i.e. not a cheap no-name off Amazon), because iOS 7 brought with it software that looks for a signed chip in the adapter, and a chip is 'signed' when a company gets Apple's permission to make accessories. I would suggest sticking with companies like Belkin, Kensington and Targus. They seem to have a better reputation when making things and would have a signed chip.

  • Program ID Required for RFC Connection

    Hi All,
    I'm trying to create an RFC connection between SAP BI and SSIS (SQL Server Integration Services). In the SM59 screen I'm unable to get the appropriate Program ID required for the connection. Could someone please help me in knowing how to get it?

    Yogesh,
    While creating a TCP/IP type RFC destination, when you select the radio button "Registered Server Program", you have to provide a Program ID there. So, for your information:
    The Program ID is a string which can be anything, even your name.
    I know only R3 and XI integration, and in that case you make your RFC communication channel active first with a Program ID (anything) and with the logon credentials of the other system, plus the gateway host and service.
    Then use that Program ID in the RFC destination and test it.
    If you get an empty request, then the RFC destination is OK.
    BR,
    Alok

  • Requirements for participants - must they have Acrobat Connect Pro for meetings?

    I am soon participating in a "eLearning" program and we have been told that the meetings are within "Adobe Acrobat Connect Pro Meeting".
    It is not clearly stated anywhere if I need to subscribe and have this program installed myself or if it is actually only the host that needs the program.
    Is having Adobe Acrobat Connect Pro Meeting installed/subscribed a requirement for the participants?
    Thanks for any help - hope Adobe will state this more clearly on their webpages in the future.

    All the technical requirements for joining a meeting are listed here: Adobe Connect system requirements, web conferencing, Mac | Adobe Connect
    Basically you need to have Flash Player 11.2 or newer.

  • Unity Connections 8.6.x - Independent-persistent disk requirement for VM?

    Does anyone know why Unity Connection has a requirement for its vDisks to be set to independent-persistent mode? I don't see this requirement for any other UC application and would prefer not to set this mode, as it prevents us from taking a snapshot when the machine is shut down. I know that snapshots are not officially supported, but they are a great back-out plan in the case of doing maintenance off hours and deleting the snapshot before using the VM in production.
    Explanation of Independent Disks:
    Independent Disks are not affected by snapshots.
    Persistent - Changes are immediately and permanently written to the disk.
    Where I found the Cisco requirements:
    http://www.cisco.com/en/US/docs/voice_ip_comm/connection/8x/requirements/8xcucsysreqs.html
    "All virtual disks assigned to the Connection virtual machine must be configured  in independent-persistent mode, which provides the best storage performance."

    Yeah, I also noticed that the OVA did not apply this setting. I have turned this on manually, as this document states it is a requirement. I'm just wondering if this requirement has been dropped at some point, since that section of the document was last modified June 9th 2011. I can't seem to find the requirement anywhere else besides this document, and it is not stated anywhere that it was dropped.

  • Connection pool for ldap

    Hi
    My application is an interface to an LDAP directory. I have not used any open-source LDAP API to retrieve data from LDAP; I have written a connection pool that helps the application connect to LDAP. It's working fine, but it's creating threads that were never asked for.
    The ConnectionPool class takes care of connection storage and creation, while a housekeeping thread releases these connections when they have been idle for a given time.
    Can someone please help in finding the problem in the code that creates the additional threads?
    package com.ba.cdLookup.manager;
    import com.ba.cdLookup.exception.CDLookupException;
    import com.ba.cdLookup.server.CdLookupProperties;
    import java.util.Vector;
    import javax.naming.Context;
    import javax.naming.NamingException;
    public class HouseKeeperThread extends Thread {

        /** Apache Logger to log error/info/debug statements. */
        protected static org.apache.commons.logging.Log log = org.apache.axis.components.logger.LogFactory
                .getLog(HouseKeeperThread.class.getName());

        /** Singleton instance of the housekeeping thread. */
        private static HouseKeeperThread houseKeeperThread;

        /**
         * Close all existing connections.
         *
         * @param connections vector of pooled Context objects
         */
        private void closeConnections(Vector connections) {
            String methodIdentifier = "closeConnections";
            int numOfConn = connections.size();
            try {
                for (int i = 0; i < numOfConn; i++) {
                    Context context = (Context) connections.get(i);
                    if (context != null) {
                        context.close();
                        context = null;
                        connections.remove(i);
                        numOfConn--;
                        log.info(" connection name:" + context
                                + " removed. Threadcount =" + (connections.size()));
                    }
                }
            } catch (NamingException e) {
                String errMsg = "CDLdapBuilder connect() - failure while releasing connection."
                        + " Exception is " + e.toString();
                log.error(errMsg);
            } catch (Exception e) {
                String errMsg = "CDLdapBuilder connect() - failure while releasing connection."
                        + " Exception is " + e.toString();
                log.error(errMsg);
            }
        }

        /** Thread run method: periodically releases idle pooled connections. */
        public void run() {
            String methodIdentifier = "run";
            try {
                while (true) {
                    log.debug("house keeping :" + this + " ---sleep");
                    // sleep(100000);
                    sleep(CdLookupProperties.getHouseKeepConnectionTime());
                    log.debug("house keeping :" + this + " started after sleep");
                    ConnectionPool connectionPool = ConnectionPool
                            .getConnectionPool();
                    Vector connList = connectionPool.getAvailableConnections();
                    closeConnections(connList);
                }
            } catch (CDLookupException cde) {
                log.error(methodIdentifier + " " + cde.getStackTrace());
            } catch (InterruptedException ie) {
                log.error(methodIdentifier + " " + ie.getStackTrace());
            }
        }

        /**
         * @return the singleton housekeeping Thread
         */
        public static Thread getInstance() {
            if (houseKeeperThread == null) {
                houseKeeperThread = new HouseKeeperThread();
            }
            return houseKeeperThread;
        }
    }
    package com.ba.cdLookup.manager;
    import com.ba.cdLookup.exception.CDLookupException;
    import com.ba.cdLookup.server.CdLookupProperties;
    import com.ba.cdwebservice.schema.cdLookupPacket.LookupFailureReasons;
    import java.util.Properties;
    import java.util.Vector;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    /**
     * ConnectionPool class manages and allocates LDAP connections. It works as a lazy
     * binder and retrieves connections only when required. It doesn't allow
     * connections greater than the maximum number stated.
     * To retrieve a connection, the singleton method getConnectionPool is to be used,
     * which returns a thread-safe singleton object for the connection pool.
     */
    public class ConnectionPool implements Runnable {

        private int initialConnections = 0;
        private int maxConnections = 0;
        private boolean waitIfBusy = false;
        private Vector availableConnections, busyConnections;
        private boolean connectionPending = false;
        private static int threadCount = 0;

        /** classIdentifier */
        private final String classIdentifier = "ConnectionPool";

        /** Apache Logger to log error/info/debug statements. */
        protected static org.apache.commons.logging.Log log = org.apache.axis.components.logger.LogFactory
                .getLog(CDLdapBuilder.class.getName());

        /** JNDI context factory to use. */
        private String vendorContextFactoryClass = "com.sun.jndi.ldap.LdapCtxFactory"; // "com.ibm.jndi.LDAPCtxFactory";

        /** Default LDAP server used. */
        private String ldapServerUrl = "LDAP://test.ldap.com";

        /** Search base. */
        private String searchBase;

        /** Environment properties. */
        private Properties env;

        /** DirContext. */
        private javax.naming.directory.DirContext ctx;

        /** Default search base to be used in Corporate Directory searches. */
        private String defaultSearchBase = "dc=Pathway";

        /** Search criteria. */
        private String searchAttributes;

        /** Search filter to retrieve data from CD. */
        private String searchFilter;

        /**
         * CorporateDirectoryLookup constructor.
         * <p>
         * Loads the setup parameters from the properties file and stores them.
         * Makes a connection to the directory and sets the default search base.
         *
         * @throws CDLookupException
         */
        private ConnectionPool() throws CDLookupException {
            this.maxConnections = CdLookupProperties.getMaxConnection();
            this.initialConnections = CdLookupProperties.getInitialConnection();
            this.waitIfBusy = CdLookupProperties.isWaitIfBusy();
            this.searchBase = CdLookupProperties.getDefaultSearchBase();
            // for local env testing
            // this.maxConnections = 5;
            // this.initialConnections = 1;
            // this.waitIfBusy = true;

            /*
             * For keeping the number of connections in the connection pool:
             * if (initialConnections > maxConnections) { initialConnections = maxConnections; }
             */
            availableConnections = new Vector(maxConnections);
            busyConnections = new Vector(maxConnections);
            for (int i = 0; i < maxConnections; i++) {
                availableConnections.add(makeNewConnection());
            }
        }

        /**
         * ConnectionPoolHolder provides a thread-safe singleton
         * instance of the ConnectionPool class.
         */
        private static class ConnectionPoolHolder {

            /** Connection pool instance. */
            private static ConnectionPool connectionPool = null;

            /**
             * If no ConnectionPool object is present, creates an instance of
             * the ConnectionPool class and initiates a thread on it.
             *
             * @return ConnectionPool Returns the singleton object of the
             *         ConnectionPool class.
             * @throws CDLookupException
             */
            private static ConnectionPool getInstance() throws CDLookupException {
                if (connectionPool == null) {
                    connectionPool = new ConnectionPool();
                    new Thread(connectionPool).start();
                    // Initiate house keeping thread.
                    HouseKeeperThread.getInstance().start();
                }
                return connectionPool;
            }
        }

        /**
         * Returns the singleton object of the ConnectionPool class.
         *
         * @return ConnectionPool
         * @throws CDLookupException
         */
        public static ConnectionPool getConnectionPool() throws CDLookupException {
            return ConnectionPoolHolder.getInstance();
        }

        /**
         * getConnection retrieves connections to the corp directory. In case
         * there are no available connections in the pool it will try to create
         * one; if the max connection limit for the connection pool has been
         * reached, it waits to retrieve one.
         *
         * @return Context
         * @throws CDLookupException
         */
        public synchronized Context getConnection() throws CDLookupException {
            String methodIdentifier = "getConnection";
            if (!availableConnections.isEmpty()) {
                int connectionSize = availableConnections.size() - 1;
                DirContext existingConnection = (DirContext) availableConnections
                        .get(connectionSize);
                availableConnections.remove(connectionSize);
                /*
                 * If the connection on the available list is closed (e.g., it timed
                 * out), then remove it from the available list and repeat the
                 * process of obtaining a connection. Also wake up threads that
                 * were waiting for a connection because the maxConnection limit was
                 * reached.
                 */
                if (existingConnection == null) {
                    notifyAll(); // Freed up a spot for anybody waiting
                    return (getConnection());
                } else {
                    busyConnections.add(existingConnection);
                    return (existingConnection);
                }
            } else {
                /*
                 * Three possible cases: 1) You haven't reached the maxConnections
                 * limit. So establish one in the background if there isn't
                 * already one pending, then wait for the next available
                 * connection (whether or not it was the newly established one).
                 * 2) You reached the maxConnections limit and the waitIfBusy flag is
                 * false. Throw an exception in such a case. 3) You reached the
                 * maxConnections limit and the waitIfBusy flag is true. Then do the
                 * same thing as in the second part of case 1: wait for the next
                 * available connection.
                 */
                if ((totalConnections() < maxConnections) && !connectionPending) {
                    makeBackgroundConnection();
                } else if (!waitIfBusy) {
                    throw new CDLookupException("Connection limit reached", 0);
                }
                /*
                 * Wait for either a new connection to be established (if you
                 * called makeBackgroundConnection) or for an existing
                 * connection to be freed up.
                 */
                try {
                    wait();
                } catch (InterruptedException ie) {
                    String errMsg = "Exception raised =" + ie.getStackTrace();
                    log.error(errMsg);
                    throw new CDLookupException(classIdentifier, methodIdentifier,
                            errMsg, ie);
                }
                // connection freed up, so try again.
                return (getConnection());
            }
        }

        /*
         * You can't just make a new connection in the foreground when none are
         * available, since this can take several seconds with a slow network
         * connection. Instead, start a thread that establishes a new
         * connection, then wait. You get woken up either when the new
         * connection is established or if someone finishes with an existing
         * connection.
         */
        private void makeBackgroundConnection() {
            connectionPending = true;
            try {
                Thread connectThread = new Thread(this);
                log.debug("background thread created");
                connectThread.start();
            } catch (OutOfMemoryError oome) {
                log.error("makeBackgroundConnection =" + oome.getStackTrace());
            }
        }

        /** Thread run method: builds one connection in the background. */
        public void run() {
            String methodIdentifier = "run";
            try {
                Context connection = makeNewConnection();
                synchronized (this) {
                    availableConnections.add(connection);
                    connectionPending = false;
                    notifyAll();
                }
            } catch (Exception e) { // NamingException or OutOfMemory
                // Give up on the new connection and wait for an existing one
                // to free up.
                String errMsg = "Exception raised =" + e.getStackTrace();
                log.error(errMsg);
            }
        }

        /**
         * This explicitly makes a new connection. Called in the foreground when
         * initializing the ConnectionPool, and called in the background when
         * running.
         *
         * @return Context
         * @throws CDLookupException
         */
        private Context makeNewConnection() throws CDLookupException {
            String methodIdentifier = "makeNewConnection";
            Context context = null;
            env = new Properties();
            log.debug("inside " + methodIdentifier);
            try {
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        getVendorContextFactoryClass());
                env.put(Context.PROVIDER_URL, getLdapServerUrl());
                env.put("com.sun.jndi.ldap.connect.pool", "true");
                context = new InitialDirContext(env);
            } catch (NamingException e) {
                String errMsg = "CDLdapBuilder connect() - failure while attempting to contact "
                        + ldapServerUrl + " Exception is " + e.toString();
                throw new CDLookupException(classIdentifier, methodIdentifier,
                        errMsg, e, LookupFailureReasons.serviceUnavailable);
            } catch (Exception e) {
                String errMsg = "CDLdapBuilder connect() - failure while attempting to contact "
                        + ldapServerUrl + " Exception is " + e.toString();
                throw new CDLookupException(classIdentifier, methodIdentifier,
                        errMsg, e, LookupFailureReasons.serviceUnavailable);
            }
            log.info("new connection :" + (threadCount++) + " name =" + context);
            log.debug("exit " + methodIdentifier);
            return context;
        }

        /**
         * Releases a connection back to the free pool.
         *
         * @param context
         */
        public synchronized void free(Context context) {
            busyConnections.remove(context);
            availableConnections.add(context);
            // Wake up threads that are waiting for a connection
            notifyAll();
        }

        /**
         * @return int total number of connections (available + busy).
         */
        public synchronized int totalConnections() {
            return (availableConnections.size() + busyConnections.size());
        }

        /**
         * Close all the connections. Use with caution: be sure no connections
         * are in use before calling. Note that you are not <I>required</I> to
         * call this when done with a ConnectionPool, since connections are
         * guaranteed to be closed when garbage collected. But this method gives
         * more control regarding when the connections are closed.
         */
        public synchronized void closeAllConnections() {
            closeConnections(availableConnections);
            availableConnections = new Vector();
            closeConnections(busyConnections);
            busyConnections = new Vector();
        }

        /**
         * Close all existing connections.
         *
         * @param connections
         */
        private void closeConnections(Vector connections) {
            String methodIdentifier = "closeConnections";
            try {
                for (int i = 0; i < connections.size(); i++) {
                    Context context = (Context) connections.get(i);
                    if (context != null) {
                        log.info(" connection name:" + context
                                + " removed. Threadcount =" + (threadCount++));
                        context.close();
                        context = null;
                    }
                }
            } catch (NamingException e) {
                String errMsg = "CDLdapBuilder connect() - failure while attempting to contact "
                        + ldapServerUrl + " Exception is " + e.toString();
                log.error(errMsg);
            }
        }

        public synchronized String toString() {
            String info = "ConnectionPool(" + getLdapServerUrl() + ","
                    + getVendorContextFactoryClass() + ")" + ", available="
                    + availableConnections.size() + ", busy="
                    + busyConnections.size() + ", max=" + maxConnections;
            return (info);
        }

        /**
         * @return the defaultSearchBase
         */
        public final String getDefaultSearchBase() {
            return defaultSearchBase;
        }

        /**
         * @param defaultSearchBase the defaultSearchBase to set
         */
        public final void setDefaultSearchBase(String defaultSearchBase) {
            this.defaultSearchBase = defaultSearchBase;
        }

        /**
         * @return the ldapServerUrl
         */
        public final String getLdapServerUrl() {
            return ldapServerUrl;
        }

        /**
         * @param ldapServerUrl the ldapServerUrl to set
         */
        public final void setLdapServerUrl(String ldapServerUrl) {
            this.ldapServerUrl = ldapServerUrl;
        }

        /**
         * @return the vendorContextFactoryClass
         */
        public final String getVendorContextFactoryClass() {
            return vendorContextFactoryClass;
        }

        /**
         * @param vendorContextFactoryClass the vendorContextFactoryClass to set
         */
        public final void setVendorContextFactoryClass(
                String vendorContextFactoryClass) {
            this.vendorContextFactoryClass = vendorContextFactoryClass;
        }

        /**
         * @return the availableConnections
         */
        public final Vector getAvailableConnections() {
            return availableConnections;
        }
    }

    Hi,
    As the connection pool implementation has the bug of not extending beyond the minimum size, the workaround I use is MIN_CONN=100 and MAX_CONN=101, and I'm just waiting for the bug to get fixed. (Using the Netscape SDK for Java 4.0.)
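    As an aside (separate from the stray-thread question): the posted makeNewConnection() already sets com.sun.jndi.ldap.connect.pool=true, which switches on the JDK's own LDAP connection pooling. If that built-in pool is acceptable, the hand-written pool and housekeeping thread may not be needed at all. A minimal sketch of relying on it, with the standard sizing system properties (the server URL is a placeholder):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class BuiltInLdapPoolSketch {
        public static void main(String[] args) throws Exception {
            // JVM-wide tuning of the JDK LDAP connection pool
            System.setProperty("com.sun.jndi.ldap.connect.pool.initsize", "2");
            System.setProperty("com.sun.jndi.ldap.connect.pool.maxsize", "10");
            System.setProperty("com.sun.jndi.ldap.connect.pool.timeout", "300000"); // idle time in ms

            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://test.ldap.com:389"); // placeholder server
            env.put("com.sun.jndi.ldap.connect.pool", "true"); // same flag the posted code sets

            DirContext ctx = new InitialDirContext(env); // connection comes from the pool
            try {
                // ... perform searches with ctx ...
            } finally {
                ctx.close(); // returns the underlying connection to the pool
            }
        }
    }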

  • List of Manual Setup required for iSetup to work

    Hi All,
    This is Mugunthan from iSetup development. Based on my interaction with customers and Oracle functional experts, I have documented the list of manual setups that are required for smooth loading of selection sets, and I am sharing it here. Please let me know if anyone had to enter any other manual setup while using iSetup.
    Understanding iSetup
    iSetup is a tool to migrate and report on your configuration data. Various engineering teams from Oracle develop the APIs/programs that migrate the data across EBS instances, hence all your data is validated for all business cases and data consistency is guaranteed. It requires a good amount of functional setup knowledge and a bit of technical knowledge to use this tool.
    Prerequisite setup for Instance Mapping to work
    ·     ATG patch set level should be same across all EBS instances.
    ·     Copy DBC files of each other EBS instances participating in migration under $FND_SECURE directory (refer note below for details).
    ·     Edit sqlnet.ora to allow connections between DB instances (tcp.invited_nodes=(<source>,<central>)).
    ·     Make sure that same user name with iSetup responsibility exists in all EBS instances participating in migration.
    Note: The iSetup tool is capable of connecting to multiple EBS instances. To do so, it uses the DBC file information available under the $FND_SECURE directory. Let us consider three instances A, B & C, where A is the central instance, B is the source instance and C is the target instance. After copying the DBC files on all nodes, the $FND_SECURE directory would look like this on each machine.
    A => A.dbc, B.dbc, C.dbc
    B => A.dbc, B.dbc
    C => A.dbc, C.dbc
    Prerequisite for registering Interface and creating Custom Selection Set
    iSetup super role is mandatory to register and create custom selection set. It is not sufficient if you register API on central/source instance alone. You must register the API on all instances participating in migration/reporting.
    Understanding how to access/share extracts across instances
    Sharing iSetup artifacts
    ·     Only the exact same user can access extracts, transforms, or reports across different instances.
    ·     The “Download” capability offers a way to share extracts, transforms, and loads.
    Implications for Extract/Load Management
    ·     Option 1: Same owner across all instances
    ·     Option 2: Same owner in Dev, Test, UAT, etc – but not Production
    o     Extract/Load operations in non-Production instances
    o     Once thoroughly tested and ready to load into Production, download to desktop and upload into Production
    ·     Option 3: Download and upload into each instance
    Security Considerations
    ·     iSetup does not use SSH to connect between instances. It uses the Concurrent Manager framework to launch concurrent programs on the source and target instances.
    ·     iSetup does not write password to any files or tables.
    ·     It uses JDBC connectivity obtained through standard AOL security layer
    Common Incorrect Setups
    ·     Failure to complete/verify all of the steps in “Mapping instances”
    ·     DBC file should be copied again if EBS instance has been refreshed or autoconfig is run.
    ·     Custom interfaces should be registered in all EBS instances. Registering it on Central/Source is not sufficient.
    ·     The Standard Concurrent Manager should be up to pick up iSetup concurrent requests.
    ·     iSetup financial and SCM modules are supported from 12.0.4 onwards.
    ·     iSetup is not certified on RAC. However, you may still work with iSetup if you could copy the DBC file on all nodes with the same name as it had been registered through Instance Mapping screen.
    Installed Languages
    iSetup has a limitation where it cannot Load or Report if the number and type of installed languages and the DB charset differ between the Central, Source and Target instances. If this is your case, there is a workaround. Download the extract zip file to the desktop and unzip it. Edit AZ_Prevalidator_1.xml to match your target instance language and DB charset. Zip it back and upload it to the iSetup repository. Now you would be able to load to the target instance. You must ensure that this does not corrupt data in the DB. This is considered a customization, and any data issue coming out of this modification is not supported.
    Custom Applications
    Application data is the prerequisite for the most of the Application Object Library setups such as Menus, Responsibility, and Concurrent programs. iSetup does not migrate Custom Applications as of now. So, if you have created any custom application on source instance, please manually create them on the target instance before moving Application Object Library (AOL) data.
    General Foundation Selection Set
    Setup objects in General foundation selection set supports filtering i.e. ability to extract specific setups. Since most of the AOL setup data such as Menus, Responsibilities and Request Groups are shipped by Oracle itself, it does not make sense to migrate all of them to target instance since they would be available on target instance. Hence, it is strongly recommended to extract only those setup objects, which are edited/added, by you to target instance. This improves the performance. iSetup uses FNDLOAD (seed data loader) to migrate most of the AOL Setups. The default behavior of FNDLOAD is given below.
    Case 1 – Shipped by Oracle (Seed Data)
    FNDLOAD checks last_update_date and last_updated_by columns to update a record. If it is shipped by Oracle, the default owner of the record would be Oracle and it would skip these records, which are identical. So, it won’t change last_update_by or last_updated_date columns.
    Case 2 – Shipped by Oracle and customized by you
    If a record were customized in source instance, then it would update the record based on last_update_date column. If the last_update_date in the target were more recent, then FNDLOAD would not update the record. So, it won’t change last_update_by column. Otherwise, it would update the records with user who customized the records in source instance.
    Case 3 – Created and maintained by customers
    If a record were newly added/edited in source instance by you, then it would update the record based on last_update_date column. If the last_update_date of the record in the target were more recent, then FNDLOAD would not update the record. So, it won’t change last_update_by column. Otherwise, it would update the records with user who customized the records in source instance.
    Profiles
    HR: Business Group => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    HR: Security Profile => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    MO: Operating Unit => Set the Operating Unit name for which you would like to extract data from source instance. After loading Operating Unit onto the target instance, make sure that this profile option is set if required.
    Navigation path to do the above setup:
    System Administrator -> Profile -> System.
    Query for the above profiles and set the values accordingly.
    Descriptive & Key Flex Fields
    You must compile and freeze the flex field values before extracting using iSetup.
    Otherwise, it would result in partial migration of data. Please verify that all the data been extracted by reporting on your extract before loading to ensure data consistency.
    You can load the KFF/DFF data to the target instance even if the structures in the source and target instances are different, but only in the cases below.
    Case 1:
    Source => Loc1 (Mandate), Loc2 (Mandate), Loc3, and Loc4
    Target=> Loc1, Loc2, Loc3 (Mandate), Loc4, Loc5 and Loc6
    If you provide values for Loc1 (Mandate), Loc2 (Mandate), Loc3, Loc4, then locations will be loaded to target instance without any issue. If you do not provide value for Loc3, then API will fail, as Loc3 is a mandatory field.
    Case 2:
    Source => Loc1 (Mandate), Loc2 (Mandate), Loc3, and Loc4
    Target=> Loc1 (Mandate), Loc2
    If you provide values for Loc1 (Mandate), Loc2 (Mandate), Loc3 and Loc4 and load data to target instance, API will fail as Loc3 and Loc4 are not there in target instance.
    It is always recommended that KFF/DFF structure should be same for both source as well as target instances.
    Concurrent Programs and Request Groups
    The Concurrent Program API migrates the program definition (definition + parameters + executable) only. It does not migrate physical executable files under APPL_TOP; please use a custom solution to migrate executable files. Load Concurrent Programs prior to loading Request Groups. Otherwise, associated concurrent program metadata will not be moved even though the Request Group extract contains the associated Concurrent Program definition.
    Locations - Geographies
    If you have any custom Geographies, iSetup does not have any API to migrate this setup. Enter them manually before loading Locations API.
    Currencies Types
    iSetup does not have an API to migrate Currency Types. Enter them manually on the target instance after loading the Currency API.
    GL Fiscal Super User --> Setup --> Currencies --> Rates --> Types
    Associating Employee Details with a User
    The extract process does not capture employee details associated with users. So, after loading the employee data successfully on the target instance, you have to configure them again on target instance.
    Accounting Setup
    Make sure that all Accounting Setups that you wish to migrate are in status “Complete”. In progress or not-completed Accounting Setups would not be migrated successfully.
    Note: Currently iSetup does not migrate Sub-Ledger Accounting methods (SLA). Oracle supports some default SLA methods such as Standard Accrual and Standard Cash; you may make use of these two. If you want to use your own SLA method, then you need to create it manually on the target instances because iSetup does not have an API to migrate SLA. If a Primary Ledger is associated with Secondary Ledgers using a different Chart of Accounts, then mapping rules should be defined in the target instance manually. The mapping rule name should match the XML tag “SlCoaMappingName”. After that you would be able to load the Accounting Setup to the target instance.
    Organization API - Product Foundation Selection Set
    All Organizations which are defined in HR module will be extracted by this API. This API will not extract Inventory Organization, Business Group. To migrate Inventory Organization, you have to use Inventory Organization API under Discrete Mfg. and Distribution Selection Set. To extract Business Group, you should use Business Group API.
    Inventory Organization API - Discrete Mfg & Distribution Selection Set
    Inventory Organization API will extract Inventory Organization information only. You should use Inventory Parameters API to move parameters such as Accounting Information. Inventory Organization API Supports Update which means that you can update existing header level attributes of Inventory Organization on the target instance. Inventory Parameters API does not support update. To update Inventory Parameters, use Inventory Parameters Update API.
    We have a known issue where the Inventory Organization API migrates non-process-enabled organizations only. If your inventory organization is process enabled, you can migrate it with a simple workaround. Download the extract zip file to the desktop and unzip it. Navigate to the Organization XML and edit the XML tag <ProcessEnabledFlag>Y</ProcessEnabledFlag> to <ProcessEnabledFlag>N</ProcessEnabledFlag>. Zip the extract back up and upload it to the target instance. You can load the extract now. After successful completion of the load, you can manually enable the flag through the Forms UI. We are working on this issue and will update you once the patch is released to Metalink.
    Freight Carriers API - Product Foundation Selection Set
    The Freight Carriers API in the Product Foundation selection set requires Inventory Organization and Organization Parameters as prerequisite setup. These two APIs are available under the Discrete Mfg. and Distribution Selection Set. Also, the Freight Carriers API is available under the Discrete Mfg. and Distribution Selection Set with the names Carriers, Methods, Carrier-ModeServ, Carrier-Org. So, use the Discrete Mfg. selection set to load Freight Carriers. In the next rollup release the Freight Carriers API will be removed from the Product Foundation Selection Set.
    Organization Structure Selection Set
    It is highly recommended to set filter and extract and load data related to one Business Group at a time. For example, setup objects such as Locations, Legal Entities,Operating Units,Organizations and Organization Structure Versions support filter by Business Group. So, set the filter for a specific Business Group and then extract and load the data to target instance.
    List of mandatory iSetup Fwk patches*
    8352532:R12.AZ.A - 1OFF:12.0.6: Ignore invalid Java identifier or Unicode identifier characters from the extracted data
    8424285:R12.AZ.A - 1OFF:12.0.6:Framework Support to validate records from details to master during load
    7608712:R12.AZ.A - 1OFF:12.0.4:ISETUP DOES NOT MIGRATE SYSTEM PROFILE VALUES
    List of mandatory API/functional patches*
    8441573:R12.FND.A - 1OFF:12.0.4: FNDLOAD DOWNLOAD COMMAND IS INSERTING EXTRA SPACE AFTER A NEWLINE CHARACTER
    7413966:R12.PER.A - MIGRATION ISSUES
    8445446:R12.GL.A - Consolidated Patch for iSetup Fixes
    7502698:R12.GL.A - Not able to Load Accounting Setup API Data to target instance.
    Appendix_
    How to read logs
    ·     Logs are very important to diagnose and troubleshoot iSetup issues. Logs contain both functional and technical errors.
    ·     To find the log, navigate to View Detail screens of Extracts/ Transforms/Loads/Standard/Comparison Reports and click on View Log button to view the log.
    ·     Generic Loader (FNDLOAD or Seed data loader) logs are not printed as a part of main log. To view actual log, you have to take the request_id specified in the concurrent log and search for the same in Forms Request Search Window in the instance where the request was launched.
    ·     Functional errors are mainly due to
    o     Missing prerequisite data – You did not load one or more prerequisite APIs before loading the current API. For example, trying to load “Accounting Setup” without loading “Chart of Accounts” would result in this kind of error.
    o     Business validation failure – The setup is incorrect as per a business rule. For example, the start date cannot be greater than the end date.
    o     API does not support Update Records – If there is a matching record in the target instance and the API does not support update, then you would get this kind of error.
    o     You unselected Update Records while launching the load – If there is a matching record in the target instance and you do not select Update Records, then you would get this kind of error.
    Example – business validation failure
    o     VONAME = Branches PLSQL; KEY = BANKNAME = 'AIBC'
    o     BRANCHNAME = 'AIBC'
    o     EXCEPTION = Please provide a unique combination of bank number, bank branch number, and country combination. The 020, 26042, KA combination already exists.
    Example – business validation failure
    o     Tokens: VONAME = Banks PLSQL
    o     BANKNAME = 'OLD_ROYAL BANK OF MY INDIA'
    o     EXCEPTION = End date cannot be earlier than the start date
    Example – missing prerequisite data.
    o     VONAME = Operating Unit; KEY = Name = 'CAN OU'
    o     Group Name = 'Setup Business Group'
    o     ; EXCEPTION = Message not found. Application: PER, Message Name: HR_ORG_SOB_NOT_FOUND (Set of books not found for ‘Setup Business Group’)
    Example – technical or fwk error
    o     OAException: System Error: Procedure at Step 40
    o     Cause: The procedure has created an error at Step 40.
    o     Action: Contact your system administrator quoting the procedure and Step 40.
    Example – technical or fwk error
    o     Number of installed languages on source and target does not match.

    Mugunthan
    Yes, we have applied 11i.AZ.H.2. I am still getting several errors that we are trying to resolve.
    One of them is
    ===========>>>
    Uploading snapshot to central instance failed, with 3 different messages
    Error: An invalid status '-1' was passed to fnd_concurrent.set_completion_status. The valid statuses are: 'NORMAL', 'WARNING', 'ERROR'
    FND     at oracle.apps.az.r12.util.XmlTransmorpher.<init>(XmlTransmorpher.java:301)
         at oracle.apps.az.r12.extractor.cpserver.APIExtractor.insertGenericSelectionSet(APIExtractor.java:231)
    please assist.
    regards
    girish

  • Lync 2013 certificate requirements for multiple SIP domains

    Hi All,
    I am engaged with a client in respect of a Lync 2013 implementation, initially as a conferencing platform with a view to enabling EV functions (inc. PSTN conferencing) in the future. They initially need to support 30 SIP domains and eventually around 100 SIP domains, which is proving to be either not possible or severely cost prohibitive. Their current certificate provider, Thawte, can only support up to 25 SANs and has quoted them 5 figures. We tend to use GeoTrust as they are cheaper, but they also appear to have a limit of 25 SANs. GoDaddy appear to support up to 100 SANs for a pretty reasonable cost. My questions are as follows:
    Is there a way that I’m missing of reducing the number of SANs required on the Edge server?
    Use aliases for access edge FQDNs - Supported by desktop client but not by other devices so not really workable
    Don’t support XMPP federation therefore removing the need for domain name FQDNs for each SIP domain
    Is there a way that I’m missing of reducing the number of SANs required on the Reverse Proxy server?
    Friendly URL option 3 from this page:
    http://technet.microsoft.com/en-us/library/gg398287.aspx
    Client auto-configuration:
    i. Don’t support mobile client auto-configuration, in which case no lyncdiscover.sipdomain1.com DNS records or SANs would be required.
    ii. Support mobile client auto-configuration over HTTP only, in which case CNAME records are required for each SIP domain (lyncdiscover.sipdomain1.com, etc. pointing to lyncdiscover.designateddomain.com) but no SANs are required.
    iii. Support mobile client auto-configuration over HTTPS, in which case DNS records are required for each SIP domain and a SAN entry for each SIP domain is also required. This is because a DNS CNAME to another domain is not supported over HTTPS.
    If the answer to 1 and/or 2 is no, are there certificate providers that support over 100 SANs?
    How do certificate requirements differ when using the Lync 2013 hosting pack? I would think that this issue is something that a hosting provider would need to overcome.
    Would the Lync 2013 Hosting Pack work for this customer? The customer uses SPLA licensing, so I think they are eligible to use the hosting pack, but I'm not 100% sure it will work in their environment given that client connections are supposed to all come through the Edge where their tenants will be internal, and also given the requirement for an ACP for PSTN conferencing.
    Many thanks,

    Many thanks for the response.
    I was already planning to use option 3 from the below page for simple URLs to cut down on SAN requirement.
    http://technet.microsoft.com/en-us/library/gg398287.aspx
    What are the security concerns for publishing autodiscover over port 80? I.e. Is this only used for the initial download of the discovery record and then HTTPS is used for authentication? This seems to be the case from the following note on the below page:
    http://technet.microsoft.com/en-gb/library/hh690030.aspx
    “Mobile device clients do not support multiple Secure Sockets Layer (SSL) certificates from different domains. Therefore, CNAME redirection to different domains is not supported over HTTPS. For example, a DNS CNAME record for lyncdiscover.contoso.com that redirects to an address of director.contoso.net is not supported over HTTPS.
    In such a topology, a mobile device client needs to use HTTP for the first request, so that the CNAME redirection is resolved over HTTP. Subsequent requests then use HTTPS. To support this scenario, you need to configure your reverse proxy with a web publishing rule for port 80 (HTTP).
    For details, see "To create a web publishing rule for port 80" in Configuring the Reverse Proxy for Mobility. CNAME redirection to the same domain is supported over HTTPS. In this case, the destination domain's certificate covers the originating domain.”
    I don’t think SRV records for additional SIP domain access edge is a workable solution as this is not supported by some devices.
    As per the below article:
    http://blog.schertz.name/2012/07/lync-edge-server-best-practices/
    “The recommended approach for external client Automatic Sign-In when supporting multiple SIP domains is to include a unique Access Edge FQDN for each domain name in the SAN field.  This is no longer a requirement (it was in OCS) as it is possible to
    create a DNS Service Locator Record (SRV) for each additional SIP domain yet have them all point back to the same original FQDN for the Access Edge service (e.g. sip.mslync.net). 
    This approach will trigger a security alert in Windows Lync clients which can be accepted by the user, but some other clients and devices are unable to connect when the Automatic Sign-In process returns a pair of SRV and Host (A) records which do not share
    the same domain namespace.  Thus it is still best practice to define a unique FQDN for each additional SIP domain and include that hostname in the external Edge certificate’s SAN field”.
    ===================
    1. Basically the requirement is to initially provide Lync conferencing services (minus PSTN conferencing) to internal, external, federated and anonymous participants with a view to providing PSTN conferencing and therefore enterprise voice services later.
    2. The customer currently supports close to 100 SMTP domains and wants to align their SIP domains with these existing domains. The structure of their business is such that “XXX IT Services” provide the IT infrastructure for a collection of companies who
    fall under the XXX umbrella but are very much run as individual entities.
    Question:
    Would you agree that I’m going to need a SAN for every SIP domain’s access edge FQDN?
    Thanks.
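    As an aside, when planning the SAN list it can be useful to dump what an existing Edge or reverse proxy certificate already covers. A minimal Java sketch (the hostname is a placeholder; port 443 is assumed for the access edge/reverse proxy listener):

    import java.security.cert.X509Certificate;
    import java.util.List;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class SanLister {
        public static void main(String[] args) throws Exception {
            String host = "sip.example.com"; // placeholder access edge FQDN
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
                socket.startHandshake();
                X509Certificate cert = (X509Certificate) socket.getSession().getPeerCertificates()[0];
                // Each SAN entry is a [type, value] pair; type 2 = dNSName
                if (cert.getSubjectAlternativeNames() != null) {
                    for (List<?> san : cert.getSubjectAlternativeNames()) {
                        System.out.println(san.get(0) + " : " + san.get(1));
                    }
                }
            }
        }
    }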

  • Has anyone run the connection pooling for mysql & tomcat successfully?

    I'm trying to set up connection pooling. I'm following the how-to page at
    http://jakarta.apache.org/tomcat/tomcat-4.1-doc/jndi-datasource-examples-howto.html
    But when I test the DBTest/test.jsp file, Tomcat displays an error =
    could not load jdbc driver class 'null' (msdos)
    I have placed all the required .jar files in the Tomcat lib.
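    For reference, the lookup that a test page like that performs looks roughly like the sketch below (the JNDI name jdbc/TestDB is assumed from the how-to; this only runs inside the web application, since java:comp/env exists only in the container). The "could not load jdbc driver class 'null'" message usually means the driverClassName parameter was not found under the resource name the application looked up, i.e. the Resource/ResourceParams entries are missing, misnamed, or declared in the wrong place in server.xml.

    import java.sql.Connection;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // Call this from a servlet or JSP inside the web app
    public class PoolLookupCheck {
        public static String check() throws Exception {
            Context initCtx = new InitialContext();
            Context envCtx = (Context) initCtx.lookup("java:comp/env");
            DataSource ds = (DataSource) envCtx.lookup("jdbc/TestDB"); // must match the <Resource name="..."> entry
            try (Connection conn = ds.getConnection()) {
                // If driverClassName/url were never picked up, this is where it fails
                return "Got pooled connection: " + conn;
            }
        }
    }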
    Below is the configuration I did to the server.xml file:
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN" debug="0">
    <!-- Uncomment these entries to enable JMX MBeans support -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
    debug="0"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
    debug="0"/>
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved">
    </Resource>
    <ResourceParams name="UserDatabase">
    <parameter>
    <name>factory</name>
    <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
    </parameter>
    <parameter>
    <name>pathname</name>
    <value>conf/tomcat-users.xml</value>
    </parameter>
    </ResourceParams>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Tomcat-Standalone">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 4.0 documentation bundle for more detailed
    instructions):
    * Download and install JSSE 1.0.2 or later, and put the JAR files
    into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8081 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8080" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="100" debug="0" connectionTimeout="20000"
    useURIValidationHack="false" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to -1 -->
    <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true"
         acceptCount="100" debug="0" scheme="https" secure="true"
    useURIValidationHack="false" disableUploadTimeout="true">
    <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
    clientAuth="false" protocol="TLS" />
    </Connector>
    -->
    <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8009" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" connectionTimeout="20000"
    useURIValidationHack="false"
    protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <!--
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
    port="8009" minProcessors="5" maxProcessors="75"
    acceptCount="10" debug="0"/>
    -->
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8082" minProcessors="5" maxProcessors="75"
    enableLookups="true" disableUploadTimeout="true"
    acceptCount="100" debug="0" connectionTimeout="20000"
    proxyPort="80" useURIValidationHack="false" />
    -->
    <!-- Define a non-SSL legacy HTTP/1.1 Test Connector on port 8083 -->
    <!--
    <Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8083" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- Define a non-SSL HTTP/1.0 Test Connector on port 8084 -->
    <!--
    <Connector className="org.apache.catalina.connector.http10.HttpConnector"
    port="8084" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Standalone" defaultHost="localhost" debug="0">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.3
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    debug="0" resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host -->
    <Host name="localhost" debug="0" appBase="webapps"
    unpackWARs="true" autoDeploy="true">
    <Context path="/my-jsp" docBase="c:\JSP-Files" debug="0"
    privileged="true" reloadable="true" />
         <Context path="" docBase="c:\Inetpub\wwwroot" debug="0" privileged="true" />
    <Context path="/sharon" docBase="C:\Tomcat 4.1\webapps\sharon" debug="0" privileged="true" />
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn"
    debug="0"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Logger shared by all Contexts related to this virtual host. By
    default (when using FileLogger), log files are created in the "logs"
    directory relative to $CATALINA_HOME. If you wish, you can specify
    a different directory with the "directory" attribute. Specify either a
    relative (to $CATALINA_HOME) or absolute path to the desired
    directory.-->
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
         timestamp="true"/>
    <!-- Define properties for each web application. This is only needed
    if you want to set non-default properties, or have web application
    document roots in places other than the virtual host's appBase
    directory. -->
    <!-- Tomcat Root Context -->
    <!--
    <Context path="" docBase="ROOT" debug="0"/>
    -->
    <!-- Tomcat Examples Context -->
    <Context path="/examples" docBase="examples" debug="0"
    reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_DBTest_log." suffix=".txt"
    timestamp="true"/>
    <Ejb name="ejb/EmplRecord" type="Entity"
    home="com.wombat.empl.EmployeeRecordHome"
    remote="com.wombat.empl.EmployeeRecord"/>
    <!-- If you wanted the examples app to be able to edit the
    user database, you would uncomment the following entry.
    Of course, you would want to enable security on the
    application as well, so this is not done by default!
    The database object could be accessed like this:
    Context initCtx = new InitialContext();
    Context envCtx = (Context) initCtx.lookup("java:comp/env");
    UserDatabase database =
    (UserDatabase) envCtx.lookup("userDatabase");
    -->
    <!--
    <ResourceLink name="userDatabase" global="UserDatabase"
    type="org.apache.catalina.UserDatabase"/>
    -->
    <!-- PersistentManager: Uncomment the section below to test Persistent
              Sessions.
    saveOnRestart: If true, all active sessions will be saved
    to the Store when Catalina is shutdown, regardless of
    other settings. All Sessions found in the Store will be
    loaded on startup. Sessions past their expiration are
    ignored in both cases.
    maxActiveSessions: If 0 or greater, having too many active
    sessions will result in some being swapped out. minIdleSwap
    limits this. -1 means unlimited sessions are allowed.
    0 means sessions will almost always be swapped out after
    use - this will be noticeably slow for your users.
    minIdleSwap: Sessions must be idle for at least this long
    (in seconds) before they will be swapped out due to
    maxActiveSessions. This avoids thrashing when the site is
    highly active. -1 or 0 means there is no minimum - sessions
    can be swapped out at any time.
    maxIdleSwap: Sessions will be swapped out if idle for this
    long (in seconds). If minIdleSwap is higher, then it will
    override this. This isn't exact: it is checked periodically.
    -1 means sessions won't be swapped out for this reason,
    although they may be swapped out for maxActiveSessions.
    If set to >= 0, guarantees that all sessions found in the
    Store will be loaded on startup.
    maxIdleBackup: Sessions will be backed up (saved to the Store,
    but left in active memory) if idle for this long (in seconds),
    and all sessions found in the Store will be loaded on startup.
    If set to -1 sessions will not be backed up, 0 means they
    should be backed up shortly after being used.
    To clear sessions from the Store, set maxActiveSessions, maxIdleSwap,
    and maxIdleBackup all to -1, saveOnRestart to false, then restart
    Catalina.
    -->
              <!--
    <Manager className="org.apache.catalina.session.PersistentManager"
    debug="0"
    saveOnRestart="true"
    maxActiveSessions="-1"
    minIdleSwap="-1"
    maxIdleSwap="-1"
    maxIdleBackup="-1">
    <Store className="org.apache.catalina.session.FileStore"/>
    </Manager>
              -->
    <Environment name="maxExemptions" type="java.lang.Integer"
    value="15"/>
    <Parameter name="context.param.name" value="context.param.value"
    override="false"/>
    <Resource name="jdbc/EmployeeAppDb" auth="SERVLET"
    type="javax.sql.DataSource"/>
    <Resource name="jdbc/TestDB"
         auth="Container"
         type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/TestDB">
    <parameter>
         <name>factory</name>
         <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
         </parameter>
         <!-- Maximum number of dB connections in pool. Make sure you
         configure your mysqld max_connections large enough to handle
         all of your db connections. Set to 0 for no limit.
         -->
         <parameter>
         <name>maxActive</name>
         <value>100</value>
         </parameter>
         <!-- Maximum number of idle dB connections to retain in pool.
         Set to 0 for no limit.
         -->
         <parameter>
         <name>maxIdle</name>
         <value>30</value>
         </parameter>
         <!-- Maximum time to wait for a dB connection to become available
         in ms, in this example 10 seconds. An Exception is thrown if
         this timeout is exceeded. Set to -1 to wait indefinitely.
         -->
         <parameter>
         <name>maxWait</name>
         <value>10000</value>
         </parameter>
    <!-- MySQL dB username and password for dB connections -->
    <parameter>
    <name>user</name>
    <value>javauser</value>
    </parameter>
    <parameter>
    <name>password</name>
    <value>javadude</value>
    </parameter>
    <!-- Class name for mm.mysql JDBC driver -->
    <parameter>
    <name>driverClassName</name>
    <value>org.gjt.mm.mysql.Driver</value>
    </parameter>
    <!-- The JDBC connection url for connecting to your MySQL dB.
         The autoReconnect=true argument to the url makes sure that the
         mm.mysql JDBC Driver will automatically reconnect if mysqld closed the
         connection. mysqld by default closes idle connections after 8 hours.
         -->
         <parameter>
         <name>url</name>
         <value>jdbc:mysql://localhost:3306/javatest?autoReconnect=true</value>
    </parameter>
    </ResourceParams>
    <Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"/>
    <ResourceParams name="mail/Session">
    <parameter>
    <name>mail.smtp.host</name>
    <value>localhost</value>
    </parameter>
    </ResourceParams>
    <ResourceLink name="linkToGlobalResource"
    global="simpleValue"
    type="java.lang.Integer"/>
    </Context>
    </Host>
    </Engine>
    </Service>
    <!-- The MOD_WEBAPP connector is used to connect Apache 1.3 with Tomcat 4.0
    as its servlet container. Please read the README.txt file coming with
    the WebApp Module distribution on how to build it.
    (Or check out the "jakarta-tomcat-connectors/webapp" CVS repository)
    To configure the Apache side, you must ensure that you have the
    "ServerName" and "Port" directives defined in "httpd.conf". Then,
    add lines like these to the bottom of your "httpd.conf" file:
    LoadModule webapp_module libexec/mod_webapp.so
    WebAppConnection warpConnection warp localhost:8008
    WebAppDeploy examples warpConnection /examples/
    The next time you restart Apache (after restarting Tomcat, if needed)
    the connection will be established, and all applications you make
    visible via "WebAppDeploy" directives can be accessed through Apache.
    -->
    <!-- Define an Apache-Connector Service -->
    <!--
    <Service name="Tomcat-Apache">
    <Connector className="org.apache.catalina.connector.warp.WarpConnector"
    port="8008" minProcessors="5" maxProcessors="75"
    enableLookups="true" appBase="webapps"
    acceptCount="10" debug="0"/>
    <Engine className="org.apache.catalina.connector.warp.WarpEngine"
    name="Apache" debug="0">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="apache_log." suffix=".txt"
    timestamp="true"/>
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    </Engine>
    </Service>
    -->
    </Server>
    Please help!!!

    Do you have your driver jar in Tomcat\common\lib?
    If so, check your classpath; it could be that.
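    For context, a minimal sketch (not from the original posts) of how servlet or JSP code running inside this Tomcat instance could obtain the pooled jdbc/TestDB DataSource defined in the server.xml above. It assumes the MySQL driver jar sits in Tomcat's common/lib and that web.xml declares a matching <resource-ref>; the class and method names are made up for illustration, and the JNDI lookup only works inside the container.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class TestDBHelper {
        // Call this from a servlet or JSP running inside Tomcat.
        public static void pingDatabase() throws Exception {
            // Look up the container-managed pool under java:comp/env.
            Context initCtx = new InitialContext();
            Context envCtx = (Context) initCtx.lookup("java:comp/env");
            DataSource ds = (DataSource) envCtx.lookup("jdbc/TestDB");

            Connection conn = ds.getConnection();
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT 1");
                while (rs.next()) {
                    System.out.println("Database answered: " + rs.getInt(1));
                }
                rs.close();
                stmt.close();
            } finally {
                conn.close(); // returns the connection to the pool
            }
        }
    }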

  • Connection pool for big web application

    Hi,
    I am developing an application that has more than 20 classes, and each class uses the database.
    I made a connection bean and used it for database connectivity, but as my code expanded and the application ran for longer periods I started getting errors about the connection limit.
    I then switched to a connection pool, so that the whole application draws on one pool of connections.
    Now I want:
    1 - the whole application to use that connection pool, without my having to initialize it in each class.
    2 - (the main problem) to be able to connect to the database as several different users at the same time, and those connections should remain active.
    My application will be used at enterprise level and will stay up for months.
    The following is my connection pool:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Vector;

    import oracle.jdbc.driver.OracleDriver;

    public class ConnectionPool implements Runnable {
        private Log log = new Log(); // Log is an application-specific logging helper (not shown)
        private String driver, url, username, password;
        private int maxConnections;
        private boolean waitIfBusy;
        private Vector availableConnections, busyConnections;
        private boolean connectionPending = false;

        public ConnectionPool(String driver, String url, String username, String password,
                              int initialConnections, int maxConnections, boolean waitIfBusy) {
            this.driver = driver;
            this.url = url;
            this.username = username;
            this.password = password;
            this.maxConnections = maxConnections;
            this.waitIfBusy = waitIfBusy;
            if (initialConnections > maxConnections) {
                initialConnections = maxConnections;
            }
            availableConnections = new Vector(initialConnections);
            busyConnections = new Vector();
            try {
                for (int i = 0; i < initialConnections; i++) {
                    availableConnections.addElement(makeNewConnection());
                }
            } catch (Exception e) {
                log.write(e.getMessage());
            }
        } // EO constructor

        public synchronized Connection getConnection() {
            if (!availableConnections.isEmpty()) {
                log.write("Total connections=" + availableConnections.size());
                Connection existingConnection = (Connection) availableConnections.lastElement();
                int lastIndex = availableConnections.size() - 1;
                availableConnections.removeElementAt(lastIndex);
                // If the connection on the available list is closed (e.g. it timed out), remove it
                // and repeat the process of obtaining a connection. Also wake up threads that were
                // waiting for a connection because the maxConnections limit was reached.
                try {
                    if (existingConnection.isClosed()) {
                        notifyAll(); // Freed up a spot for anybody waiting
                        return getConnection();
                    } else {
                        log.write("in available connection");
                        busyConnections.addElement(existingConnection);
                        return existingConnection;
                    }
                } catch (SQLException se) {
                    log.write(se.getMessage());
                }
            } else {
                // Three possible cases:
                // 1) You haven't reached the maxConnections limit. Establish a connection in the
                //    background if there isn't already one pending, then wait for the next available
                //    connection (whether or not it is the newly established one).
                // 2) You reached the maxConnections limit and waitIfBusy is false: report the error.
                // 3) You reached the maxConnections limit and waitIfBusy is true: do the same thing
                //    as in the second part of case 1 and wait for the next available connection.
                if ((totalConnections() < maxConnections) && !connectionPending) {
                    makeBackgroundConnection();
                } else if (!waitIfBusy) {
                    log.write("Connection limit reached");
                }
                // Wait for either a new connection to be established (if makeBackgroundConnection
                // was called) or for an existing connection to be freed up.
                try {
                    wait();
                } catch (InterruptedException ie) {
                    log.write(ie.getMessage());
                }
                // Someone freed up a connection, so try again.
                return getConnection();
            } // EO main if-else
            return null;
        } // EO getConnection method

        // You can't just make a new connection in the foreground when none are available, since
        // this can take several seconds over a slow network connection. Instead, start a thread
        // that establishes a new connection, then wait. You get woken up either when the new
        // connection is established or when someone finishes with an existing connection.
        private void makeBackgroundConnection() {
            connectionPending = true;
            try {
                Thread connectThread = new Thread(this);
                connectThread.start();
            } catch (OutOfMemoryError oome) {
                // Give up on the new connection
                log.write(oome.getMessage());
            }
        }

        public void run() {
            try {
                Connection connection = makeNewConnection();
                synchronized (this) {
                    availableConnections.addElement(connection);
                    connectionPending = false;
                    notifyAll();
                }
            } catch (Exception e) { // SQLException or OutOfMemoryError
                // Give up on the new connection and wait for an existing one to free up.
                log.write(e.getMessage());
            }
        }

        // This explicitly makes a new connection. Called in the foreground when initializing
        // the ConnectionPool, and called in the background when running.
        private Connection makeNewConnection() {
            Connection connection = null;
            try {
                // Load the database driver if it is not already loaded
                // Class.forName(driver);
                DriverManager.registerDriver(new OracleDriver());
                // Establish a network connection to the database
                connection = DriverManager.getConnection(url, username, password);
                if (connection.isClosed()) {
                    log.write("ooooooops no connection");
                } else {
                    log.write("yahoooo get connection");
                }
            } catch (Exception e) {
                log.write(e.getMessage());
            }
            return connection;
        }

        public synchronized void free(Connection connection) {
            busyConnections.removeElement(connection);
            availableConnections.addElement(connection);
            // Wake up threads that are waiting for a connection
            notifyAll();
        }

        public synchronized int totalConnections() {
            return availableConnections.size() + busyConnections.size();
        }

        /** Close all the connections. Use with caution: be sure no connections are in use
         *  before calling. Note that you are not <I>required</I> to call this when done with
         *  a ConnectionPool, since connections are guaranteed to be closed when garbage
         *  collected. But this method gives more control regarding when the connections
         *  are closed. */
        public synchronized void closeAllConnections() {
            closeConnections(availableConnections);
            availableConnections = new Vector();
            closeConnections(busyConnections);
            busyConnections = new Vector();
        }

        private void closeConnections(Vector connections) {
            try {
                for (int i = 0; i < connections.size(); i++) {
                    Connection connection = (Connection) connections.elementAt(i);
                    if (!connection.isClosed()) {
                        connection.close();
                    }
                }
            } catch (SQLException sqle) {
                // Ignore errors; garbage collect anyhow
                log.write(sqle.getMessage());
            }
        }
    }
    I got this code from the internet, and it also mentions that you should use the following code:
    public class BookPool extends ConnectionPool {
        private static BookPool pool = null;

        private BookPool(...) {
            super(...); // Call parent constructor
        }

        public static synchronized BookPool getInstance() {
            if (pool == null) {
                pool = new BookPool(...);
            }
            return (pool);
        }
    }
    If someone wants to use it for the whole application, they should get the connection pool through BookPool.
    Now the main point is that I do not understand the BookPool class - could someone explain it to me?
    And second, by using it can I connect to several different DB users at the same time? If not, how can I do that?
    It would be good if someone could explain it with a little code.
    Thanks
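    To make the elided parts above concrete, here is a minimal sketch of how such a BookPool singleton is typically completed. The driver, URL, credentials, and pool sizes below are hypothetical placeholders, not values from the original post.

    // Hypothetical concrete version of the BookPool singleton wrapper around ConnectionPool.
    // The driver class, URL, user, password, and pool sizes are placeholders only.
    public class BookPool extends ConnectionPool {
        private static BookPool pool = null;

        private BookPool() {
            // Call the parent constructor with fixed settings for this particular database/user.
            super("oracle.jdbc.driver.OracleDriver",
                  "jdbc:oracle:thin:@dbhost:1521:ORCL",   // placeholder URL
                  "bookuser", "bookpass",                 // placeholder credentials
                  5,      // initialConnections
                  50,     // maxConnections
                  true);  // waitIfBusy
        }

        public static synchronized BookPool getInstance() {
            if (pool == null) {
                pool = new BookPool();
            }
            return pool;
        }
    }

    Each wrapper like this pins one driver/URL/user combination, so to connect as several different database users at the same time you would create one such singleton (or one ConnectionPool instance) per user, rather than trying to serve every user from a single pool.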

    If this is a real, serious application and not just for practice, then why are you trying to write your own connection pool implementation?
    You'd better use Jakarta's Commons Pool and Commons DBCP, which are widely used, well tested implementations of connection pools.
    If this is a web application running in a J2EE container, you'd normally configure the connection pool in the container. The container will register the pool in JNDI and you'd look it up in JNDI from your application.
    Here's some documentation on how to do that in Tomcat 5.5: JNDI Datasource HOW-TO
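    To illustrate the suggestion above, here is a minimal standalone use of Commons DBCP 1.x, reusing the same connection settings as the jdbc/TestDB resource in the server.xml earlier in this thread. In a web application you would normally let Tomcat build this pool from the JNDI configuration instead of constructing it yourself; the class name DbcpExample is made up for illustration.

    import java.sql.Connection;

    import org.apache.commons.dbcp.BasicDataSource;

    public class DbcpExample {
        public static void main(String[] args) throws Exception {
            // Configure a pooling DataSource with the same settings as the jdbc/TestDB resource.
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("org.gjt.mm.mysql.Driver");
            ds.setUrl("jdbc:mysql://localhost:3306/javatest?autoReconnect=true");
            ds.setUsername("javauser");
            ds.setPassword("javadude");
            ds.setMaxActive(100);
            ds.setMaxIdle(30);
            ds.setMaxWait(10000);

            Connection conn = ds.getConnection();
            try {
                System.out.println("Got pooled connection: " + !conn.isClosed());
            } finally {
                conn.close(); // returns the connection to the pool rather than closing it
            }
            ds.close(); // shut the pool down when the application exits
        }
    }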

  • Printing requirements for Bi publisher report

    Hi all,
    Can anyone tell me what setups I need to check on my server side so that I can print reports using the bursting API? The problem is that I cannot debug anything from the log file - there is no connection error or any other exception, yet I still do not see the documents printed.
    This is part of my log file:
    POST /printers/u260 HTTP/1.1
    Host: u260:631
    User-Agent: Oracle XML Publisher 5.6.3
    Connection: Keep-Alive
    Transfer-Encoding: chunked
    Content-Type: application/ipp
    <<<
    IPP version: 10
    operation id: 410
    charset: utf-8
    request id: 1
    -- operation attrs --
    [1]attributes-charset:utf-8
    [1]attributes-natural-language:en-us
    -- printer attrs --
    -- job attrs --
    From this I understood that there is no problem with the connection, but somehow it is not printing.
    Any help will be appreciated, as I am having a tough time figuring this out.
    Thanks
    Harsha
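    One additional check, not from the original thread: since the log shows the bursting engine posting to the printer at u260:631, a quick standalone test that this IPP endpoint is reachable from the machine running the delivery can rule out basic network or firewall problems. The class name IppPortCheck is made up; the host and port are taken from the log above.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class IppPortCheck {
        public static void main(String[] args) {
            String host = "u260";  // printer host from the log above
            int port = 631;        // standard IPP port, also from the log
            try {
                Socket socket = new Socket();
                socket.connect(new InetSocketAddress(host, port), 5000); // 5 second timeout
                System.out.println("TCP connection to " + host + ":" + port + " succeeded.");
                socket.close();
            } catch (IOException e) {
                System.out.println("Cannot reach " + host + ":" + port + " - " + e.getMessage());
            }
        }
    }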

    You need about 40 GB total. Also make sure that your /tmp and /var/tmp have about 2GB free in them, otherwise the installation will fail. (I can't remember the exact numbers, though.)
    I agree, the system requirements are not set out anywhere in any straightforward documentation. You need to read at least three different documents to piece together the requirements, and the results will still be incomplete.
