Connecting to multiple schemas using datasources file

I am trying to connect to multiple Oracle schemas on the same database using a 10.1.3 version of the datasources.xml file. Has anyone done this? If yes, can you please post an example?

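A rough, hypothetical sketch of one common approach: define one connection pool and managed data source per schema user in the 10.1.3 datasources.xml, each under its own JNDI name, and look them up separately in application code. The JNDI names jdbc/SchemaADS and jdbc/SchemaBDS below are assumptions, not names from any real configuration, and the exact datasources.xml element names should be checked against your OC4J documentation.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class TwoSchemaLookup {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Hypothetical JNDI names; each would map to its own datasource entry
            // (one per schema user) in datasources.xml.
            DataSource schemaA = (DataSource) ctx.lookup("jdbc/SchemaADS");
            DataSource schemaB = (DataSource) ctx.lookup("jdbc/SchemaBDS");
            Connection a = schemaA.getConnection();
            Connection b = schemaB.getConnection();
            try {
                // Each connection authenticates as a different schema user.
                System.out.println("First datasource user: " + a.getMetaData().getUserName());
                System.out.println("Second datasource user: " + b.getMetaData().getUserName());
            } finally {
                a.close();
                b.close();
            }
        }
    }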

Similar Messages

  • Connecting to multiple clusters using WAR file

    I am trying to connect to multiple clusters by creating separate WAR (web archive) files deployed in Apache Tomcat.
    My intention is to have each WAR act as a node of a separate cluster. I started with the 'jmx-console' example available on the Tangosol website.
    So, my understanding is that even though all the WAR files are loaded in the same JVM (Tomcat), since each of these WARs has its own classloader, we can have multiple Tangosol nodes in the same JVM.
    So I have two basic questions:
    *) If I use System.setProperty("tangosol.coherence.override", myOverrideFile-x) in the classloader of each WAR, can I have the nodes connect to different clusters? Isn't System.setProperty() setting the property for the whole JVM? If so, how can multiple classloaders running in the same JVM connect to different clusters?
    *) Where should I place the config/override XML files under the Tomcat directory structure for them to be accessible to the classloader?

    Hi sumax,
    > Thanks for your response, Robert. Maybe I did not understand your solution fully; I thought your idea mostly talks about how to configure the override files, and not how to configure the individual nodes to use different override files.
    That part is taken care of by the fact that each war file loads the tangosol-coherence-override.xml from its own WEB-INF/classes directory.
    Classloading from a web app in Tomcat resolves classpath classes/resources in the following order:
    1. java.* and javax.* and com.sun.* packages are loaded from the system classloader. This, I believe, cannot be circumvented on a Sun JVM with a classloader extending a JDK classloader class.
    2. Reloadable JSP classloader (loads servlet classes generated from JSP pages). This is webapp specific, each webapp sees its own generated servlet classes.
    3. Reloadable Webapp classloader (loads from WEB-INF/classes and WEB-INF/lib of the war). This is webapp specific, each webapp sees its own WEB-INF/classes and WEB-INF/lib.
    4. Common classloader (loads from ${TOMCAT_HOME}/common/classes and ${TOMCAT_HOME}/common/lib). This is used by all webapps and the Tomcat container itself. You should not put anything here except libraries for resource drivers, e.g. JDBC drivers, or anything which needs to be published in the JNDI server in a non-application-specific way.
    5. Shared classloader (loads from ${TOMCAT_HOME}/shared/classes and ${TOMCAT_HOME}/shared/lib). This is shared between all webapps but not by the Tomcat container. You can put libraries here with which you want to communicate between webapps. Any singletons in classes in this classloader will be shared between webapps. A place for configuration files which are shared or if they have the application name in their filenames (so that they are not accidentally used by another application).
    > The question still lingering in my mind is: how do I tell the classloader which override file to load?
    A tangosol-coherence-override.xml with different content should be put in the WEB-INF/classes directory of each war file. Each of those files will be loaded only by its containing webapp. This filename is defined in the deployment-mode-specific override files in coherence.jar or tangosol.jar (I don't remember which, off the top of my head), which all chain to this same tangosol-coherence-override.xml file.
    That override file can contain all your overrides, if you do not want to change those between restarts, or it can chain with its xml-override attribute (as shown in the example) to another file with the application name in its filename.
    This chaining allows each webapp to refer to a differently named override file, so all those override files can be put into shared/classes folder. The uniquely named override file in shared/classes allows you to change the override configuration between restarts.
    As for cluster address and port, you can specify those in any of the override files.
    The full chaining path is the following (if you used the tangosol-coherence-override.xml example in my first reply):
    tangosol-coherence.xml -> tangosol-coherence-override-prod.xml -> tangosol-coherence-override.xml (you provide this file in your webapp's WEB-INF/classes) -> tangosol-coherence-override-warfilename.xml (you provide this file in ${TOMCAT_HOME}/shared/classes)
    I hope this makes it clear.
    Best regards,
    Robert
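    To illustrate Robert's point that the override file is resolved per webapp classloader (rather than through a JVM-wide system property), here is a small, hypothetical check you could drop into a servlet or startup listener in each WAR; it only reports which copy of tangosol-coherence-override.xml the calling classloader sees.

    import java.net.URL;

    public class OverrideLocationCheck {
        // Minimal sketch: call this from code inside each webapp (e.g. a startup
        // servlet) so the context classloader is that webapp's classloader.
        public static void report() {
            ClassLoader cl = Thread.currentThread().getContextClassLoader();
            URL url = cl.getResource("tangosol-coherence-override.xml");
            // In Tomcat this resolves WEB-INF/classes of the calling WAR first,
            // so each WAR sees its own override file.
            System.out.println("tangosol-coherence-override.xml resolved from: " + url);
        }
    }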

  • How can I create an Excel sheet with multiple tabs using UTL_FILE?

    How can I create an Excel sheet with multiple tabs using UTL_FILE?
    Can anyone help me?

    Jaggy,
    I gave you the most suitable answer on your own thread yesterday:
    Re: How to Generating Excel workbook with multiple worksheets

  • Object reference not set to an instance of an object error when generating a schema using flat file schema wizard.

    I have a csv file that I need to generate a schema for. I am trying to generate a schema using the Flat File Schema Wizard, but I keep getting an "Object reference not set to an instance of an object." error when I click the Next button after specifying the properties of the child elements in the wizard. At the end a schema file is generated, but it contains an empty root record with no child elements.
    I thought maybe this was because I didn't have my project checked out from the Visual SourceSafe db first, but I tried again with the project checked out and got the same error.
    I also tried creating a brand new project and generating a schema for it but got the same error.
    I am not sure what is causing Null Reference exception to be thrown and there is nothing in the Windows event log that would tell me more about the problem.
    I am using Visual Studio 2008 for my BizTalk development.
    I would appreciate it if someone has any insights on this issue.

    Hi,
    To test your environment, create a new BizTalk project outside of source control.
    Create a simple csv file on the file system.
    Name,City,State
    Bob,New York,NY
    Use the Flat file schema Wizard to create the flat file schema from your simple csv instance.
    Validate the schema.
    Test the schema using your csv instance.
    This will help you determine if everything is OK with your environment.
    Thanks,
    William

  • Save file in multiple directories using receiver file adapter?

    Hi,
    Is it possible to save a file to multiple directories using the receiver file adapter?
    Regards,
    Ashish

    Well, there is a roundabout way to do that.
    The idea is to use multi-mapping: a 1:n mapping.
    1) Map your message to two different record set nodes. Since you want to use the same file, both mappings will look exactly the same. Make sure that the file path and file name are part of the output payload message.
    2) In the file adapter configuration, make sure the file name and file path are taken from these payload fields. You can use a context object to refer to these fields. Voila, the files are created in the two directories you mentioned.
    Of course, the simplest way is to route the same message to two business systems/services and write them out using two communication channels.
    Arvind R
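
    A related alternative (not what Arvind describes above, which reads the path from the payload) is to set the target directory and file name from a mapping user-defined function via adapter-specific message attributes. The sketch below is only a rough illustration: the attribute names are the commonly documented ones for the file adapter, but the exact mapping-runtime classes vary by PI release, and the helper class itself is hypothetical.

    import com.sap.aii.mapping.api.DynamicConfiguration;
    import com.sap.aii.mapping.api.DynamicConfigurationKey;

    public class TargetFileHelper {
        // Hypothetical helper: in a real UDF, the DynamicConfiguration instance
        // comes from the mapping runtime's transformation parameters.
        public static void setTarget(DynamicConfiguration conf, String directory, String fileName) {
            DynamicConfigurationKey dirKey =
                DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "Directory");
            DynamicConfigurationKey fileKey =
                DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
            conf.put(dirKey, directory);   // receiver channel must have ASMA enabled
            conf.put(fileKey, fileName);
        }
    }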

  • Connection to multiple databases using a single EJB

    How can I connect to multiple databases (using @PersistenceContext) from an EJB?
    Do I need separate Entity Managers corresponding to each database, to which I simply send my queries?
    I am using Glassfish Application Server
    Netbeans IDE
    Java Derby Database
    Oracle Database
    Java Persistence API
    Thanks in Advance

    Yes, you need a persistence context and thus entity manager per database. Depending on what you want to achieve you may also need to go to the next level in your skill set and learn all about distributed transactions.
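
    As a minimal sketch of that answer: inject one EntityManager per persistence unit, where each unit in persistence.xml points at a different data source. The unit names (derbyPU, oraclePU) and the Customer/Order entities below are assumptions for illustration only.

    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class ReportingBean {

        // One persistence context (and thus entity manager) per database.
        @PersistenceContext(unitName = "derbyPU")
        private EntityManager derbyEm;

        @PersistenceContext(unitName = "oraclePU")
        private EntityManager oracleEm;

        public List<?> findCustomers() {
            // Runs against the Derby persistence unit.
            return derbyEm.createQuery("SELECT c FROM Customer c").getResultList();
        }

        public List<?> findOrders() {
            // Runs against the Oracle persistence unit.
            return oracleEm.createQuery("SELECT o FROM Order o").getResultList();
        }
    }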

  • Connecting to multiple schemas

    I am trying to connect to multiple Oracle schemas on the same database using a 10.1.3 version of the datasources.xml file. Has anyone done this? If yes, can you please post an example?

    I tried it, but the application behaviour is weird. For example, when I save a record, the saved data is not displayed; sometimes old data is displayed.

  • Multiple JNDI connections and connecting to multiple schemas

    Currently I have some code that makes a connection to a specific schema in the oracle database.
    I need to somehow connect to more than one schema and do some resultset processing on tables in two different schemas.
    How do I establish connections to two schemas with JNDI?
    Is this possible?
    Thanks.

    Have you looked at XA datasources and distributed transactions?
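
    Building on that suggestion, here is a rough sketch of what this could look like with two container-managed XA datasources and a bean-managed transaction. The JNDI names and table names are assumptions, and error handling is reduced to the bare minimum.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class CrossSchemaWriter {

        public void writeToBothSchemas() throws Exception {
            InitialContext ctx = new InitialContext();
            // Hypothetical JNDI names; each XA datasource is configured for a different
            // schema user so the container can enlist both in one transaction.
            DataSource dsA = (DataSource) ctx.lookup("jdbc/SchemaA_XA");
            DataSource dsB = (DataSource) ctx.lookup("jdbc/SchemaB_XA");
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

            utx.begin();
            Connection connA = dsA.getConnection();
            Connection connB = dsB.getConnection();
            try {
                PreparedStatement psA = connA.prepareStatement(
                    "INSERT INTO schema_a_table (id, payload) VALUES (?, ?)"); // hypothetical table
                psA.setInt(1, 1);
                psA.setString(2, "written in schema A");
                psA.executeUpdate();
                psA.close();

                PreparedStatement psB = connB.prepareStatement(
                    "INSERT INTO schema_b_table (id, payload) VALUES (?, ?)"); // hypothetical table
                psB.setInt(1, 1);
                psB.setString(2, "written in schema B");
                psB.executeUpdate();
                psB.close();

                utx.commit(); // both inserts commit together, or not at all
            } catch (Exception e) {
                utx.rollback();
                throw e;
            } finally {
                connA.close();
                connB.close();
            }
        }
    }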

  • Can I drop the connection, user and schema used for the associated repository?

    Hi
    I have created a connection, user and schema, associated a repository with it, and then migrated a database from a different product to Oracle using the above-mentioned connection, user, schema and associated repository.
    This is on 11g XE.
    Can I delete them after the migration?
    Yours sincerely.

    Yes, you can. Permission to drop those artifacts is granted, except for the target schema where the converted objects are. The migration repository user/connection can be dropped.

  • Unable to create sample schema using response file

    Installed oracle 11g r1 on Linux server using response file (silent mode). I wanted to install the sample schema for testing so I updated the sample_schemas section in $ORACLE_HOME/assistants/dbca/templates/dbcreate.dbc
    <option name="SAMPLE_SCHEMA" value="true">
    <tablespace id="USERS"/>
    </option>
    After the installation I noticed sample schemas HR, SH were not created.
    - Do we have to manually create the sample schemas by running the individual scripts?
    - Also, I could not find the master script mksample.sql in the $ORACLE_HOME/demo/schema directory.
    Please let me know if I am missing any parameter in dbcreate.dbc.
    Thank you,

    You can't install the sample schemas manually, since they come in a "pluggable tablespace" in the Examples media, which is automatically appended to your database by DBCA. If you missed choosing that option, you need to get them created through the Examples media CD and follow the instructions given here.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28328/installation.htm#I4367
    HTH
    Aman....

  • Multiple schemas used in APEX?

    Hi all,
    I posted a thread here needing some comments or suggestions on database design.
    I just wanted to ask for comments from APEX developers as well: is it recommended to have more than one schema if I use Oracle Application Express as the front-end tool?
    Considering that having multiple schemas means multiple workspaces, will there be features that I won't be able to use, such as a single sign-on for all applications, in this case?
    What I want to know is: are there any other ways to have a user sign in to one application and be authenticated in all applications even though those applications are in different workspaces?
    thanks
    allen

    Allen,
    One workspace doesn't mean one schema. You can have multiple schemas assigned to one workspace.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • Connect JBoss to MySQL using datasource, both servers are on different machines

    HI,
    In my application, I use a JBoss server and a MySQL database. JBoss and the MySQL server are on two different machines. I am using a MySQL datasource, not DriverManager, to get the connection. I configured mysql-ds.xml and put it in the %JBOSS_HOME%\server\default\deploy directory.
    I have also put the MySQL connector jar in the %JBOSS_HOME%\server\default\lib directory.
    mysql-ds.xml file
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- $Id: mysql-ds.xml 41016 2006-02-07 14:23:00Z acoliver $ -->
    <!-- Datasource config for MySQL using 3.0.9 available from:
    http://www.mysql.com/downloads/api-jdbc-stable.html
    -->
    <datasources>
        <local-tx-datasource>
            <jndi-name>JBossAtWorkDS</jndi-name>
            <connection-url>jdbc:mysql://192.168.1.3:3306/JBossAtWorkDS</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>sa</user-name>
            <password></password>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            -->
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql>
            -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool - MySQLValidConnectionChecker is preferred for newer drivers
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
            -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
                <type-mapping>mySQL</type-mapping>
            </metadata>
        </local-tx-datasource>
    </datasources>
    The code to make the connection is as follows:
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // CarDAO, CarDTO and ServiceLocator are the application's own classes.
    public class JDBCCarDAO implements CarDAO {

        private static final String DATA_SOURCE = "java:comp/env/jdbc/JBossAtWorkDS";

        public JDBCCarDAO() {
        }

        public List findAll() {
            List carList = new ArrayList();
            DataSource dataSource = null;
            Connection conn = null;
            Statement stmt = null;
            ResultSet rs = null;
            try {
                // Look up the datasource bound by mysql-ds.xml and run the query.
                dataSource = ServiceLocator.getDataSource(DATA_SOURCE);
                conn = dataSource.getConnection();
                stmt = conn.createStatement();
                stmt.execute("select * from CAR");
                rs = stmt.getResultSet();
                while (rs.next()) {
                    CarDTO car = new CarDTO();
                    car.setMake(rs.getString("MAKE"));
                    car.setModel(rs.getString("MODEL"));
                    car.setModelYear(rs.getString("MODEL_YEAR"));
                    carList.add(car);
                }
            } catch (Exception e) {
                System.out.println("No SQL connection is made: " + e);
            } finally {
                // Close JDBC resources in reverse order of acquisition.
                try {
                    if (rs != null) { rs.close(); }
                    if (stmt != null) { stmt.close(); }
                    if (conn != null) { conn.close(); }
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
            return carList;
        }
    }
    I am getting an error as follows:
    No SQL connection is made.
    org.jboss.util.NestedSQLException: Could not create connection; - nested throwable: (java.sql.SQLException: Unable to connect to any host due to exception: java.net.ConnectException: Connection timed out: connect) ...
    How can I get rid of this problem? Please, someone help me.

    If you have access to a copy of TimesTen 6.0 you can look in the AppServer Configuration Guide (appsrv.pdf) that was shipped with that release. This guide is currently being rewritten and so is not yet available for 7.0.
    If you can't get hold of a copy, e-mail me at [email protected] and I'll send it to you.
    Chris

  • Mac OSX desktop dropping connection with multiple copy processes & large files

    The servers are 6.5 SP3 running NFAP, and the Mac OS X clients are 10.4.2, updated. The volume the Macs are using is part of a cluster. The users mount the volumes on their Macs and everything is for the most part fine. If they grab a bunch of files and copy them from desktop to server it's fine, as long as it's only a single copy process. The users are part of the hi-res department and the files can be 1GB or larger.
    If they drag one or more large files, and then while that's copying they drag some more files, so both copy processes are running at once, quite often the volume will dismount from the desktop and you will get "unable to copy because some resource is unavailable". Sometimes the Finder crashes, sometimes not. Often the files that were partially copied get locked and the users need to reboot their Mac in order to delete them.
    I'm getting pretty desperate here; does anyone have an idea what's going on? I don't know if this is a Tiger thing, a large-file thing, a multiple-copy-stream thing, a NetWare thing or a Mac thing. We have hundreds of other users running OS X 10.3 and earlier who are not reporting this problem, but they also don't copy files that size. Someone please tell me they have seen this before; thanks very much. Oh, before going to 6.5 and NFAP the servers were 5.1 with Prosoft server and they never had the problem.
    Jake

    Thanks for your help. I have incidents open now with Apple and Novell; I hope one of them can provide something for us. We tried applying 6.5 SP4 to a test server: the problem still happened but was "better". The copy operations still quit, but with SP4 applied the volume did not dismount, or if it did, it remounted automatically because it was still connected after OKing through the copy errors.
    "Jeffrey D Sessler" <[email protected]> wrote in message
    news:[email protected]...
    >I tried two 2GB files. No problems at all but I'm in a 100% end-to-end
    >Gigabit environment. My server storage is also a very-fast SAN.
    >
    > Best,
    > Jeff
    >
    >
    > "Jacob Shorr" <[email protected]> wrote in message
    > news:[email protected]...
    >> Jeffrey,
    >>
    >> Have you tried the exact same test, dragging say two 500MB files in
    >> seperate
    >> copy operations? I hear what you're saying about the 10/100 link, but we
    >> don't run gigabit to the desktops, and we're not going to anytime soon.
    >> Even if that could resolve the issue we need something kind of other fix
    >> for
    >> our infrastructure. I will look into any errors on the switch.
    >>
    >> "Jeffrey D Sessler" <[email protected]> wrote in message
    >> news:[email protected]...
    >>> Well, considering that I'm not seeing the issue on my 10.4.2 machines
    >>> against my 6.5Sp3 servers, I'm not sure what you should do at this
    >>> point.
    >>> Since you say that the 10.3 machines don't have an issue, it makes it
    >> sound
    >>> to me like this is an Apple issue.
    >>>
    >>> The logs point at a communication issue... Is there anyway to get that
    >>> Mac
    >>> on to a Gigabit connection to see if you can duplicate it?
    >>>
    >>> The other option is to wait for 10.4.3 to be released and see if the
    >> problem
    >>> goes away.
    >>>
    >>> Again, on only a 10/100 link, one copy of a large file _will_ saturate
    >>> the
    >>> link.Perhaps 10.4.2 has an issue with this?
    >>>
    >>> Also, when you're doing the copy, what to the error counters in the
    >> switches
    >>> say?
    >>>
    >>> Jeff
    >>>
    >>> "Jacob Shorr" <[email protected]> wrote in message
    >>> news:[email protected]...
    >>> > There are definately no mis-matches. This has been checked and
    >> re-checked
    >>> > a
    >>> > dozen times. It's only on 10.4......we can replicate it on every 10.4
    >>> > machine, and we cannot replicate it on any machine that is 10.3. What
    >>> > should I do to go about getting this fixed, should I be contacting
    >>> > Apple
    >>> > or
    >>> > Novell? The speed is always good until it actually decides to drop
    >>> > and
    >>> > cut
    >>> > off.
    >>> >
    >>> >
    >>> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> > news:7jj%[email protected]...
    >>> >> Looks like communication between the Mac and the Netware server is
    >>> > dropping.
    >>> >> AFP in 10.3 and 10.4 support auto-reconnection but I'm sure that it
    >> will
    >>> >> fail the copy process.
    >>> >>
    >>> >> I'd first check to make sure that there are not any mis-matches on
    >>> >> the
    >>> >> switch e.g. the Mac is set to Auto (as it should be) but someone has
    >> set
    >>> > the
    >>> >> switch to a forced mode. Both should be auto. A duplex miss-match
    >>> >> could
    >>> >> cause the Mac not to see the heart beat back from the Novell server.
    >>> >>
    >>> >> Like I said, if the workstation is only on 10/100, a single copy
    >> process
    >>> > on
    >>> >> a G5 Mac will saturate that link. Adding more concurrent copies will
    >> only
    >>> >> result in everything slowing down and taking longer, or you'll get
    >>> >> the
    >>> >> dropped connections.
    >>> >>
    >>> >> Best,
    >>> >> Jeff
    >>> >>
    >>> >>
    >>> >> "Jacob Shorr" <[email protected]> wrote in message
    >>> >> news:Ybc%[email protected]...
    >>> >> > Take a look at the last entries in the system log right after it
    >>> > happened,
    >>> >> > let me know if it means anything to you. Thanks.
    >>> >> >
    >>> >> > Sep 29 13:26:10 yapostolides kernel[0]: AFP_VFS afpfs_mount:
    >>> >> > /Volumes/FP04SYS11, pid 210
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >> doing
    >>> >> > reconnect on /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > connect
    >>> >> > to
    >>> >> > the server /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Opening
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Logging
    >>> >> > in
    >>> >> > with uam 2 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > Restoring
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS
    >>> >> > afpfs_MountAFPVolume:
    >>> >> > GetVolParms failed 0x16
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > afpfs_MountAFPVolume failed 22 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > received
    >>> >> > VQ_DEAD event (32)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > posting
    >>> >> > to
    >>> >> > KEA to unmount /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > type
    >>> >> > 'afpfs', mounted on '/Volumes/FP04SYS11', from
    >>> >> > 'afp_0TQCV10QsPgy0TShVK000000-4340.2c000006', dead
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > found
    >> 1
    >>> >> > filesystem(s) with problem(s)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_unmount:
    >>> >> > /Volumes/FP04SYS11, flags 524288, pid 43
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> >> > news:GH%[email protected]...
    >>> >> >> We move large files all the time under SP3 with no issues however,
    >>> > there
    >>> >> > are
    >>> >> >> several finder/copy/afp issues in Tiger that are do to be fixed in
    >>> >> >> 10.4.3.
    >>> >> >>
    >>> >> >> Also, if you have any type of network issue such as duplex
    >> mis-matches
    >>> > or
    >>> >> >> are running say, only a 10/100 network, a single Mac can not only
    >>> >> >> transfer
    >>> >> >> more than 10MB/sec (filling the network pipe) or generate so many
    >>> >> > collisions
    >>> >> >> (duplex mis-match) that you could drop communication to the
    >>> >> >> server.
    >>> >> >>
    >>> >> >> What type of server (speed, disks, raid level, NIC speed) and what
    >>> >> >> type
    >>> >> >> of
    >>> >> >> network (switched gigabit, switched 10/100, shared 10/100, etc.)
    >>> >> >>
    >>> >> >> How long does it take to copy that single 1GB file to the server?
    >>> >> >>
    >>> >> >> Does a single copy process always work?
    >>> >> >>
    >>> >> >> Jeff
    >>> >> >>
    >>> >> >> "Jacob Shorr" <[email protected]> wrote in message
    >>> >> >> news:[email protected]...
    >>> >> >> > The servers are 6.5 SP3 running NFAP, the MAC OSX is 10.4.2
    >> updated.
    >>> >> > The
    >>> >> >> > volume the macs are using is part of a cluster. The users mount
    >> the
    >>> >> >> > volumes
    >>> >> >> > on their macs and everying is for the most part fine. If they
    >> grab
    >>> >> >> > a
    >>> >> >> > bunch
    >>> >> >> > of files and copy them from desktop to server it's fine as long
    >>> >> >> > as
    >>> > it's
    >>> >> >> > only
    >>> >> >> > a single copy process. The users are part of the hi-res
    >> department
    >>> > and
    >>> >> >> > the
    >>> >> >> > files can be 1GB or larger. If they drag one or more large
    >>> >> >> > files,
    >>> > and
    >>> >> >> > then
    >>> >> >> > while that's copying they drag some more files, so both copy
    >>> > processes
    >>> >> > are
    >>> >> >> > running at once....quite often the volume will dismount from the
    >>> >> >> > desktop
    >>> >> >> > and
    >>> >> >> > you will get unable to copy because some resource is
    >>> >> >> > unavailable.
    >>> >> >> > Sometimes
    >>> >> >> > the finder crashes, sometimes not. Often the files that were
    >>> > partially
    >>> >> >> > copied get locked and the users needs to reboot their Mac in
    >>> >> >> > order
    >>> >> >> > to
    >>> >> >> > delete
    >>> >> >> > them. I'm getting pretty desperate hear, anyone have an idea
    >> what's
    >>> >> > going
    >>> >> >> > on. I don't know if this is a Tiger thing or a large file thing
    >> or
    >>> >> >> > a
    >>> >> >> > multiple copy stream thing, a netware thing or a mac
    >>> >> >> > thing.....we
    >>> > have
    >>> >> >> > hundreds of other users running OSX 10.3 and earlier who are not
    >>> >> > reporting
    >>> >> >> > this problem, but they also don't copy files that size. Someone
    >>> > please
    >>> >> >> > tell
    >>> >> >> > me they have seen this before....thanks very much. Oh, before
    >> going
    >>> > to
    >>> >> >> > 6.5
    >>> >> >> > and NFAP the servers were 5.1 with Prosoft server and they never
    >> had
    >>> >> >> > the
    >>> >> >> > problem.
    >>> >> >> >
    >>> >> >> > Jake
    >>> >> >> >
    >>> >> >> >
    >>> >> >>
    >>> >> >>
    >>> >> >
    >>> >> >
    >>> >>
    >>> >>
    >>> >
    >>> >
    >>>
    >>>
    >>
    >>
    >
    >

  • FTP connection error while using the flat file adapter

    While using the file adapter on the receiver end for a proxy-to-file scenario, I am giving parameters like the server IP address and the port. I don't know which port to give; by default it gives me 21. Anyway, how do I check whether the connection is correct?
    Moreover:
    Message processing failed. Cause: com.sap.aii.af.ra.ms.api.RecoverableException: Error when getting an FTP connection from connection pool: com.sap.aii.af.service.util.concurrent.ResourcePoolException: Unable to create new pooled resource: ConnectException: Connection timed out: connect
    Please help me with this, thanks.

    Hi Sridhar,
    First check whether the server is started, and then check that you can reach the FTP server:
    go to Run -> cmd, ping the IP address you are using, and see whether you get a response from the FTP server.
    Try to log in to the FTP server you have mentioned in the CC using the user name and password, to check whether you have permission to log in to the server.
    Also check whether the folder you are trying to access gives you delete/read/write permissions.
    Restart the FTP server and try again.
    Regards
    Sridhar Goli
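
    If you want to check the connection, credentials and port outside of PI, a small stand-alone test along these lines can help. This sketch assumes the Apache Commons Net library is on the classpath, and the host, port, user, password and directory values are placeholders.

    import org.apache.commons.net.ftp.FTPClient;
    import org.apache.commons.net.ftp.FTPReply;

    public class FtpConnectivityCheck {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            // Placeholder host/port - use the values from your receiver channel.
            ftp.connect("10.0.0.1", 21);
            if (!FTPReply.isPositiveCompletion(ftp.getReplyCode())) {
                ftp.disconnect();
                throw new IllegalStateException("FTP server refused connection");
            }
            // Placeholder credentials - same user/password as in the channel.
            if (!ftp.login("ftpuser", "secret")) {
                throw new IllegalStateException("Login failed - check user/password");
            }
            // List the target directory to verify read permission on it.
            String[] names = ftp.listNames("/target/dir");
            System.out.println("Entries visible: " + (names == null ? 0 : names.length));
            ftp.logout();
            ftp.disconnect();
        }
    }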

  • Accessing multiple directories using single file adapter

    Hi All,
    I have data coming from two banks. I need to put it on the SAP file server. I have two sender adapters, one for each bank; is there a way I can use the same receiver adapter but create different directories to store the files?
    XIer

    Hi,
    In the file sender communication channel there is a check box for Advanced Selection for Source File. Check that and don't put anything in the exclusion list, which means it takes all the files from multiple directories. Please check this weblog, where the author has excluded some files in the directory; you don't need to exclude anything.
    /people/mickael.huchet/blog/2006/09/18/xipi-how-to-exclude-files-in-a-sender-file-adapter
    Regards,
    ---Satish
