Configuring Non-Emulated Data Sources

I have tried to configure a non-emulated data source, because my emulated data source complains about open connections (see earlier posting), but when I start up the server it throws an exception:
DataSource logwriter activated...
java.lang.ClassCastException: oracle.jdbc.pool.OracleXAConnectionCacheImpl
at com.evermind.server.Application.initDataSource(Application.java:1360)
at com.evermind.server.Application.initDataSources(Application.java:1842)
at com.evermind.server.Application.preInit(Application.java:388)
at com.evermind.server.Application.setConfig(Application.java:126)
at com.evermind.server.Application.setConfig(Application.java:111)
at com.evermind.server.ApplicationServer.initializeApplications(ApplicationServer.java:1316)
at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:1087)
at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:65)
at java.lang.Thread.run(Thread.java:484)
at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:47)
My data-sources.xml file contains the following:
<?xml version="1.0" standalone='yes'?>
<!DOCTYPE data-sources PUBLIC "Orion data-sources" "http://xmlns.oracle.com/ias/dtds/data-sources.dtd">
<data-sources>
     <data-source
          class="com.evermind.sql.OrionCMTDataSource"
          name="OracleDS"
          location="jdbc/OracleCMTDS1"          
          connection-driver="oracle.jdbc.driver.OracleDriver"
          username="sym"
          password="drive"
          url="jdbc:oracle:thin:@eureka:1521:sym"
          inactivity-timeout="30"          
     />
</data-sources>
Also, in the example in the spec, no ejb-location or xa-location is defined for the non-emulated data source. Are these required?
My bean transactions are container-managed.
Thanks in advance

Hi Paul,
What's an "emulated data source"?
The only things I changed in the "data-sources.xml" file were the "username",
"password" and "url" attributes -- and I am using the default "data-sources.xml"
file that came with my copy of OC4J version 1.0.2.2 (which I run on a
SUN Solaris 7 with Oracle 8.1.7.2)
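For reference, the entry in that default data-sources.xml looked roughly like the sketch below (reconstructed from memory, so treat the exact values as illustrative); note the ejb-location and xa-location attributes that the non-emulated example omits:

```xml
<data-source
     class="com.evermind.sql.DriverManagerDataSource"
     name="OracleDS"
     location="jdbc/OracleCoreDS"
     xa-location="jdbc/xa/OracleXADS"
     ejb-location="jdbc/OracleDS"
     connection-driver="oracle.jdbc.driver.OracleDriver"
     username="scott"
     password="tiger"
     url="jdbc:oracle:thin:@localhost:5521:oracle"
     inactivity-timeout="30"
/>
```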
Despite the portability claims, I don't think that Weblogic apps can
be ported unmodified to OC4J.
Unfortunately, I can't offer anything else given the lack of information you have supplied. What environment are you using, and what are you trying to do?
Currently I have successfully deployed and executed a J2EE application
(on OC4J; version details above), consisting of a Java client that
modifies the database via a BMP entity bean.
I can only suggest that you look at the following web sites which
may give you some more insight into how to develop, deploy and execute
J2EE applications using OC4J (not in any particular order):
http://www.atlassian.com
http://www.orionserver.com
http://www.orionsupport.com
http://www.elephantwalker.com
Good Luck,
Avi.

Similar Messages

  • Updating 2 schemas in 1 db = non-emulated datasource

    When updating two databases from within a single tx, you must use a non-emulated datasource (i.e. JTA support is required).
    Is this also true when updating two schemas in the same database within the same tx? The Oracle documentation does not seem to discuss this.
    Regards,
    Manoj.

    Manoj -- My apologies. I misread the question. If you reference another schema but only use a single data source definition then it is a 1 PC. You only need to configure 2 PC if you will be creating transactions across two different data sources and you want true 2 PC semantics. In fact, I'm not sure if in the pure Oracle database view of the world you wouldn't have to do the same thing but it has just been a while since I looked at that. Don't forget that you can also use basic database mechanisms like synonyms and views to reference the other schema.
    Thanks -- Jeff

  • Glassfish configuration for SQlite DataSource, Need a How-To

    I'm a NetBeans (NB) user and got into the GlassFish admin page to create a datasource.
    Unfortunately, I'm trying to configure an SQLite datasource, and I didn't see it in the list.
    I could probably have done it myself by editing the .xml and context.xml files, but I'm not really sure how the wizard works or what files are involved (be it a .properties file or something else).
    So please clue me in if there's something non-standard about this (and there obviously is).
    BTW, I'm using NB 6.5 and GlassFish 2, and I'll need to know how to configure it by hand, bypassing the wizard if possible (or adding SQLite capabilities to it).
    Here was a very good link on the topic of Glassfish and datasource, so it applies pretty well, all except for the SQlite connection.
    http://forums.java.net/jive/thread.jspa?threadID=30807
    Suggestions welcome, and most appreciated.

    Still interested in how to do this, but like I said, it likely needs to be done directly in the deployment descriptor and server.xml.
    From what I can see, it won't work through the wizard that is provided.

  • Concept: How can I handle a hole in a non-sap datasource?

    Hello,
    I want to discuss a problem with a non-SAP datasource here:
    We load data from an Oracle DB with the DB Connect technique. Each record we load has an IDATE (record created) and a UDATE (record changed). Based on the UDATE we create a kind of delta load:
    The UDATE is the selection criterion of the InfoPackage (full upload).
    In the start routine of the transfer rule we detect the oldest UDATE and store it in the tvarc table. This UDATE is the lower selection limit for UDATE in the next load, so we reload only the records that were changed after the last load.
    The data are transferred to an ODS (overwrite) and so on.
    That works perfectly!
    But now we have found out that in this non-SAP datasource it is possible to delete records directly. In SAP the usual procedure is that a reversal document is created to delete a record (for example FI documents). In effect, this non-SAP datasource creates a hole in the database: a record is deleted -> no UDATE change -> no change in BW. That means the record is still in BW!
    Have you had a similar problem? Or do you have an idea how we could fix this?
    I cannot load all data every day by full upload; that takes too much time for more than 2 million records from five different datasources.
    Thank you for your attention
    Ralf

    Hello,
    To close my post, here is my solution:
    - ODS A is filled by delta using the UDATE field.
    - ODS B is filled by full upload. It contains only the key fields of ODS A (the load
      needs only 20 minutes).
    - ODS C is filled from ODS A (key fields only) per full upload. In the start routine of
      the update rule there is a check: delete all data that are in ODS B.
      The result: ODS C contains only the records that have to be deleted from ODS A.
    - Full upload ODS C to ODS A, setting the recordmode to 'R' in the start routine.
      The result: these records are deleted from ODS A and the change log of ODS A is
      updated.
    Before the next load, the contents of ODS B and C are deleted.
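The ODS C derivation above (records whose keys exist in A but no longer in B) is essentially a key-set difference. A minimal Java sketch of that check, with hypothetical names since the real logic lives in an ABAP start routine:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DeletionDetector {
    // Keys loaded into ODS A (delta) minus keys in ODS B (full key load from
    // the source): whatever remains was deleted in the source system and must
    // be reverse-posted (recordmode 'R') into ODS A.
    static Set<String> deletedKeys(Set<String> odsAKeys, Set<String> odsBKeys) {
        Set<String> diff = new HashSet<>(odsAKeys);
        diff.removeAll(odsBKeys);
        return diff;
    }

    public static void main(String[] args) {
        Set<String> a = new HashSet<>(Arrays.asList("D1", "D2", "D3"));
        Set<String> b = new HashSet<>(Arrays.asList("D1", "D3"));
        System.out.println(deletedKeys(a, b)); // D2 was deleted at the source
    }
}
```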
    (cool picture:)
    ..........CUBE X
    ..............|
    .(delta)....|
    ..............|........ODS C(diff. A - B)
    ..............|........|....|
    ..............|..(R).|....|.\  (check)
    ..............|........|....|..\
    ............ODS A....ODS B(keys)
    .(delta)...|................| (full)
    ..........non sap systems
    It follows the principle of Sudhi's idea, but I did not do it with PSA, because I have five different DataSources: I would need five calls to identify the records that have to be deleted and five calls to edit the PSA.
    This way I load everything into the ODS and run the procedure only once.
    All the best Ralf

  • Creating a non transaction datasource

    All,
    I am using Quartz to fire Hibernate jobs. The Quartz scheduler runs as a servlet in Tomcat; the Hibernate job is a standalone app.
    The scheduler calls the app like so:
    Quartz > Shell > Hibernate.
    When I start the Hibernate jobs, they instantly go into a blocked state: basically, DB deadlocks. When I view the connections with
    MySQL Admin, I can see the connections are sleeping!
    I heard through the grapevine that a non-transactional datasource would solve this problem.
    How do I do this? Is it done in Tomcat, or do I need to change my Hibernate DB connection?
    Any ideas?

    You're deadlocking with another process, quite possibly with one of your earlier tasks that stalled for some reason.
    Turning off transactions is a dumb way to fix this: it stops you from finding out what the real problem is, curing the symptom but probably not the disease. Find out what's deadlocking and why, then fix that problem properly.

  • Migration of emulated datasource from 3.5 to BI 7

    Hi Experts,
    For a requirement I want to migrate an emulated datasource, 0MATERIAL_ATTR, to BI 7. Someone previously developed the complete dataflow in 3.x, and now I need to migrate everything to BI 7. Do I need to follow any sequence for this migration (i.e., first transfer rule, then update rule, then datasource), or can I migrate the datasource directly in BI 7 and then create the transformation? Can anybody please advise?
    Thanks & Regards
    Vinod Kumar

    Hi,
    1.     Migrate all UpdateRules
    Select the UpdateRule and press the right mouse button to get the context menu.
    Choose "Additional Functions" and select "Create Transformation".
    The UpdateRule is based on a 3.x InfoSource. The Transformation has to be created on an InfoSource (called a "New InfoSource"). A pop-up will ask you if you want to create a new InfoSource as a copy of the existing 3.x InfoSource. We need to have the new InfoSource, therefore choose "Copy InfoSource 3.x to New InfoSource" and press Enter.
    2.     Migrate all TransferRules, for each relevant Source System Release
    The migration of TransferRules is almost the same as the migration of UpdateRules. The main difference is that you should not create a new InfoSource during the migration process. You already created the InfoSource in the first step, during the migration of the UpdateRule. Reuse the already existing InfoSource; otherwise the link to the Transformation created before is missing.
    Please keep in mind that a Transformation is source-system dependent. Therefore the shadow-table logic is the same as already known for the TransferRules. Be aware that you have to build a Transformation pointing to a DataSource for each source system release that is needed (Basis release, e.g. 620, 640, 700, ...).
    In order to perform the migration as described at the beginning of this chapter, all TransferRules for the required SourceSystem relations must be active in the system.
    3.     Migrate the DataSource. Make sure that no TransferRule is still using the DataSource.
    The migration of the DataSource is the last step needed to migrate the whole dataflow. Please make sure that no TransferRules pointing to this DataSource are still in use. How this can be done will be added to the document later.
    TransferRules will be deleted when the DataSource is migrated. Choose the DataSource and select the entry "Migrate" in the context menu.
    4.     Test if everything is OK.
    5.     Delete the UpdateRules and the 3.x InfoSource if the InfoSource is not used by still-existing UpdateRules.
    Best regards,
    Frank

  • Emulated or non-emulated data source?

    Hi All,
    We are in the process of moving our application code from JDBC to TopLink. While we are in this process, the application code is a mixture of TopLink and JDBC calls (some individual methods are also a mixture). We are using an External Transaction Controller for TopLink. Now I am wondering whether an emulated data source is good enough, or should I go for a non-emulated data source in this situation? Can someone guide me on this? We have some connection pooling issues, like connections reaching the max and Time Out exceptions, etc.
    Giri.

    More to add to the above issue:
    I have a method which executes some JDBC and TopLink calls in a loop. The TopLink calls are releasing connections to the pool, but the JDBC calls are not. Both use the same data source.
    Giri.
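A common cause of this pattern is raw JDBC connections that are never explicitly closed, so they are never returned to the pool (TopLink closes its own). A minimal sketch of the close-always pattern, using a hypothetical stand-in class since a real DataSource isn't available here; with real JDBC the same try-with-resources shape applies to Connection, Statement, and ResultSet:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for java.sql.Connection so the sketch is self-contained.
class MockConnection implements AutoCloseable {
    static final AtomicInteger open = new AtomicInteger();
    MockConnection() { open.incrementAndGet(); }
    void execute(String sql) { /* pretend to run SQL */ }
    @Override public void close() { open.decrementAndGet(); }
}

public class ConnectionHygiene {
    // try-with-resources guarantees close() runs even if execute() throws,
    // so the connection always goes back to the pool.
    static void runJob(String sql) {
        try (MockConnection c = new MockConnection()) {
            c.execute(sql);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            runJob("UPDATE jobs SET state = 'DONE'");
        }
        System.out.println("open connections: " + MockConnection.open.get());
    }
}
```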

  • How to configure non-unicode in JVM of 1.4...

    Hi,
    Please explain to me how to configure non-Unicode bytes in the JVM, i.e. the system property set to false in JVM 1.4.1 and higher versions.
    waiting for reply,
    Santosh

    Hey cross-poster,
    This is the third time you posted this question, so I shouldn't really answer you..... but it's nearly Christmas and I'm bored at work.
    Firstly, if you don't know the question, then you'll never get the answer.
    Every string & character in Java is in Unicode.
    Think of Java as a Unicode black box.
    So to support other encoding schemes you must be able to read and write them out properly.
    The default encoding scheme is determined by your PC's regional settings and architecture (Windows/Unix/mainframe, for byte ordering).
    You normally don't want to mess with that.
    What I've done in the past to support arbitrary encoding schemes in text files and databases is force the user to select the encoding scheme themselves, then explicitly use it in my code.
    Look at java.io.InputStreamReader; it accepts an "encoding scheme" parameter, e.g. "US-ASCII" or "ISO8859_8".
    So you can read a text file which is encoded in a multi-byte format this way.
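A small round-trip sketch of that idea: write a file in an explicit encoding, then read it back through InputStreamReader with the same explicit charset (the charset choice here is just an example):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

public class EncodingDemo {
    // Write text in an explicit encoding, then read it back with the same
    // explicit encoding via InputStreamReader -- never rely on the default.
    static String roundTrip(String text, Charset cs) throws IOException {
        Path f = Files.createTempFile("enc", ".txt");
        try (Writer w = new OutputStreamWriter(Files.newOutputStream(f), cs)) {
            w.write(text);
        }
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(Files.newInputStream(f), cs))) {
            return r.readLine();
        } finally {
            Files.delete(f);
        }
    }

    public static void main(String[] args) throws IOException {
        // ISO-8859-1 encodes 'é' as the single byte 0xE9
        System.out.println(roundTrip("caf\u00e9", Charset.forName("ISO-8859-1")));
    }
}
```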
    That should at least give you some direction to ask the real question you want.
    regards,
    Owen

  • Configuring Multiple LDAP Datasources in VDS

    Hi,
    I'm trying to configure multiple LDAP datasources using VDS, one talking to AD and the other to Novell eDir. My LDAP connection strings work well, but when I start the service in VDS it never starts up; all I see is "Exception null". It does not throw any exception, yet it does not start the service. I've tried configuring a single datasource, which works fine. It fails when I combine the two datasources into one configuration. Has anyone configured multiple datasources within VDS? Not sure if you have encountered any problems.
    Thanks,
    Joe.P

    Are you just trying to bring in two LDAP data sources, or to do a join between them?
    Actually, I believe both are considered types of joins.
    You cannot just define two datasources and expect them to show up.

  • Can SOA consume business events using non-apps datasource ?

    Hi Gurus & Experts,
    We have a scenario where EBS raises a custom business event to be consumed by SOA.
    Everything works fine using the APPS login; however, we need to use a non-apps datasource in some environments (a custom schema user).
    Can SOA consume business events using a non-apps datasource?
    Please let me know.
    Thanks,
    Rev

    Hi Srini,
    I have a similar requirement too. Could you please send me the link for the OracleEBSAdapterUserGuide (b_28351)?
    Did you find out how to check whether WF_Listener is running?
    Thanks in advance
    Nutan

  • Configure webservices as datasource for coherence

    Is it possible to directly configure web services as a datasource for Coherence, so that after a preconfigured period of time the cache refreshes itself? If yes, please point me to samples/documentation for the same.

    Hi Tally,
    you can do almost that, but not exactly that, with out-of-the-box features only.
    You can use the refresh-ahead feature of the read-write backing map, with which a get on a key causes the entry value to be refreshed from the cache loader; the refreshed value is seen by the subsequent get on the same key.
    Alternatively, you can schedule your own periodic processing with a timer which fetches data from the backing storage and puts it into the cache.
    A mix of the two would be to have the periodic processing issue the get requests which trigger the refresh-ahead behaviour.
    Best regards,
    Robert

  • Non-JTS datasource

    Dumb question: what is the exact definition of a 'non-JTS datasource', and what are the recommended usages of such a datasource? The TopLink doc has little information on it. It says: "Use Non-JTS for: Specify if the session requires a non-JTS connection. Note: Normally, use this option for an application server when using cache synchronization".
    We use WAS connection pooling with Oracle databases. Some projects use an 'XA datasource' while others use a 'non-XA datasource'. In the sessions.xml files, we all use the <datasource> tag.
    What are the differences between the 'non-JTS datasource' referenced in TopLink and the 'non-XA datasource' concept?
    Also, I was told that all datasources in WAS 5.x are JTS datasources. I haven't found any information on this in the WAS documentation. Does anyone have more info on it?
    Thanks a lot for your assistance.
    Haiwei

    Andrei,
    I tried what you suggested, with WAS XA and non-XA datasources. Here is the code:

        UserTransaction transaction = (UserTransaction) new InitialContext().lookup(USER_TRANSACTION);
        transaction.begin();
        connection = ((DataSource) new InitialContext().lookup(NON_XA_DATASOURCE)).getConnection("user04", "user04");
        statement = connection.createStatement();
        numberOfRows = statement.executeUpdate(
            "UPDATE USER04.PERSON SET VERSION = VERSION + 1 WHERE ID = '9999'");
        connection.commit();
        transaction.rollback();

    With either the XA or the non-XA datasource, I got the exception below, so the version is not updated:

        java.sql.SQLException: DSRA9350E: Operation Connection.commit is not allowed during a global transaction

    This seems to suggest (or confirm) that the WAS datasources are JTS datasources.
    Any comment?
    Haiwie

  • BC4J with non-SQL datasource

    Hi
    Is it possible to use BC4J with non-SQL, non-relational datasources? For example, we are exploring the possibility of using a file-based XML datastore (NOT the XDB) as the back-end for some ADF screens.
    Thanks,
    Sean

    BC4J is designed for SQL databases.
    http://www.oracle.com/technology/products/jdev/collateral/tutorials/903/j2ee_bc4j/prnt/j2ee_bc4j.html#bc4j-faq

  • Websphere 5.0 and non-jts-datasource = 2PC exception!

    Hello all
    We're migrating a working WebLogic 8 app to WebSphere 5.0, and we have run into this problem.
    TopLink tries to enlist the NON-JTS datasource in the global transaction. In WebLogic we defined our non-JTS datasource as a non-transactional datasource, but there is no such option in WebSphere. What is going on??
    Please help
    TIA
    - Russ -

    Hello Rustam,
    WebSphere 5 throws exceptions when you try to get a non-JTA datasource while in a transaction; it seems to try to enlist it in the transaction, I think.
    This is more a WebSphere issue, since it means you cannot read outside of the transaction.
    There are 3 options:
    1) Don't define a non-JTA datasource at all in TopLink. The drawback is that there may be problems reading when there is no transaction, such as when you are using cache synch.
    2) Create your own datasource (outside of WebSphere) and place it in JNDI. Then have TopLink access it as a non-JTA datasource. Your datasource must be completely independent of WebSphere so that it does not attempt to associate with JTA.
    3) Use a TopLink-maintained read connection pool. You can use the non-jts-connection-url sessions.xml tag, which will use the login settings defined in your project.xml. I've not tested it, but you can also override the read pool in a preLogin event, which should look something like:
    public void preLogin(SessionEvent event) {
        DatabaseLogin dbLogin = new DatabaseLogin(new Oracle9iPlatform());
        dbLogin.setUserName("name");
        dbLogin.setPassword("password");
        dbLogin.setConnectionString("jdbc:oracle:thin:@ip:port:sid");
        // minCon/maxCon: your minimum and maximum read-pool sizes
        ConnectionPool readPool = new ReadConnectionPool("read", dbLogin, minCon, maxCon, event.getSession());
        event.getSession().setReadConnectionPool(readPool);
    }
    Best Regards,
    Chris Delahunt

  • Emulation Datasource

    Hi,
    I am a bit confused by the term EMULATION.
    The help doc says:
    "If the 3.x DataSource already exists in a data flow based on the old concept, you use emulation first to model the data flow with transformations and data transfer processes and then test it. During migration you can delete the data flow you were using before, along with the metadata objects."
    Regarding the statement "you use emulation first to model the data flow with transformations":
    I am on version 7.0 and using 3.x features, and now I want to convert them to 7.0 by creating transformations. I am on the dev server; transfer rules, update rules and the transfer structure exist for the datasource.
    Q1. Can anyone tell me in simple terms what an emulated datasource is?
    Q2. What is emulation, and what is the purpose of emulation?
    Q3. How can I achieve a clean migration by using emulation?
    Regards
    Annie

    Hi Annie,
    Emulation of a DataSource enables you to create Transformations and DTPs for a 3.x (R3TR ISFS) DataSource without migrating it to the 7.0 form (R3TR RSDS).
    Meaning of emulation: you can use the error stack with an emulated DataSource. In general, migrating to the new DataSource form ensures you have moved completely to the new dataflow concept in a consistent manner. The new dataflow has clear advantages from a performance standpoint, in that you can parallelize the data load. The error stack is a big advantage from a data quality standpoint. The overall approach using DTPs is a step forward in that the dataflow occurs in a more transparent manner in process chains and monitoring.
    What's more, you don't have to migrate the DataSource, so the existing UR and TR are not required to be deleted.
    Please read below on Migration(3.x to 7.x) & also restoration(7.x to 3.x) of DataSources.
    Migration: -
    When a 3.x DataSource (R3TR ISFS) is migrated, it is deleted and transferred to the new DataSource form (R3TR RSDS).
    The dependent 3.x objects mapping (R3TR ISMP) and transfer structure (R3TR ISTS) are deleted if they exist, and the InfoPackage (R3TR ISIP) and the PSA with requests that have already been loaded are transferred into the DataSource. The 3.x objects can also be exported as part of the migration process. This makes it possible to recover the metadata objects: the 3.x DataSource, the mapping and the transfer structure.
    Restoration:
    It is only possible to recover a 3.x DataSource if it was migrated with export. This allows the objects DataSource 3.x (R3TR ISFS), mapping (R3TR ISMP), and transfer structure (R3TR ISTS) to be restored to the same status they had before the migration.
    During the recovery, the DataSource (R3TR RSDS) is deleted. The system tries to retain the PSA. This is, however, not possible if the PSA was initially generated for the DataSource, because either there was no active transfer structure for the 3.x DataSource or it was loaded using IDoc.
    The dependent objects transformation (R3TR TRFN), Data transfer process (R3TR DTPA), and InfoPackage (R3TR ISIP) are retained. If you want to delete them, you must do this manually.
    Thanks for any points you choose to assign.
