Question about JAVA_XA in XE database

During initialization of an application using JBoss, I turned on SQL trace and found the entries below in the trace file.
Does this mean the JAVA_XA package is available in the XE database?
I have seen other threads saying that JAVA_XA is not available in XE.
Can anyone enlighten me?
========================================================
begin :1 := JAVA_XA.xa_start_new(:2,:3,:4,:5,:6); end;
call     count  cpu   elapsed  disk  query  current  rows
Parse        1  0.00     0.00     0      0        0     0
Execute      1  0.01     0.01     0     31        0     0
Fetch        0  0.00     0.00     0      0        0     0
total        2  0.01     0.01     0     31        0     0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 37
begin :1 := JAVA_XA.xa_commit_new (:2,:3,:4,:5); end;
call     count  cpu   elapsed  disk  query  current  rows
Parse        1  0.00     0.00     0      0        0     0
Execute      1  0.01     0.00     0      0        0     0
Fetch        0  0.00     0.00     0      0        0     0
total        2  0.01     0.00     0      0        0     0
========================================================

Hi
SQL> select object_type, owner from dba_objects
  2  where object_name='JAVA_XA';
no rows selected
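
If it helps: as far as I know, JAVA_XA is created by the Oracle JVM scripts, and XE does not ship the Java VM, so checking whether the JVM component is installed at all usually settles the question (a sketch, to be run as a DBA user):

SQL> -- Is the Java VM component registered in this database?
SQL> SELECT comp_id, comp_name, status
  2  FROM dba_registry
  3  WHERE comp_id = 'JAVAVM';

SQL> -- Is the Java option linked into this edition at all?
SQL> SELECT parameter, value
  2  FROM v$option
  3  WHERE parameter = 'Java';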

Similar Messages

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How exactly does the "On demand" cache group policy work? I know that an online cache group stores no data in the CDB and makes direct requests from the device to the backend, that DCN is based on updates pushed from the backend, and that scheduled is based on a time period, but I don't understand how "on demand" actually works, and why it has a time period too.
    2 - Is it possible to query the cache database tables to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]. Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
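    A quick way to eyeball those cache tables, assuming the CDB really is the SQL Anywhere "default" database described above (a sketch; adjust the LIKE pattern to your package name):

    -- List the MBO cache tables by their naming convention
    SELECT table_name
    FROM SYS.SYSTABLE
    WHERE table_name LIKE 'D1%'
    ORDER BY table_name;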

  • Questions about 1Z0-047 Oracle Database SQL Expert

    I am planning to take this exam and I have several questions:
    1) I am using Steve O'Hearn's 'SQL Certified Expert Exam Guide' book and this states the following about SQL functions:
    "Be sure to review the Oracle Database SQL Language Reference Manual and review the lengthy description of all of the SQL functions before taking the exam"
    In trial tests the book's information seemed to be enough, but what about the real exam? Is it necessary to study anything beyond this book? You can answer regarding the other exam objectives as well, if there is something I should read from other materials.
    2) The book states that I can add a NOT NULL constraint to a column that has null values if I specify a default value. I tried and cannot; I get an error. Is the book wrong, or do I misunderstand something?
    3) The book states that I cannot drop a NOT NULL constraint, but that I can get the job done using: ALTER TABLE table_name MODIFY column_name NULL;
    However, I tried and I can execute: alter table table_name drop constraint nameofnotnullconstraint;
    4) To use external tables, is only the read grant on the directory necessary, or also write?
    5) I understood from the book that to flash back a table (e.g. to before drop) I need row movement enabled on the table. But I tried, and I can perform this flashback operation on a table that does not have row movement enabled. How can this be explained?
    Big thanks in advance!

    #1) well, the manual is free; find here the SQL Language Reference - http://www.oracle.com/pls/db112/portal.all_books#index-SQL
    #2 & 3) If you proved it yourself, that settles it! (A quick sketch follows this reply.)
    #4) Here's a good article about external tables: http://www.oracle-developer.net/display.php?id=512
    I noted this paragraph in it, which might answer your question:
    In addition to the standard read-write Oracle directory that we need for our external table, we also need an additional executable directory object for the preprocessor. This directory defines the location of the executables used by the preprocessor (we will be using gzip below). As far as Oracle is concerned, an executable directory is one that has EXECUTE privileges granted on it (this is an 11g feature specifically to support the preprocessor).
    #5) don't know
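    For points #2 and #3, a minimal sketch (the throwaway table and constraint names are made up):

    -- #3: either of these removes the NOT NULL constraint
    CREATE TABLE t_demo (col1 NUMBER CONSTRAINT t_demo_nn NOT NULL);
    ALTER TABLE t_demo MODIFY (col1 NULL);
    -- or, on a fresh copy of the table, drop it by name:
    -- ALTER TABLE t_demo DROP CONSTRAINT t_demo_nn;

    -- #2: a DEFAULT only applies to rows inserted later, so adding NOT NULL
    -- to a column that already contains NULLs still fails; backfill first:
    -- UPDATE t_demo SET col1 = 0 WHERE col1 IS NULL;
    -- ALTER TABLE t_demo MODIFY (col1 DEFAULT 0 NOT NULL);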

  • Question about how a RAC database connects to ASM

    I have recently installed Grid Clusterware 11.2.0.1 on an IBM PSeries server running AIX. Today, I installed the Oracle database software, version 11.2.0.1, but when I try to start the database, I get an error message: ORA-01031: insufficient privileges.
    I can log in to the ASM instance using "sys as sysdba" but when I try to log in to ASM as "sys as sysasm" then I get the same error message.
    Obviously I have configured something incorrectly. I would appreciate any help locating the proper documentation so I can correct my errors. All advice is greatly appreciated. Thank you.

    Hi,
    Regarding the AIX group "asmdba": we do not have separate administrators to handle the administration of ASM in 11g. I read in the documentation (Grid Infrastructure Installation Guide) that if separate administrators are not wanted for ASM and the database(s), then the "asmdba" group is optional. Would you recommend creating this group anyway?
    I agree that creating the OS group asmdba is mandatory only if you are using a different user for the GUI install.
    But it's a question of how the environment is organized. Each group designates different kinds of permissions and different roles.
    So, although ASM comes built into the Oracle Database (actually, in 11.2 and higher Oracle has created a new layer called Grid Infrastructure), Clusterware, ASM and the Database are completely different products for different purposes. With the evolution of these products, I believe the tendency is for each to have its own installation and administration. Yet they all work together.
    Creating the asmdba group will not impact the current environment; it only helps keep the environment more organized for the future. Look on the bright side: if one day you need the asmdba group, you will not need to re-configure the environment.
    Correcting my previous post: the group that has permission to connect with the SYSASM privilege is asmadmin, not asmdba.
    Cheers,
    Levi Pereira
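    A quick sanity check, assuming asmadmin was chosen as the OSASM group at Grid install time and the OS user belongs to it (a sketch; run from the Grid Infrastructure home with ORACLE_SID set to the ASM instance):

    CONNECT / AS SYSASM
    SELECT instance_name, status FROM v$instance;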

  • Question about connection to firebird database

    Hi I'm a new JDBC programmer. I picked firebird because the application I wanted to make needed a server-less database. The database was easy to set up (I'm using the classic version of firebird) with isql, and I am now trying to access it through the JDBC. Here is my code, I know the code may be crude looking (I am new to it).
        try {
            Class.forName("org.firebirdsql.jdbc.FBDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:firebirdsql:localhost/3050:C:/test.gdb", "sysdba", "masterkey");
        } catch (ClassNotFoundException e) {
            System.out.println(e.getMessage());
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
    With this code, I received the following exception...:
    java.lang.UnsupportedClassVersionError: Bad version number in .class file
         at java.lang.ClassLoader.defineClass1(Native Method)
         at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
         at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
         at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
         at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
         at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:164)
         at DatabaseBoundary.hire(DatabaseBoundary.java:16)
         at DatabaseBoundary.<init>(DatabaseBoundary.java:9)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
         at bluej.runtime.ExecServer$3.run(ExecServer.java:808)
    I have the most up-to-date version of the JDK, and the most up-to-date version of Firebird (as of March 24, 2008 9:48 PM EST). I don't know why I am getting the UnsupportedClassVersionError; all help would be greatly appreciated. :)

    Well, apparently you do NOT have the correct version of the Java runtime set up as the one you're actually calling, as that's the only thing that can cause this error.
    Most likely there are fragments of an older installation left floating around the system in directories that are higher in the path than your JDK.
    Check, for example, your windows/system32 directory for java*.exe and, if any are there, remove them.
    After that, set up your system path correctly to point to JAVA_HOME/bin (you should also define JAVA_HOME in your environment variables to point to the JDK installation directory).
    The Firebird driver is, I think, compiled against JDK 1.4, maybe 1.3, so the error is almost certainly caused by your own code being compiled against 1.6 and your attempting to run it (unwittingly) against 1.5.

  • Two questions about TimesTen In-Memory Database

    1.
    In TimesTen, there are two methods to do the replication:
    a. create active standby pair
    b. create replication tt element ...
    Which one performs better? And when a cache group exists, only the active standby pair can be used, is that right?
    2.
    If I just want to store the data in memory, how do I stop changes to the data from being logged to disk?

    Hi,
    Regarding your queries
    1) Each replication type has its own benefits. Active standby is used mostly for high-availability systems, while the classic replication scheme is mostly used for load sharing and distributed workloads (see the sketch after this reply).
    Cache groups work well with an active standby pair. Using cache groups with classic replication schemes is not encouraged.
    2) Disabling logging was provided in TimesTen 7.x (the DSN attribute LOGGING) but is no longer allowed in the latest TimesTen releases such as 11.2.1.x or 11.2.2.x.
    Regards
    Rajesh
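    For reference, a minimal sketch of the two schemes discussed above (DSN, host and table names are made up):

    -- a. Active standby pair
    CREATE ACTIVE STANDBY PAIR mydsn ON "host1", mydsn ON "host2";

    -- b. Classic replication scheme
    CREATE REPLICATION myowner.myscheme
      ELEMENT e1 TABLE myowner.mytable
        MASTER mydsn ON "host1"
        SUBSCRIBER mydsn ON "host2";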

  • Question about DBCA generate script o create RAC database 2 node cluster

    Question about creating a two-node RAC database 11g after installing and configuring 11g Clusterware. I've used DBCA to generate a script to create a RAC database. I've set the
    environment variable ORACLE_SID=RAC, and the creation script creates instances RAC1 and RAC2. My understanding is that each instance will run on one node, but there should only be one database with the name 'RAC'. Please advise.

    You are getting your terminology mixed up.
    You only have one database. Take a look, there are one set of datafiles on shared storage.
    You have 2 instances which are accessing one database.
    Database name is RAC. Instance names are RAC1, RAC2, etc, etc.
    Also, if you look at the listener configuration, and if your tnsnames is set up properly, then connecting to RAC will connect you to either one of the instances, whereas connecting to RAC1 will connect you to that specific instance.
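    A quick way to see this for yourself (a sketch; run it from either node):

    SELECT name FROM v$database;                        -- one database: RAC
    SELECT instance_name, host_name FROM gv$instance;   -- two instances: RAC1, RAC2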

  • Few questions about upgrading database

    Hi everyone,
    greetings of the day
    I have a few questions about upgrading a database.
    In export and import mode:
    1. Can we have a new name for the target database?
    2. I think we need to create tablespaces; do we need to create users as well?
    3. If we are upgrading from a 9i to a 10g database, is there any activity to be performed other than creating a new SYSAUX tablespace?
    4. How do we get a consistent export?
    In DBUA mode (on the same machine only):
    1. Do we need to shutdown / startup restrict the database?
    2. How can we move the files to a new location?
    3. Can we change the db_name of the database?
    4. Can we still use the old database as well?
    In a manual upgrade using the catupgrd scripts:
    1. Can we rename the db_name?
    2. Can we still use the old database as well?
    3. How do we move the database files to a new location?
    4. Can we perform this kind of upgrade on a different server?
    5. When we do STARTUP UPGRADE in the new home, how does it identify the old database in order to upgrade it?
    Thanks

    udayjampani wrote:
    Hi everyone,
    greetings of the day
    Pl post details of source and target database versions, along with your OS details.
    I have a few questions about upgrading a database.
    In export and import mode:
    1. Can we have a new name for the target database?
    Yes.
    2. Do we need to create users as well as tablespaces?
    You can create users, but it is not necessary. You need to pre-create tablespaces only if their characteristics/locations on the target are different than on the source.
    3. If we are upgrading from 9i to 10g, any activity to be performed other than this?
    Not that I am aware of - see the steps in the Upgrade Guide - http://docs.oracle.com/cd/B19306_01/server.102/b14238/expimp.htm
    4. How do we get a consistent export?
    Ensure the database is started in restricted mode, so users will not be able to access the database during the export.
    In DBUA mode (on the same machine only):
    1. Do we need to shutdown / startup restrict the database?
    No - DBUA will do this automatically for you.
    2. How can we move the files to a new location?
    After the upgrade you can move the datafiles wherever you want - use the ALTER DATABASE RENAME FILE (http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_1004.htm#i2082829) command; see the sketch after this reply.
    3. Can we change the db_name of the database?
    I do not believe this is possible with DBUA.
    4. Can we use the old database as well?
    No - the database will be upgraded by DBUA - there is no "old" database.
    In a manual upgrade using the catupgrd scripts:
    1. Can we rename the db_name?
    Yes.
    2. Can we use the old database as well?
    No - the scripts will upgrade the database - there is no "old" database.
    3. How do we move the database files to a new location?
    See above.
    4. Can we perform this kind of upgrade on a different server?
    Pl elaborate on what you mean by this. You can copy the existing database to a different server (assuming a compatible OS) and upgrade it there.
    Thanks
    HTH
    Srini
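    For completeness, a sketch of the post-upgrade datafile move mentioned above (paths are made up; copy or move the file at the OS level first, with the database mounted or the tablespace offline):

    ALTER DATABASE RENAME FILE '/u01/app/oradata/olddir/users01.dbf'
                            TO '/u02/app/oradata/newdir/users01.dbf';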

  • Question about import database pk, fk

    Hi
    I want to ask a question about an imported database. I imported a database and noticed that the tables don't have primary keys or foreign keys. They have no relations between each other. Why could this have occurred?
    Is it something about the import, or something else?

    Yeah, I am surprised none of the tables have a PK or FK...
    But I want to show you http://i51.tinypic.com/2hedc9d.jpg
    What are the blue shapes in there?
    Could they be foreign keys?
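    One way to check what actually came across with the import (a sketch; SCOTT is a placeholder for the imported schema):

    SELECT constraint_type, COUNT(*)
    FROM dba_constraints
    WHERE owner = 'SCOTT'
    AND constraint_type IN ('P', 'R')   -- P = primary key, R = foreign key
    GROUP BY constraint_type;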

  • A Question about LV Database Connectivi​ty Toolkit

    Hello everyone!
    I have a question about using the LabVIEW Database Connectivity Toolkit 1.0.2 that eagerly needs your help. I don't know how to programmatically create a new Microsoft Access (.mdb) file (not a new table in an existing database) using the LabVIEW Database Connectivity Toolkit 1.0.2. As you know, we can usually set up the connection by creating a Universal Data Link (.udl) file and passing its path to the DB Tools Open Connec VI in the toolkit. However, searching for a table within an existing database containing a great many tables is a toilsome job. If I want to log my acquisition data for each measurement run into a new database file whose name is the date-and-time string, how do I do that? I am sure someone here can resolve my question; thanks very much for your help.

    I don't know what your real design considerations are here, but from what I understand from your post, this is a really bad way to go about the process of logging data -- IF you want to be able to do significant ad hoc or stored-procedure analyses after it has been collected.  Using separate MDB files for data that ONLY differs by one field (namely the date) is not the most efficient way to organize it.  What would be much more efficient would be a joined table including the date and a reference ID of some sort for the various measurements that were done.  That way your stored procedures for looking at ALL measurements of type X would be very simple, going across ALL dates.  Making such a comparison across multiple MDB files is a much more challenging process, AND doing the original data collection in that way doesn't really gain you anything.
    Generally, if something is difficult to do in the DCT (Database Connectivity Toolkit) it's because it's a "not good thing" to do within MDBs.  I know that others probably disagree with that, but I've worked with Access since its initial release and with other RDBMSs prior to that, both through compiled tools, Unix scripts, etc.  You may, of course, still choose to proceed in the way you've described, and that may work excellently for you.

  • Few basic questions about database administration

    Hello,
    I have a few basic questions about database administration.
    1. I switched one of my Oracle instances to archivelog mode. I just cannot locate the archived log files on my Windows system. The %ora_home%/ora92/database/archive directory is desperately empty...
    2. What is the tools01.dbf datafile used for?
    3. What is the undotbs01.dbf datafile used for?
    Thanks in advance,
    Julien.

    1. The archive log location needs to be specified in your init.ora file (see the sketch after this list for how to check where the instance is writing them). By default, Oracle will place the archived log files in either ORACLE_HOME/dbs or ORACLE_HOME/database.
    2. The tools01.dbf file belongs to the TOOLS tablespace, which should be set as the default tablespace for SYSTEM. Its primary purpose is to hold Oracle Forms and Reports database objects; however, it can also be used for holding other non-SYS database objects such as PERFSTAT (Statspack) or other third-party database schemas, e.g. Quest's SQLab.
    3. The undotbs01.dbf file belongs to the undo tablespace.
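    A quick way to see where the instance is actually writing archived logs (a sketch, from SQL*Plus as a privileged user):

    ARCHIVE LOG LIST
    SHOW PARAMETER log_archive_dest
    SELECT dest_name, destination, status
    FROM v$archive_dest
    WHERE status = 'VALID';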

  • [IPCC Express] Questions about database fields

    Hello,
    I'm developing some reports for Cisco IPCC Express.
    During the development, a lot of questions came out related to the definition of some fields:
    - In ContactCallDetail, what is the difference between transferring and redirecting a call?
    - In ContactCallDetail, what is the difference between transfer = 1 and contactType = 5? Both are related to transferring a call.
    I'll appreciate any help you can give me.
    Best Regards,
    Filipe Cruz - Portugal
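    In case it helps while digging through the schema, a sketch (column names are taken from the question above; I don't have the IPCC Express schema at hand, so treat this as an assumption) to see how those flags actually combine in your data:

    SELECT transfer, contactType, COUNT(*) AS calls
    FROM ContactCallDetail
    GROUP BY transfer, contactType
    ORDER BY transfer, contactType;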

    Where can I get that Enterprise Edition?
    here (click)
    Does the Express Edition have a web-based EM?
    Read the doc about the Express Edition: Oracle Database 10g Express Edition
    Nicolas.

  • Basic question about Flashback Database

    Hi,
    I have a very generic question about using Flashback Database.
    On my testing systems, for performance testing and simulation purposes, I want to create a guaranteed restore point so I can test impact on batch when code change releases are done, before deployment in production.  My confusion is with respect to redo logs, as summarized in the questions below:
    1. Is it possible to change redo log files, when a guaranteed restore point has been configured?
    2. If yes, will the Flashback to restore point, also change the size of the redo logs?
    I could not find anything in the docs about this....hence my question....
    Appreciate your time taken in responding to these questions....
    Regards.

    Hi,
    donneskold wrote:
    1. Is it possible to change redo log files, when a guaranteed restore point has been configured?
    1) Yes, it is possible.
    2. If yes, will the Flashback to restore point also change the size of the redo logs?
    2) It will not change the size of the redo log files. A redo log file cannot be resized; you add new groups of the desired size and drop the old ones.
    Thank you
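    For reference, a sketch of the guaranteed-restore-point cycle described above (the restore point name is made up):

    CREATE RESTORE POINT before_release GUARANTEE FLASHBACK DATABASE;
    -- ... run the batch / release test ...
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_release;
    ALTER DATABASE OPEN RESETLOGS;
    DROP RESTORE POINT before_release;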

  • Questions about free Download Oracle 10g, Database and Developer suite

    Hi everyone, got some questions..
    1) Is it possible to download Oracle 10g and the Developer Suite for free? Is it a 30-day trial license or something like that?
    2) On Windows systems, what are the minimum requirements? For example, is a Pentium 4 with 512 MB RAM and Windows XP Home Edition OK?
    3) Should I download the Standard Edition? Personal Edition?
    4) If I am trying to update my Oracle Developer knowledge (I was a developer in 1999 with Oracle 7.3 and Developer 2000), what products do I have to install? Oracle DB 10g, Developer Suite, Application Server too? What else?
    Thanks guys!
    J.

    My answer you could find here Questions about free download Oracle 10g, Developer Suite

  • A question about the impact of SQL*PLUS SERVEROUTPUT option on v$sql

    Hello everybody,
    SQL> SELECT * FROM v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0  Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
    OS : Fedora Core 17 (x86_64), Kernel 3.6.6-1.fc17.x86_64
    I would like to ask a question about the SQL*Plus SET SERVEROUTPUT ON/OFF option and its impact on queries on views such as v$sql and v$session. Here is the problem.
    Actually I define three variables in SQL*Plus in order to store sid, serial# and prev_sql_id columns from v$session in order to be able to use them later, several times in different other queries, while I'm still working in the current session.
    So, here is how I proceed
    SET SERVEROUTPUT ON;  -- I often activate this option as the first line of almost all of my SQL-PL/SQL script files
    SET SQLBLANKLINES ON;
    VARIABLE mysid NUMBER
    VARIABLE myserial# NUMBER;
    VARIABLE saved_sql_id VARCHAR2(13);
    -- So first I store sid and serial# for the current session
    BEGIN
        SELECT sid, serial# INTO :mysid, :myserial#
        FROM v$session
        WHERE audsid = SYS_CONTEXT('UserEnv', 'SessionId');
    END;
    PL/SQL procedure successfully completed.
    -- Just check to see the result
    SQL> SELECT :mysid, :myserial# FROM DUAL;
        :MYSID :MYSERIAL#
           129   1067
    SQL>
    Now, let's say that I want to run the following query as the last SQL statement executed within my current session:
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    According to the Oracle® Database Reference 11g Release 2 (11.2) description of v$session
    http://docs.oracle.com/cd/E11882_01/server.112/e25513/dynviews_3016.htm#REFRN30223
    the column prev_sql_id contains the sql_id of the last SQL statement executed for the given sid and serial#, which in my example will be the above-mentioned SELECT on the employees table. As a result, right after the SELECT statement on the employees table I run the following:
    BEGIN
        SELECT prev_sql_id INTO :saved_sql_id
        FROM v$session
        WHERE sid = :mysid AND serial# = :myserial#;
    END;
    PL/SQL procedure successfully completed.
    SQL> SELECT :saved_sql_id FROM DUAL;
    :SAVED_SQL_ID
    9babjv8yq8ru3
    SQL>
    Having the value of sql_id, I should be able to find all information about the cursor(s) for my SELECT statement, including its sql_text value, in v$sql. Yet here is what I get when I query v$sql with the stored sql_id:
    SELECT child_number, sql_id, sql_text
    FROM v$sql
    WHERE sql_id = :saved_sql_id;
    CHILD_NUMBER   SQL_ID          SQL_TEXT
    0              9babjv8yq8ru3    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    Therefore, instead of
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    for the value of sql_text I get the following:
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    which is of course not what I was expecting to find in v$sql for the given sql_id.
    After a bit googling I found the following thread on the OTN forum where it had been suggested (well I think maybe not exactly for the same problem) to turn off SERVEROUTPUT.
    Problem with dbms_xplan.display_cursor
    This was precisely what I did
    SET SERVEROUTPUT OFF
    After that I repeated the whole procedure, and this time everything worked as expected. I checked the SQL*Plus documentation for SERVEROUTPUT
    and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql and v$session
    are queried. I don't really understand the link between the two, or rather, why one has an impact on the other.
    Could anyone kindly clarify?
    thanks in advance,
    Regards,
    Dariyoosh

    > and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql and v$session
    > are queried. I don't really understand the link between the two, or rather, why one has an impact on the other.
    Hi Dariyoosh,
    SET SERVEROUTPUT ON has the effect of executing dbms_output.get_lines after each and every statement, not only those related to system views.
    Here is what Tom Kyte explains on this page:
    Now, sqlplus sees this functionality and says "hey, would not it be nice for me to dump this buffer to screen for the user?". So, they added the SQLPlus command "set serveroutput on" which does two things
    1) it tells SQLPLUS you would like it to execute dbms_output.get_lines after each and every statement. You would like it to do this network round trip after each call. You would like this extra overhead to take place (think of an install script with hundreds/thousands of statements to be executed -- perhaps, just perhaps you don't want this extra call after every call)
    2) SQLPLUS automatically calls the dbms_output API "enable" to turn on the buffering that happens in the package.
    Regards.
    Al
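    For what it's worth, a minimal sketch of the workaround: keep SERVEROUTPUT off while sampling the previous statement, so the implicit DBMS_OUTPUT.GET_LINES call does not become the "previous" SQL.

    SET SERVEROUTPUT OFF
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;

    SELECT prev_sql_id
    FROM v$session
    WHERE sid = SYS_CONTEXT('USERENV', 'SID');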
