DB->verify of standalone database file

I need help with database verification for a Berkeley DB 4.5.20 CDS application running on VxWorks.
I create an environment and create ~30 databases within that environment, all of which live in one single file. I recently discovered that, because I create a set of database file handles at startup that are shared between several VxWorks tasks, I need to specify the DB_THREAD flag on both the environment open and the database open calls. This works great, and our concurrency problems were eliminated.
The environment is configured with the following flags:
(DB_INIT_CDB | DB_INIT_MPOOL | DB_CREATE | DB_SYSTEM_MEM | DB_THREAD)
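For reference, the startup sequence is roughly the following (a trimmed sketch rather than the exact code; DB_STARCHOICE_HOME and the per-database name "one_of_the_dbs" are placeholders, and error handling is reduced to goto err):

DB_ENV *dbenv_starchoice;
DB *dbp;
int ret;

/* Shared environment, opened once at startup. */
if ((ret = db_env_create(&dbenv_starchoice, 0)) != 0)
    goto err;
if ((ret = dbenv_starchoice->open(dbenv_starchoice, DB_STARCHOICE_HOME,
    DB_INIT_CDB | DB_INIT_MPOOL | DB_CREATE | DB_SYSTEM_MEM | DB_THREAD, 0)) != 0)
    goto err;

/* Each of the ~30 databases lives in the same physical file and is
 * opened with DB_THREAD because the handle is shared between tasks. */
if ((ret = db_create(&dbp, dbenv_starchoice, 0)) != 0)
    goto err;
if ((ret = dbp->open(dbp, NULL, DB_STARCHOICE_FILENAME, "one_of_the_dbs",
    DB_BTREE, DB_CREATE | DB_THREAD, 0664)) != 0)
    goto err;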
However, now my database verification is no longer working correctly.
Here is my original code, where I create the verification database handle within the same database environment in which I later open the databases:
if ((ret = db_create(&dbp_verify, dbenv_starchoice, 0)) != 0)
    goto err;
if ((ret = dbp_verify->verify(dbp_verify, DB_STARCHOICE_FILENAME, NULL, NULL, 0)) != 0)
    if ((ret = db_destroy()) != 0)
        goto err;
Until I set the DB_THREAD flag for the environment and databases, this code worked just fine. However, with the DB_THREAD flag set, the DB->verify routine never returns.
So, I modified the code so that I create the dbp_verify database handle outside of the database environment by setting the second parameter of db_create to NULL. Now when the verification runs I get the following error:
S_iosLib_DRIVER_GLUT
I suspect that this is because it has no way of knowing where to create the temporary files needed for the verify, whereas previously it used the "temp" directory I specified when configuring the database environment.
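In other words, I suspect the standalone verify handle needs its own small private environment that at least has a temp directory configured, something along these lines (an untested sketch; the home and temp paths are placeholders):

DB_ENV *venv;
DB *dbp_verify;
int ret;

/* Private, single-threaded environment used only for verification. */
if ((ret = db_env_create(&venv, 0)) != 0)
    goto err;
if ((ret = venv->set_tmp_dir(venv, "/tffs0/tmp")) != 0)   /* placeholder path */
    goto err;
if ((ret = venv->open(venv, DB_STARCHOICE_HOME,
    DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0)) != 0)
    goto err;

if ((ret = db_create(&dbp_verify, venv, 0)) != 0)
    goto err;
/* The handle may not be reused after DB->verify, so there is no separate
 * dbp_verify->close() here. */
ret = dbp_verify->verify(dbp_verify, DB_STARCHOICE_FILENAME, NULL, NULL, 0);
(void)venv->close(venv, 0);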
Can anyone tell me how to correctly configure the database handle I use for the verification, so that I can verify the file as a standalone? Or tell me why, when the environment is configured with DB_THREAD, the verification never returns?
Thanks!

I have used two different Berkeley DB builds. With the first one, DB->verify still had not returned after letting my box run overnight. When I did a task trace in the morning on the task that had called DB->verify, I got the following:
0x80053670 vxTaskEntry +0x14 : Neptune::OsTask::entryHelper(Neptune::OsTask*) ()
0x830f6a34 Neptune::OsTask::entryHelper(Neptune::OsTask*)+0x80 : Neptune::OsHelperTask::entry() ()
0x830f4a10 Neptune::OsHelperTask::entry()+0x40 : OpenTVTask::entry() ()
0x83defba8 OpenTVTask::entry()+0x80 : o_db_vsk_init ()
0x83e3df08 o_db_vsk_init+0x14c: db_startup ()
0x83e1bfac db_startup +0x44 : db_database_init ()
0x83e1a1a0 db_database_init+0x94 : 0x83a28a18 ()
0x83a28a40 __db_verify_pp+0x20 : __db_verify_internal ()
0x83a28b40 __db_verify_internal+0xf4 : 0x83a28ca8 ()
0x83a28f48 __db_verify_internal+0x4fc: 0x83a299fc ()
0x83a29b18 __db_verify_internal+0x10cc: 0x83a2a7bc ()
0x83a2a9b8 __db_vrfy_meta+0x6c0: __bam_vrfy_structure ()
0x839fdf38 __bam_vrfy_structure+0x1a0: __bam_vrfy_subtree ()
0x839fe918 __bam_vrfy_subtree+0x870: __bam_vrfy_subtree ()
0x839fe320 __bam_vrfy_subtree+0x278: __db_vrfy_ovfl_structure ()
0x83a2d9c4 __db_vrfy_ovfl_structure+0x138: __db_vrfy_pgset_inc ()
0x83a2cde4 __db_vrfy_pgset_inc+0xc4 : __db_put ()
0x83a058d0 __db_put +0x40 : __db_cursor ()
0x83a13360 __db_cursor +0x124: __lock_get ()
0x83a59c98 __lock_get +0xa0 : __lock_get_internal ()
0x83a5a84c __lock_get_internal+0xb5c: __db_tas_mutex_lock ()
0x83a7b5d8 __db_tas_mutex_lock+0x114: 0x83a7d994 ()
0x83a7da50 __os_sleep +0xb0 : select ()
0x80110ef0 select +0x578: semTake ()
0x801996b4 semTake +0x160: semBTake ()
value = 0 = 0x0
Then we discovered there was a known bug in VxWorks involving select(), which is used by __os_sleep(). So one of our developers modified __os_sleep(), based on both Wind River's and Oracle's recommendations, so that it no longer uses select().
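For what it's worth, the replacement is roughly along the following lines (a simplified, hypothetical sketch rather than our exact patch; it assumes the 4.x __os_sleep(dbenv, secs, usecs) signature and substitutes the VxWorks taskDelay()/sysClkRateGet() calls for select()):

#include <taskLib.h>
#include <sysLib.h>

/* Sleep for the requested interval without going through select(). */
int
__os_sleep(DB_ENV *dbenv, u_long secs, u_long usecs)
{
    int rate = sysClkRateGet();        /* system clock ticks per second */
    u_long ticks = secs * rate + (usecs * rate) / 1000000;

    /* dbenv is unused in this sketch. */
    if (ticks == 0)
        ticks = 1;                     /* sleep for at least one tick */
    (void)taskDelay((int)ticks);
    return (0);
}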
When I used this new build, DB->verify still does not return, but when I do a stack trace on the task I don't get anything: tt simply returns value = 0 = 0x0. It is as if the task just stopped running, though its status is READY, not PEND.
Does this information help?

Similar Messages

  • OFS add standalone database to group error

    I was installing OFS 3.3.3 with Oracle 10g R2 on two win2k3sp1 servers. The standalone database instance could be started up on each node respectively without any issue.
    The standalone database verification ran successfully. (I failed over the data files to the second node and verified that I can start and connect to the same database instance; maybe I missed some configuration in the registry or somewhere else on node2?)
    But when I tried to add the resource to the cluster group, I got following error:
    Versions: client = 3.3.3 server = 3.3.3 OS =
    Operation: Adding resource "OLTP.WORLD" to group "OLTP"
    Starting Time: Dec 10, 2006 22:02:09
    Elapsed Time: 0 minutes, 25 seconds
    1 22:02:09 Starting clusterwide operation
    2 22:02:10 FS-10370: Adding the resource OLTP.WORLD to group OLTP
    3 22:02:10 FS-10371: ORAFS01 : Performing initialization processing
    4 22:02:11 FS-10371: ORAFS02 : Performing initialization processing
    5 22:02:14 FS-10372: ORAFS01 : Gathering resource owner information
    6 22:02:14 FS-10372: ORAFS02 : Gathering resource owner information
    7 22:02:14 FS-10373: ORAFS01 : Determining owner node of resource OLTP.WORLD
    8 22:02:14 FS-10374: ORAFS01 : Gathering cluster information needed to perform the specified operation
    9 22:02:14 FS-10374: ORAFS02 : Gathering cluster information needed to perform the specified operation
    10 22:02:14 FS-10375: ORAFS01 : Analyzing cluster information needed to perform the specified operation
    11 22:02:15 >>> FS-10652: ORAFS01 has Oracle Database version 10.2.0 installed in ORADB10G_HOME1
    12 22:02:15 >>> FS-10652: ORAFS02 has Oracle Database version 10.2.0 installed in ORADB10G_HOME1
    13 22:02:15 FS-10376: ORAFS01 : Starting configuration of resource OLTP.WORLD
    14 22:02:15 FS-10378: ORAFS01 : Preparing for configuration of resource OLTP.WORLD
    15 22:02:15 FS-10380: ORAFS01 : Configuring virtual server information for resource OLTP.WORLD
    16 22:02:15 > FS-10496: Generating the Oracle Net migration plan for OLTP.WORLD
    17 22:02:16 > FS-10490: Configuring the Oracle Net listener for OLTP.WORLD
    18 22:02:16 >> FS-10600: Oracle Net configuration file updated: D:\ORACLE\PRODUCT\10.2.0\DB_1\NETWORK\ADMIN\LISTENER.ORA
    19 22:02:16 >> FS-10606: Listener configuration updated in database parameter file: m:\spfileoltp.ora
    20 22:02:20 >> FS-10605: Oracle Net listener Fsloltpd created
    21 22:02:21 >> FS-10602: Oracle Net listener LISTENER restarted
    22 22:02:21 > FS-10491: Configuring the Oracle Net service name for OLTP.WORLD
    23 22:02:21 >> FS-10600: Oracle Net configuration file updated: D:\ORACLE\PRODUCT\10.2.0\DB_1\NETWORK\ADMIN\TNSNAMES.ORA
    24 22:02:21 FS-10381: ORAFS01 : Creating the resource information for resource OLTP.WORLD
    25 22:02:21 > FS-10424: Checking whether the database OLTP.WORLD is online
    26 22:02:31 ** ERROR : ORA-01506: missing or illegal database name
    27 22:02:31 ** ERROR : FS-10778: The Oracle Database resource provider failed to configure the cluster resource OLTP.WORLD
    28 22:02:31 ** ERROR : FS-10890: Oracle Services for MSCS failed during the add operation
    29 22:02:31 ** ERROR : FS-10497: Starting clusterwide rollback of the operation
    30 22:02:31 FS-10488: ORAFS01 : Starting rollback of operation
    31 22:02:31 > FS-10090: Rolling back Oracle Net changes on node ORAFS01
    32 22:02:34 FS-10489: ORAFS01 : Completed rollback of operation
    33 22:02:34 ** ERROR : FS-10495: Clusterwide rollback of the operation has been completed
    34 22:02:34 Please check your Windows Application log using the Event Viewer for any additional errors
    35 22:02:34 The clusterwide operation failed !
    DB_NAME is already set in the spfile, so why do I still get the "ORA-01506: missing or illegal database name" error?
    How and where can I find more detailed information about the error?
    thanks, ezlee

    Hi,
    Did you manage to get past this error?
    Did you set DB_NAME in the init.ora file as per the message for ORA-01506?
    Regards,
    Ranjit

  • HELP! Oracle FailSafe - Listener fails when adding standalone database

    Well, I have a cluster of two nodes with the following specs:
    (1) an Oracle 10g database each
    (2) Microsoft Cluster Service (MSCS)
    (3) Windows Server 2003 64-bit edition
    (4) Intel Itanium Processor
    (5) Oracle Failsafe 3.3.3 for Windows 2003 64-bit
    The 64-bit Oracle Failsafe doesn't come with Oracle Failsafe Manager, so I used a Failsafe Manager remotely from another clustered server. The version is also 3.3.3, but it's running on a Windows 2000 Advanced Server.
    Well, after connecting to the 64-bit cluster, I added the standalone database to a Cluster Group. There are two cluster groups on the Server:
    (1)"Cluster Group" (the default cluster group created by MSCS); containing an IP address, a network name, Oracle Cluster Services, and the Quorum hard drive.
    (1)"ORACLE DB" A cluster gropu I created for the database; containing another IP address, a network name for the IP address, and every hard drive volumes of the database files.
    The database currently resides on Node 2 (because I created it there). I have successfully verified the database (using the "Verify Standalone Database" option). But when I added the database into the cluster group ORACLE DB, it failed with the following message:
    23 20:48:48 ** ERROR : FS-10066: Failed to start Windows service OracleOraDb10g_home1TNSListener for the Oracle Net listener
    When I opened the Windows Event Viewer, the Listener Service apparently had started, but it soon "terminated unexpectedly": the event log first shows the service starting, and then, after it had been in the running state for only a very short time, shows it terminating abruptly.
    What happened? What should I do? What is the problem? Many thanks!
    PS: the following are the messages from both Verifying Standalone Database and Adding Standalone Database. The verification was successful, but I just failed to add the database:
    >
    Versions: client = 3.3.3 server = 3.3.3 OS =
    Operation: Verifying standalone database "PAYMENT"
    Starting Time: May 11, 2005 19:50:11
    Elapsed Time: 0 minutes, 4 seconds
    1 19:50:11 Starting clusterwide operation
    2 19:50:11 FS-10915: POSDB2 : Starting the verification of standalone resource PAYMENT
    3 19:50:11 FS-10371: POSDB2 : Performing initialization processing
    4 19:50:11 FS-10371: POSDB1 : Performing initialization processing
    5 19:50:12 FS-10372: POSDB2 : Gathering resource owner information
    6 19:50:12 FS-10372: POSDB1 : Gathering resource owner information
    7 19:50:12 FS-10373: POSDB2 : Determining owner node of resource PAYMENT
    8 19:50:12 FS-10374: POSDB2 : Gathering cluster information needed to perform the specified operation
    9 19:50:12 FS-10374: POSDB1 : Gathering cluster information needed to perform the specified operation
    10 19:50:12 FS-10375: POSDB2 : Analyzing cluster information needed to perform the specified operation
    11 19:50:12 FS-10378: POSDB2 : Preparing for configuration of resource PAYMENT
    12 19:50:12 ** WARNING : FS-10247: The database parameter file H:\PAYMENT\admin\pfile\pfilePAYMENT.ora specified for this operation will override the parameter file value in the registry
    13 19:50:12 ** WARNING : FS-10248: At registry key SOFTWARE\ORACLE\KEY_OraDb10g_home1, value of ORA_PAYMENT_PFILE is H:\PAYMENT\admin\pfile
    14 19:50:12 FS-10916: POSDB2 : Verification of the standalone resource
    15 19:50:12 > FS-10341: Starting verification of database PAYMENT
    16 19:50:13 > FS-10342: Starting verification of Oracle Net configuration information for database PAYMENT
    17 19:50:13 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
    18 19:50:13 > FS-10491: Configuring the Oracle Net service name for PAYMENT
    19 19:50:13 > FS-10343: Starting verification of database instance information for database PAYMENT
    20 19:50:13 >> FS-10347: Checking the state of database PAYMENT
    21 19:50:13 >> FS-10425: Querying the disks used by the database PAYMENT
    22 19:50:15 > FS-10344: Starting verification of Oracle Intelligent Agent for database PAYMENT
    23 19:50:15 > FS-10345: Verification of standalone database PAYMENT completed successfully
    24 19:50:15 FS-10917: POSDB2 : Standalone resource PAYMENT was verified successfully
    25 19:50:15 FS-10378: POSDB1 : Preparing for configuration of resource PAYMENT
    26 19:50:15 FS-10916: POSDB1 : Verification of the standalone resource
    27 19:50:15 > FS-10341: Starting verification of database PAYMENT
    28 19:50:15 > FS-10342: Starting verification of Oracle Net configuration information for database PAYMENT
    29 19:50:15 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
    30 19:50:15 > FS-10491: Configuring the Oracle Net service name for PAYMENT
    31 19:50:15 > FS-10343: Starting verification of database instance information for database PAYMENT
    32 19:50:15 > FS-10344: Starting verification of Oracle Intelligent Agent for database PAYMENT
    33 19:50:15 > FS-10345: Verification of standalone database PAYMENT completed successfully
    34 19:50:15 FS-10917: POSDB1 : Standalone resource PAYMENT was verified successfully
    35 19:50:15 The clusterwide operation completed successfully, however, the server reported some warnings.
    >
    Versions: client = 3.3.3 server = 3.3.3 OS =
    Operation: Adding resource "PAYMENT" to group "ORACLE DATABASE"
    Starting Time: May 11, 2005 20:48:43
    Elapsed Time: 0 minutes, 7 seconds
    1 20:48:43 Starting clusterwide operation
    2 20:48:44 FS-10370: Adding the resource PAYMENT to group ORACLE DATABASE
    3 20:48:44 FS-10371: POSDB2 : Performing initialization processing
    4 20:48:44 FS-10371: POSDB1 : Performing initialization processing
    5 20:48:45 FS-10372: POSDB2 : Gathering resource owner information
    6 20:48:45 FS-10372: POSDB1 : Gathering resource owner information
    7 20:48:45 FS-10373: POSDB2 : Determining owner node of resource PAYMENT
    8 20:48:45 FS-10374: POSDB2 : Gathering cluster information needed to perform the specified operation
    9 20:48:45 FS-10374: POSDB1 : Gathering cluster information needed to perform the specified operation
    10 20:48:45 FS-10375: POSDB2 : Analyzing cluster information needed to perform the specified operation
    11 20:48:45 >>> FS-10652: POSDB2 has Oracle Database version 10.1.0 installed in ORADB10G_HOME1
    12 20:48:45 >>> FS-10652: POSDB1 has Oracle Database version 10.1.0 installed in ORADB10G_HOME1
    13 20:48:45 FS-10376: POSDB2 : Starting configuration of resource PAYMENT
    14 20:48:45 FS-10378: POSDB2 : Preparing for configuration of resource PAYMENT
    15 20:48:46 FS-10380: POSDB2 : Configuring virtual server information for resource PAYMENT
    16 20:48:46 ** WARNING : FS-10247: The database parameter file H:\PAYMENT\admin\pfile\pfilePAYMENT.ora specified for this operation will override the parameter file value in the registry
    17 20:48:46 ** WARNING : FS-10248: At registry key SOFTWARE\ORACLE\KEY_OraDb10g_home1, value of ORA_PAYMENT_PFILE is H:\PAYMENT\admin\pfile
    18 20:48:46 > FS-10496: Generating the Oracle Net migration plan for PAYMENT
    19 20:48:46 > FS-10490: Configuring the Oracle Net listener for PAYMENT
    20 20:48:46 >> FS-10600: Oracle Net configuration file updated: F:\ORACLE\PRODUCT\10.1.0\DB_1\NETWORK\ADMIN\LISTENER.ORA
    21 20:48:46 >> FS-10606: Listener configuration updated in database parameter file: H:\PAYMENT\admin\pfile\pfilePAYMENT.ora
    22 20:48:47 >> FS-10605: Oracle Net listener Fslpos created
    23 20:48:48 ** ERROR : FS-10066: Failed to start Windows service OracleOraDb10g_home1TNSListener for the Oracle Net listener
    24 20:48:48 ** ERROR : FS-10065: Error trying to configure the Oracle Net listener
    25 20:48:48 > FS-10090: Rolling back Oracle Net changes on node POSDB2
    26 20:48:50 ** ERROR : FS-10784: The Oracle Database resource provider failed to configure the virtual server for resource PAYMENT
    27 20:48:50 ** ERROR : FS-10890: Oracle Services for MSCS failed during the add operation
    28 20:48:50 ** ERROR : FS-10497: Starting clusterwide rollback of the operation
    29 20:48:50 FS-10488: POSDB2 : Starting rollback of operation
    30 20:48:50 FS-10489: POSDB2 : Completed rollback of operation
    31 20:48:50 ** ERROR : FS-10495: Clusterwide rollback of the operation has been completed
    32 20:48:50 Please check your Windows Application log using the Event Viewer for any additional errors
    33 20:48:50 The clusterwide operation failed !

    umm... help? Anyone?

  • Adding Standalone Database to Group using Fail Safe fails

    Background:
    Oracle: 10.2.0.4
    OFS: 3.4.1
    Have installed Oracle Fail Safe correctly without errors.
    Ran Verify Standalone Database and got this result:
    27 10:00:04 > FS-10345: Verification of standalone database MYDB completed successfully
    28 10:00:04 FS-10917: Server01A : Standalone resource MYDB was verified successfully
    29 10:00:04 FS-10378: Server01B : Preparing for configuration of resource MYDB
    30 10:00:04 FS-10916: Server01B : Verification of the standalone resource
    31 10:00:04 > FS-10341: Starting verification of database MYDB
    32 10:00:04 > FS-10342: Starting verification of Oracle Net configuration information for database MYDB
    33 10:00:04 > FS-10496: Generating the Oracle Net migration plan for MYDB
    34 10:00:04 > FS-10491: Configuring the Oracle Net service name for MYDB
    35 10:00:04 > FS-10343: Starting verification of database instance information for database MYDB
    36 10:00:04 > FS-10344: Starting verification of Oracle Intelligent Agent for database MYDB
    37 10:00:04 > FS-10345: Verification of standalone database MYDB completed successfully
    38 10:00:04 FS-10917: Server01B : Standalone resource MYDB was verified successfully
    39 10:00:04 The clusterwide operation completed successfully, however, the server reported some warnings.
    The warnings concern some files that are on the D: drive as opposed to one of the shared drives. I made sure that the SPFILE and init.ora are on the Server01B node and in the exact same path.
    I did create a group with a virtual address and added the shared drive resources.
    When I run the "Add to Group" wizard, I have tried using both the SPFILE and the init.ora and get the same result:
    20 10:02:58 >> FS-10605: Oracle Net listener Fslmydbdb created
    21 10:02:59 >> FS-10602: Oracle Net listener LISTENER restarted
    22 10:02:59 > FS-10491: Configuring the Oracle Net service name for MYDB
    23 10:02:59 >> FS-10600: Oracle Net configuration file updated: D:\ORACLE\PRODUCT\10.2.0\DB_1\NETWORK\ADMIN\TNSNAMES.ORA
    24 10:02:59 FS-10381: Server01A : Creating the resource information for resource MYPLM
    25 10:02:59 > FS-10424: Checking whether the database MYDB is online
    26 10:03:07 ** ERROR : FS-10778: The Oracle Database resource provider failed to configure the cluster resource MYDB
    27 10:03:07 ** ERROR : FS-10890: Oracle Services for MSCS failed during the add operation
    28 10:03:07 ** ERROR : FS-10497: Starting clusterwide rollback of the operation
    29 10:03:07 FS-10488: Server01A : Starting rollback of operation
    30 10:03:07 > FS-10090: Rolling back Oracle Net changes on node Server01A
    31 10:03:10 FS-10489: Server01A : Completed rollback of operation
    32 10:03:10 ** ERROR : FS-10495: Clusterwide rollback of the operation has been completed
    33 10:03:10 Please check your Windows Application log using the Event Viewer for any additional errors
    34 10:03:10 The clusterwide operation failed !
    I have checked the README files and looked up the FS- errors, but they are very general and don't give me a clue as to why they failed.
    I have been Googling for days and found no answer yet. Are there any other logs that might shed some light on this?
    Thanks and have a great day,
    Andrew

    Additional info.....here's what I found in the D:\oracle\product\10.2.0\db_1\log\Server01a\client logs:
    clsc78.log
    Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
    2009-05-27 13:12:43.942: [  OCROSD][2272]utgdv:1:could not open registry key SOFTWARE\Oracle\ocr os error The system could not find the environment option that was entered.
    2009-05-27 13:12:43.942: [  OCRRAW][2272]proprinit: Could not open raw device
    2009-05-27 13:12:43.942: [ default][2272]a_init:7!: Backend init unsuccessful : [33]
    2009-05-27 13:12:43.942: [ CSSCLNT][2272]clsssinit: error(33 ) in OCR initialization
    2009-05-27 13:12:45.036: [  OCROSD][2272]utgdv:1:could not open registry key SOFTWARE\Oracle\ocr os error The system could not find the environment option that was entered.
    2009-05-27 13:12:45.036: [  OCRRAW][2272]proprinit: Could not open raw device
    2009-05-27 13:12:45.036: [ default][2272]a_init:7!: Backend init unsuccessful : [33]
    CSS41.log
    Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
    2009-05-27 13:12:50.833: [  OCROSD][3508]utgdv:1:could not open registry key SOFTWARE\Oracle\ocr os error The system could not find the environment option that was entered.
    2009-05-27 13:12:50.833: [  OCRRAW][3508]proprinit: Could not open raw device
    2009-05-27 13:12:50.833: [ default][3508]a_init:7!: Backend init unsuccessful : [33]
    2009-05-27 13:12:50.833: [ CSSCLNT][3508]clsssinit: error(33 ) in OCR initialization
    Related? To me it looks like it initializes everything and stops the database, but then can't start it up?
    Thanks..
    Andrew

  • Use DataSource of weblogic in a standalone Java file

    Hi,
    We have a requirement to run a java file at a scheduled time in a day using cron scheduler on our linux server.
    We need to fetch data from the database & perform some business logic in this standalone JAVA file.
    Our application has an EAR which is deployed on Weblogic 10.3 server & in our application, we are utilizing the datasource created in that domain using Hibernate.
    Now, can we create a standalone Java file & use the existing datasource (without Hibernate) instead of legacy JDBC code to connect to the DB?
    Also, do we need to keep this Java file as part of the EAR or WAR, or can we put the class file in any location outside the EAR & still utilize the datasource feature?
    Please help on the same in implementation.
    Thanks,
    Uttam

    Hi Ravi,
    I did create the datasource in the domain & put wlclient.jar in my application classpath (added the jar to the Java build path of the application), but when I ran the application, it gives the error below:
    Exception in thread "Main Thread" java.lang.NoClassDefFoundError: weblogic/kernel/KernelStatus
         at weblogic.jndi.Environment.<clinit>(Environment.java:78)
         at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
         at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
         at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
         at javax.naming.InitialContext.init(InitialContext.java:223)
         at javax.naming.InitialContext.<init>(InitialContext.java:198)
         at TestDataSource.main(TestDataSource.java:37)
    Also, I'm pasting the code here, as there is no provision to attach a file. Please let me know what's wrong here.
    Also, I've given the datasource the name testDataSource through the Admin console of the domain. Do we need to prefix the name with jdbc/ in the code, or does it also work without that?
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;
    public class TestDataSource {
        public static void main(String[] args) {
            DataSource ds = null;
            Connection conn = null;
            Statement stmt = null;
            ResultSet rs = null;
            Context ctx = null;
            try {
                // JNDI environment pointing at the WebLogic server hosting the datasource
                Hashtable ht = new Hashtable();
                ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                ht.put(Context.PROVIDER_URL, "t3://172.19.244.180:7001");
                System.out.println("HERE");
                ctx = new InitialContext(ht);
                // Look up the datasource by its configured JNDI name
                ds = (DataSource) ctx.lookup("jdbc/testDataSource");
                System.out.println("HERE AFTER CONTEXT CREATION");
                conn = ds.getConnection();
                stmt = conn.createStatement();
                rs = stmt.executeQuery("select distinct(circle) from AIRCEL_STORE_FINDER order by 1 ");
                while (rs.next()) {
                    System.out.println("circle name " + rs.getString(1));
                }
            } catch (Exception e) {
                System.out.println("Error in Main " + e.toString());
                e.printStackTrace();
            } finally {
                try {
                    if (rs != null)
                        rs.close();
                    if (stmt != null)
                        stmt.close();
                    if (conn != null)
                        conn.close();
                    if (ds != null)
                        ds = null;
                    if (ctx != null)
                        ctx.close();
                } catch (SQLException e) {
                    System.out.println("Error in SQLException Finally " + e.toString());
                } catch (NamingException e) {
                    System.out.println("Error in NamingException Finally " + e.toString());
                }
            }
        }
    }

  • Verify that the database is created with the correct path  specification

    Dear All
    When I installed DB2 9.1 with SP4 on Windows 2003 64-bit, I used the Configuration Assistant tool, and it reports this problem:
    SQL1031N  The database directory cannot be found on the indicated file system.  SQLSTATE=58031
    Explanation:
    The system database directory or local database directory could
    not be found.  A database has not been created or it was not
    cataloged correctly. 
    The command cannot be processed. 
    User Response:
    Verify that the database is created with the correct path
    specification.  The Catalog Database command has a path parameter
    which specifies the directory where the database resides. 
    sqlcode :  -1031
    sqlstate :  58031
    Thanks

    Hi Phuc,
    Could you please validate that the database TST is located on drive H:?
    If the database is there, you will find a directory with the name of your instance under the H: drive; inside this directory you must find a NODE0000 directory, and finally inside the NODE0000 directory there must be a SQL0000? directory, where ? is a number.
    You get your instance name from the environment variable DB2INSTANCE or simply by the execution of:
    db2 get instance
    If there is no SQL0000? directory at all, this means the database is located somewhere else.
    If there is a SQL0000? directory on that drive, you can perform the following:
    db2 CATALOG DB TST AS TST ON H:
    And provide the error message, if any.
    Hope this helps
    Best regards, Edgardo

  • How to run standalone java file from weblogic server on Solaris

    Hi,
    We have a requirement to run a java file at a scheduled time in a day using cron scheduler on our linux server.
    We need to fetch data from the database & perform some business logic in this standalone JAVA file.
    Our application has an EAR which is deployed on Weblogic 10.3 server & in our application, we are utilizing the datasource created in that domain.
    Now, we have created a standalone Java file & used the existing datasource of the domain (without Hibernate) with the help of the forum thread below,
    Use DataSource of weblogic in a standalone Java file
    and were able to achieve the same.
    I've bundled this Java class in the application WAR & deployed it on the same domain where the datasource is created.
    Now, how can I execute this file from anywhere on the server using the weblogic classpath?
    Please help on the same in implementation.
    Thanks,
    Uttam

    If the Java application must be stand-alone, you must not deploy it on WebLogic; if you do, WebLogic will manage its lifecycle.
    In this case you can use the CommonJ API and its TimerManager if it must run at a certain time. More information on how to do this can be found here: http://middlewaremagic.com/weblogic/?p=5569
    If you keep the Java application stand-alone and want to use a cron job, you can follow the example presented here: http://middlewaremagic.com/weblogic/?p=7065
    Note that the example runs a WLST script, but you can follow the same steps to run your Java application.

  • Kodo.util.FatalDataStoreException: Wrong database file version

    Hi,
    I am using Kodo JDO 3.0.2 together with HSQLDB (non-cached, same process). It runs fine. However, after having used a SQL tool such as Aqua Data Studio to inspect the database, my Java code complains with the message "kodo.util.FatalDataStoreException: Wrong database file version". I have to rebuild the database and extend my classes again to get rid of this error.
    Is there some information in the database script that does not survive the inspection with the SQL tool? How can I work around this?
    Thanks for your help
    --Bruno

    Marc,
    It was indeed a version mismatch with my hsqldb libs. My SQL tool used version 1.7.2 whereas Kodo used 1.7.0. A quick update of the property file of Aqua Data Studio fixed the problem. Thanks for the hint.
    --Bruno
    Marc Prud'hommeaux wrote:
    Bruno-
    Without being at all familiar with "Aqua Data Studio", I'll make a complete shot-in-the-dark guess about what might be happening: you are using version x of Hypersonic to access the database, and then "Aqua Data Studio" is using version x+1. When the database is opened with HSQL version x+1, some internal version identifier in the database file is incremented, which disallows the previous version of HSQL (which is being used by Kodo) from opening the file.
    Again, this is a blind guess, but if it is the case, then the solution would be to ensure that you are using the same version of HSQL in both Kodo and "Aqua Data Studio".
    Otherwise, can you post the stack trace of the exception? That might give some more insight as to why this might be happening.
    As an aside, note that Kodo doesn't store or verify any internal "version" or anything like that, so I very much doubt that it is a problem with Kodo itself.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • How to remove Database Files and Backup from ASM

    Hi All,
    Oracle Database 11.2.0.3
    OEL 5.7
    We have a host for restore purposes.
    We execute monthly or quarterly restore operations to verify that we are able to restore a subset of the data in a given amount of time, or for other purposes.
    I have an automated script to clone the database, but to clone the database we need to remove the old database from ASM before starting this operation. I want to remove only the database files and keep the configuration (such as oratab/network/OCR and so on).
    Question: Is there an easy way to remove these files without connecting to ASM or using DBCA?
    I appreciate any help.

    Yes, there is an easy way: use the DROP DATABASE command.
    Use the DROP DATABASE command to delete the target database and, if RMAN is connected to a recovery catalog, unregister it. RMAN removes all datafiles, online redo logs, and control files belonging to the target database. By default, RMAN prompts for confirmation.
    Put these commands in your script before cloning your database:
    RMAN> CONNECT TARGET SYS@test1
    RMAN> STARTUP FORCE MOUNT
    RMAN> SQL 'ALTER SYSTEM ENABLE RESTRICTED SESSION';
    RMAN> DROP DATABASE INCLUDING BACKUPS NOPROMPT;
    Regards,
    Levi Pereira

  • Opening windows "Access" database files through network

    Is there any way to open Windows "Access" database files on my Mac through a network to a Windows machine? I have the Windows for Mac software, but it doesn't include Access.

    Hi,
    As per the description, I understand that your Office 2013 cannot open files through a network share directly.
    I would like to know how many clients/users are affected by this in your environment. It could be some specific user account settings that the Office applications are being affected by.
    I'd first suggest you try with a new Windows user profile, then verify the result.
    Regards,
    Ethan Hua
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Even though data is present in the D drive (database files) and E drive (.pag files), data is coming back as zero in Excel retrieval.

    Even though data is present in the D drive (database files) and E drive (.pag files), we see the data coming back as zero in Excel retrieval. Can anyone help me figure out this issue?
    Thanks,
    SRI

    Hi
    Verify the below details:
    1. Whether your app/db started successfully.
    2. Check the database properties :
    Storage tab - whether the .ind and .pag files exist.
    Statistics tab -  whether any number exists in "number of existing blocks" and "existing level 0 blocks"
    3. If all the above properties in Essbase are fine, then check your Excel add-in options settings during retrieval.
    Thanks

  • Moving MR from existing RAC db to new standalone database

    Hi,
    I installed MRCA and created the MR on an existing RAC database (Sun X86).
    I installed the application server using that MR on the RAC database.
    Now this application server's Portal middle tier has gone live.
    Now I would like to move the MR from the existing RAC database to a newly installed standalone database (Sun SPARC64).
    Is it possible? If yes, how? If not, why?
    The source db platform is Sun X86 and the destination db platform is Sun SPARC64.
    The source db is using ASM and the destination db is using a normal filesystem.
    After the move, my application server should still work (because this is a production system).

    Hi Roberto Barrera,
    I’m sorry for the way I posted my queries.
    The document you suggested is really useful, but I want to make sure it is applicable to my case. Our concern is to move the entire database, which is also the MR for Oracle Application Server 10.1.2.3.0. Let me explain more about our architecture.
    Dbnode1& dbnode2 (Sun X86 – OS: Sun Solaris 10)
    We have two node RAC db and created MR on that db by MRCA. (Oracle home 1)
    We installed Oracle AS Infra tier Identity Manager. (including OID, HA, DIP but excluding SSO component) (oracle home 2)
    Midnode-1 & Midnode2 (Sun X86- OS: Sun Solaris 10)
    We installed Oracle AS Infra tier Identity Manager. (Including SSO, HA, DIP but excluding OID) (oracle home 1)
    We installed Oracle AS Middle tier (Portal and wireless services). (Including Web cache and Portal along with default component j2ee, http) (Oracle home 2)
    We need to move whatever is running on dbnode1 and dbnode2 to another pair of Sun SPARC64 machines (OS: Sun Solaris 10). We know binary migration is not possible between X86 and SPARC64. Therefore, I have installed a new database with the SPARC64 installation media and created a fresh MR by running the runRepca.sh script. In one of the documents, Oracle recommends doing an RMAN backup to move the entire data set from the existing (original) database to the new (target) database.
    http://download.oracle.com/docs/cd/B14099_19/core.1012/b13995/chginfra.htm#BGBDDDBE
    (Oracle® Application Server Administrator's Guide 10g Release 2 (10.1.2) B13995-08)
    Chapter 9 Changing Infrastructure Services
    9.6 Changing the Metadata Repository Used by Identity Management
    In this procedure, the new database must also use the same Oracle home, datafile location, SID, and global database name as the original Metadata Repository. Our newly installed database is not a RAC database with ASM; it is a standalone database on a QFS file system. We could not create the database with the same datafile location as the original MR.
    • Can we do a full database export/import? (Your document only talks about the Portal schema, but in my MR there are around 60 schemas related to the MR, and we need to import all of them.)
    • Is there any other possible solution for this case?
    Thanks in Advance :)
    Edited by: Padmanaban G on Jan 28, 2010 2:58 PM

  • While generating a crystal report can we edit database files in field explorer ?

    Hi,
    While generating a Crystal Report, can we edit database fields in the Field Explorer (i.e., can we edit the database fields partway through building the report)?
    Regards,
    Mahendra

    This is the wrong forum for the question; try posting it in the Crystal Reports community.
    It's not possible to edit the database fields while creating the reports; the Field Explorer is only used to pull the rows of values into the report. Can you explain in detail what you are actually looking for?
    --SumanT

  • SQL and database file

    Hello.(sorry for my english)
    I have a legacy system, ported to OpenCOBOL, that uses Berkeley DB. I can successfully open the database files using Java, both on Windows and Linux. However, I want to create a web front end for the application.
    Data is stored using strange COBOL structures (most numbers are stored as text or 4 bits per digit) and other fancy stuff, so custom parsers have to be written (I don't remember the BDB class name for that).
    I have been reading the Berkeley DB documentation for 12 hours total today but couldn't find answers to some questions...
    Is it possible to attach the database to the dbsql.exe (SQLite) shell? I tried to do "attach "..pathtofile" as NULL", but I get "multiple databases specified and not supported" and other unhelpful messages.
    I think that what I tried probably doesn't make sense, as columns aren't specified and nothing is set up for this to work. It is a simple key/value database.
    So I am a little stuck here.
    The only solution I can think of is to synchronize with a relational database. But I don't have triggers or anything else that helps.
    Note that I want one-way synchronization. The "other way" will have very limited tasks.
    It seems to me that checking every x minutes whether something has changed, using a cursor, is a demanding task. So I am wondering: is there a way to track changes?
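    To be concrete, the kind of periodic sweep I mean would look roughly like this with the Berkeley DB C API (an illustrative sketch only; the file name and the export routine are made up):

    #include <string.h>
    #include <db.h>

    /* Walk the whole file with a cursor and hand every key/value pair to
     * some export routine (which would do the COBOL decoding and the
     * insert into the relational database). */
    int
    dump_all(void)
    {
        DB *dbp;
        DBC *dbc;
        DBT key, data;
        int ret;

        if ((ret = db_create(&dbp, NULL, 0)) != 0)
            return (ret);
        if ((ret = dbp->open(dbp, NULL, "legacy.db", NULL, DB_UNKNOWN,
            DB_RDONLY, 0)) != 0)
            goto err;
        if ((ret = dbp->cursor(dbp, NULL, &dbc, 0)) != 0)
            goto err;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        while ((ret = dbc->c_get(dbc, &key, &data, DB_NEXT)) == 0) {
            /* export_record(key.data, key.size, data.data, data.size); */
        }
        if (ret == DB_NOTFOUND)        /* clean end of the database */
            ret = 0;
        (void)dbc->c_close(dbc);
    err:
        (void)dbp->close(dbp, 0);
        return (ret);
    }

    That full sweep is what I would rather not run every x minutes, hence the question about change tracking.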
    Thank you :)
    Edited by: 843912 on 12 Mar 2011 7:13 PM

    Hello,
    I am not sure I completely understand your question. If you are asking about importing and exporting data from a Berkeley DB database into an Oracle DB table in Berkeley DB releases prior to 5.*, then this can be done with the Oracle OCI interface. If I am misunderstanding the question, please let me know.
    Thanks,
    Sandra

  • Finding whole mapping from database file - filesystems - logical volume manager - logical partitions

    Hello,
    I am trying to reverse engineer the mapping from database files to their physical carriers on logical partitions (fdisk),
    but I am not able to construct the whole path from the filesystems down to the partitions through the intermediate logical volumes.
    1. select from dba_data_files ...
    2. df -k
    to get the listing of filesystems
    3. vgdisplay
    4. lvdisplay
    5. cat /proc/partitions
    6. fdisk /dev/sda -l
       fdisk /dev/sdb -l
    The problem I have is that I am not able to determine which partitions make up which logical volumes, and then which logical volumes make up which filesystem.
    Thank you for any hint or direction.

    Hello Wadhah,
    Before starting the discussion, let me explain that I am a newcomer to Oracle Linux. My background as an Oracle DBA is on IBM UNIX (AIX 6.1) and Oracle 11gR2.
    The first task is to get the complete picture of one database on Oracle Linux for future maintenance tasks, to make the database more flexible, and to prepare for more intense work:
    - adding datafiles,
    - optimizing/relocating archived redo log files onto a filesystem separate from ORACLE_BASE,
    - separating audit log files from $ORACLE_BASE onto their own filesystem,
    - separating the diag directory onto a separate filesystem (logging, tracing),
    - adding/enlarging the TEMP tablespace,
    - adding/enlarging undo,
    - enlarging redo for a higher transaction rate (to reduce the number of switches per unit of time seen in alert_SID.log),
    - adding online redo and control file mirrors.
    So in this context I am trying to inspect the disk space from the highest logical level (V$ and DBA views) down to the fdisk partitions.
    The idea was to go in these steps:
    1. Select the paths of the present online redo groups, datafiles, controlfiles, temp, and undo from the V$ and DBA views.
    2. For the paths obtained in step 1, locate the filesystems, and for those filesystems determine which are on logical volumes and which are directly on partitions.
    3. For all logical volumes in use, locate the underlying partitions and their disks: /dev/sda, /dev/sdb, ...
