Access another database

I have Oracle Portal installed on Windows NT. I need to access a database that is on Linux. How can I do it?

Please post this on the Oracle 9iAS Applications forum.

Similar Messages

  • Accessing another database from HTMLDB

    All, is there a way to do selects on another database from within an HTML DB application?

    We have several applications on our intranet, all requiring a login and password. I was unable to hit the needed tables directly outside the application because I'm not a DBA and they've never given me access to the schema names. So, to connect to those same tables the way I do with Crystal Reports, I just went to the network drive where the TNSNAMES file is located, hunted for the IP of the server I was looking for, used that SID, and did the following:
    Database Link Name - (whatever I want)
    Connect to Schema - (used the username that I use with Crystal)
    Password - (same password used in Crystal)
    Remote Hostname or IP - used the IP found in TNSNAMES
    Remote Host Port - used the port found in TNSNAMES
    SID or Service Name - same as in Crystal, also found in TNSNAMES.
    After that I had no problem hitting the tables I wanted, since my username and password get me the access I need.
    I can't tell for sure why I don't need the actual schema name and why my username works instead. I'm just glad it does, as I can now use the necessary tables for MY authentication scheme without having to ask for access to the schema used by the department. They wouldn't give it to me anyway :)
    Hope that helped; I know it didn't explain why it works though.
    Not sure what you mean by the SQL query... that is done for you in HTML DB in the connect string created by the link.
    If you mean how you hit the link: you use it prefixed by the at sign "@", as in "give me what I want from the database @(the link name you made)".
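    As a rough sketch of what that link amounts to in plain SQL (the link name, credentials, host, port and SID below are hypothetical placeholders, not values from this thread):
    CREATE DATABASE LINK my_remote_link
      CONNECT TO crystal_user IDENTIFIED BY crystal_password
      USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.5)(PORT=1521))(CONNECT_DATA=(SID=PROD)))';
    SELECT * FROM some_table@my_remote_link;
    Because the link connects as that user, unqualified names resolve against that user's own tables and synonyms, which is likely why no explicit schema name was needed.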

  • An error has occurred while accessing SQL database or system resources. If this is the first time you have seen this message, please try again later. If this problem persists, please contact your administrator.

    I have SP Server 2010, and when I try to DELETE a rule within an existing Audience, "Property (Account Name) = domain/username", I get this error: "An error has occurred while accessing SQL database or system resources. If this is the first time you have seen this message, please try again later. If this problem persists, please contact your administrator." When I try to MODIFY the rule I get this error: "One or more values typed on this page are not valid. Check the text for the indicated fields."
    The last time I checked, it was working; I'm not aware of any new updates installed recently. I did a full Profile Synchronization as well, but it is still not working. Please advise. -- Evenstarline

    Hi Sara,
    First of all, thank you very much for your prompt responses. Here are my comments on each of your suggestions below, and just to let you know, I am using a Farm Admin account. I was able to do this after we upgraded from SP 2007 to SP 2010 as well. I would like to mention that I'm not a SP expert; I've just been given the responsibility because the person who handled it left, so apologies for some of the novice questions below.
    1. When I change the Operators to "Contains" or "Not Contains", it generates the errors below.
         Error in red towards the top of the audience page: "One or more values typed on this page are not valid. Check the text for the indicated fields."
         Error where you enter your "Value": "Could not resolve the user identity. Please re-enter the account name."
    2. We have a 3-server-tier topology (SPWeb, SPDB, and SPFarm). Do the updates only apply to the server where Central Admin is installed, which is "SPFarm"? I checked all 3 servers, and NONE of the updates (KB2899494, KB2889845, and KB2883055) you mentioned are installed.
    3. I'm new to IISRESET; I need to be extra cautious about what I run in production. Is this safe to run with no problems? What does it do? And how do I run it?
    4. I'm also new to viewing the ULS log. I've just downloaded a viewer for it. I'm assuming the only logs I need to be concerned with are on the SPAdmin server (where Central Admin is installed)? There are so many of them, what should I be looking for exactly?
    Evenstarline 

  • Trying to access 10gr2 database from sqlplus utility but it connects to 11g

    Hi All,
    I am facing the issue below while accessing a 10gR2 database from sqlplus.
    I have two Oracle homes on one server: one belongs to 11gR2 and the other belongs to 10gR2.
    I want to access a 10gR2 database through the sqlplus utility, but instead of connecting to 10gR2 it connects to 11gR2 after I enter a user name and password.
    I have set all the environment variables, such as ORACLE_HOME, ORACLE_SID and PATH, with respect to Oracle 10gR2.
    Can you please let me know what I need to do to connect to the 10gR2 database, and not the 11gR2 database, through sqlplus.
    Let me know if I need to give any more details on this.
    Best Regards,
    Dipti S

    Hi Rocky,
    I got the resolution.
    I made the mistake of creating an Oracle instance/service (e.g. fsdmo) while ORACLE_HOME was pointing to the 11gR2 directory, so that service was referring to the 11gR2 Oracle home.
    Hence, when I set ORACLE_SID (fsdmo) and tried to access the database instance from the sqlplus utility, it was directing me to 11g and not 10g, since the Oracle service was pointing to 11g.
    So now, after pointing ORACLE_HOME at the 10g directory and recreating the service, it is working fine.
    Thank you so much for responding.
    I hope my reply is clear.
    Best Regards,
    Dipti S
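    As a quick sanity check (assuming the account can see the V$ views), you can ask the session which instance it actually reached:
    SELECT instance_name, version FROM v$instance;
    If the version column starts with 11 when you expected 10.2, the session went through the wrong Oracle home/service, as described above.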

  • Using Single Datasource to Access Multiple Databases

    Hi,
    We would like to know the pros and cons of accessing multiple
    databases through a single datasource, versus accessing each
    database through its own datasource. Our environment includes
    multiple web servers w/ the latest version of ColdFusion MX 7,
    clustered through a load balancer. Each web server has 800+ dsns
    pointing to different SQL databases on the same SQL server. We have
    noticed that the ColdFusion administrator is taking a long time to
    display or verify all datasources and sometimes it even times out.
    Another problem is that sometimes the neo-query file gets corrupted
    (for unknown reasons) which results in the deletion of one, or
    more, or all datasources on the web server.
    Because of the issues above we are researching the
    possibility of removing most of the datasources, and then accessing
    each database through a single bridge datasource. In that regard we
    plan to change our queries by inserting the sql db name and user in
    front of each table in the query such as:
    <cfquery name="query" datasource="single_dsn_name">
    select * from [#dbname#].dbo.tableName
    </cfquery>
    In the example above, obviously #dbname# would be a variable
    that holds the name of the requested database. The same approach
    would apply to queries using update, insert and join statements.
    Are there any limitations or negatives, from a scalability,
    performance, and reliability perspective, in implementing the above
    scenario versus having one datasource for each database?
    Also, if there is a better way of accomplishing this, we
    would love to hear about it.

    Here is my opinion, because I work with both setups. The main advantage of using one datasource for all DBs on a SQL Server is the simplicity of administration.
    But the main disadvantage is security: because you are using a single user to access every DB on the server, you don't have isolation, and a user who knows your schema can access data in other DBs that he should not be authorized to see.
    Another issue is that if a user must access two different DBs with different permissions (one DB read-only and the other read/write), you'll have to create another datasource, user, etc. for it.
    But the decision depends on the environment. If you are a hosting company, I would use one datasource per user or DB. If the servers and DBs belong to the same company, I could use one datasource for each SQL server.
    Best regards
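    A minimal sketch of the isolation point above, for SQL Server 2005 or later; the login, database and table names are hypothetical:
    CREATE LOGIN cf_bridge WITH PASSWORD = 'StrongPasswordHere1!';
    USE CustomerDB_A;
    CREATE USER cf_bridge FOR LOGIN cf_bridge;
    GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO cf_bridge;
    A database in which no user is created for that login stays out of reach (unless its guest user is enabled), which limits how far a query against [#dbname#] can wander through a single bridge datasource.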

  • Access MySQL Database on Server with PHP Services

    Hi there
    There are lots of tutorials on how to connect to a MySQL database on your local machine but I'd like to access a database on my server.
    When creating a new Flex Project, the wizard asks me to define a Web root and a Root URL. I used '/home/ecoflexer/public_html' as Web root and 'http://ecoflexer.com' as Root URL. However, the Web root couldn't be validated. So I chose the local folder 'C:\ecoflexer' as Web root. Though it was possibly wrong, Flash Builder generated a debug folder at the defined location. After that I went to "Connect to Data/Service" and selected "PHP Service". I tried to generate a sample using the same credentials I use for a standard PHP login script ("Server Port" was left empty). After clicking on "Connect to Database", Zend was installed and returned an error: 'gateway.php' couldn't be found at 'http://ecoflexer.com/testProject-debug/gateway.php'.
    So I went into my local Web root and copied the 'testProject-debug' folder to my server, to the destination the previous error mentioned. Then another error occurred concerning a Zend file. So I went back and copied the whole 'ZendFramework' folder to my server as well. It now connects successfully to my database. I can choose a table, but soon after that the introspection of the service fails. I modified the 'amf_config.ini' by adding 'webroot =/home/ecoflexer/public_html' and 'zend_path =/home/ecoflexer/public_html/ZendFramework/library' but it's still not working. Anything I've done wrong or forgotten to do?
    Cheers!
    ecoFLEXER

    I am doing a client-server application; the database is on the server, and I am doing the login part, so I need to access the database to match the entered user name and password. So I should implement the database-access part on the server side with the above code, right? I didn't test that; I will test it now, but I thought it was done a different way.

  • Moving Access Services database

    I'm sure this must be simple... but I'm having trouble finding any references to it.
    Once I have synced an Access Web Database with SharePoint 2010 Access Services, how can I remove it from that SharePoint implementation and attach it to another SharePoint? I've tried making this happen from both SharePoint and Access, but there appears
    to be no 'detach' option from either end.
    Anyone have any ideas?
    Thanks
    Jim

    Hello Jim Henderson NZ,
    Thanks for your question. I think your question might get a better answer in the SharePoint 2010 - Setup, Upgrade, Administration and Operation forum. Let me make sure I understand your question correctly.
    You're inquiring about moving the service application you created in Central Admin that ties all your Access content together? Based on this documentation, it does not seem possible.
    http://technet.microsoft.com/en-us/library/ff621100.aspx
    The listed service applications that can be published are:
    Business Data Connectivity, Managed Metadata, User Profile, Search, Secure Store, Web Analytics.
    If I'm understanding your question correctly I can move you to that forum. Otherwise, please give me a bit more information so I can further assist you.
    Regards,
    Dalibor K Microsoft
    Online Community Support

  • Error saying "Error in the module RSQL accessing the database interface"

    Hi,
    There is a standard program available for retrieving the assets for the cost centres given on the selection screen.
    Our requirement is that, instead of a cost centre, we have to retrieve the assets for a given cost centre group.
    We have to find all the cost centres available in the given cost centre group and its child nodes.
    For all these cost centres we will be retrieving the asset history data.
    So we copied the standard program into another Y program and made the changes to it.
    What's happening is that if we give the topmost cost centre group node, there are 16,000 cost centres available. While retrieving the asset history data for all these cost centres, a runtime error occurs in a standard program saying
    "Error in the module RSQL accessing the database interface", "DBIF_RSQL_INVALID_RSQL".
    The error occurs while executing the FETCH NEXT CURSOR statement in a standard include program.
    This happens only if we give a huge range of data. If we give a small range of data it works fine.
    Can anyone help me with this by explaining why it occurs and what the solution would be.
    Thanks.

    Hi camila,
    The huge range is part of the query string passed to the database.
    While an MP3 music gadget easily stores a gigabyte of data, ORACLE only managed to increase the maximum size of a query string from 16 KB to 32 KB in the last 5 years.
    Unbelievable but true!
    Just multiply the number of entries in the range by the field length and see where you go...
    Regards,
    Clemens
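    To put rough numbers on that rule of thumb: assuming roughly 10 characters per cost centre value plus separators, 16,000 entries in the range amount to more than 160 KB of statement text, far beyond the 32 KB limit mentioned above, while a few hundred entries stay comfortably under it.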

  • Application hangs during accessing transactional database.

    Hi! Here is a minimized test which creates a database with a secondary index, fills it with some records and then starts two threads. One thread adds records, the other thread deletes records. The problem is that the threads hang with the following backtraces:
    (dbx) thread t@2
    Current function is __db_pthread_mutex_lock
    204 RET_SET((pthread_cond_wait(
    t@2 (l@2) stopped in ___lwp_cond_wait at 0xfffffd7ffee6e8ea
    0xfffffd7ffee6e8ea: ___lwp_cond_wait+0x000a: jae ___lwp_cond_wait+0x18 [ 0xfffffd7ffee6e8f8, .+0xe ]
    (dbx) where
    current thread: t@2
    [1] ___lwp_cond_wait(0xfffffd7ffeb88620, 0xfffffd7ffeb88608, 0x0, 0x0, 0xfffffd7ffeb88608, 0xfffffd7ffeb9dfb8), at 0xfffffd7ffee6e8ea
    [2] __lwp_cond_wait(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee574ec
    =>[3] __db_pthread_mutex_lock(dbenv = 0x41c7f0, mutex = 118U), line 204 in "mut_pthread.c"
    [4] __lock_get_internal(lt = 0x420f90, locker = 2147484656U, flags = 0, obj = 0x424eb0, lock_mode = DB_LOCK_WRITE, timeout = 0, lock = 0xfffffd7ffe9fd760), line 808 in "lock.c"
    [5] __lock_vec(dbenv = 0x41c7f0, locker = 2147484656U, flags = 0, list = 0xfffffd7ffe9fd748, nlist = 2, elistp = 0xfffffd7ffe9fd740), line 116 in "lock.c"
    [6] __db_lget(dbc = 0x424df0, action = 2, pgno = 5U, mode = DB_LOCK_WRITE, lkflags = 0, lockp = 0xfffffd7ffe9fd8d8), line 1006 in "db_meta.c"
    [7] __bam_search(dbc = 0x424df0, root_pgno = 1U, key = 0xfffffd7ffe9fdc68, flags = 1410U, slevel = 1, recnop = (nil), exactp = 0xfffffd7ffe9fda6c), line 489 in "bt_search.c"
    [8] __bam_c_search(dbc = 0x424df0, root_pgno = 0, key = 0xfffffd7ffe9fdc68, flags = 25U, exactp = 0xfffffd7ffe9fda6c), line 2479 in "bt_cursor.c"
    [9] __bam_c_get(dbc = 0x424df0, key = 0xfffffd7ffe9fdc68, data = 0xfffffd7ffe9fdce0, flags = 25U, pgnop = 0xfffffd7ffe9fdb34), line 953 in "bt_cursor.c"
    [10] __db_c_get(dbc_arg = 0x42f6f0, key = 0xfffffd7ffe9fdc68, data = 0xfffffd7ffe9fdce0, flags = 25U), line 689 in "db_cam.c"
    [11] __db_c_put(dbc_arg = 0x424460, key = 0xfffffd7ffe9fdf28, data = 0xfffffd7ffe9fdf00, flags = 20U), line 1327 in "db_cam.c"
    [12] __db_put(dbp = 0x41d070, txn = 0x423f30, key = 0xfffffd7ffe9fdf28, data = 0xfffffd7ffe9fdf00, flags = 20U), line 380 in "db_am.c"
    [13] __db_put_pp(dbp = 0x41d070, txn = 0x423f30, key = 0xfffffd7ffe9fdf28, data = 0xfffffd7ffe9fdf00, flags = 20U), line 1500 in "db_iface.c"
    [14] Db::put(this = 0x41c708, txnid = (nil), key = 0xfffffd7ffe9fdf28, value = 0xfffffd7ffe9fdf00, flags = 20U), line 340 in "cxx_db.cpp"
    [15] Storage::put(this = 0x41c690, key = 2003, data = 2003), line 114 in "bdb_put_del_hang.cpp"
    [16] reading_thread_func(arg = (nil)), line 161 in "bdb_put_del_hang.cpp"
    [17] thrsetup(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee6b40b
    [18] lwpstart(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee6b640
    (dbx) thread t@3
    Current function is __db_pthread_mutex_lock
    204 RET_SET((pthread_cond_wait(
    t@3 (l@3) stopped in ___lwp_cond_wait at 0xfffffd7ffee6e8ea
    0xfffffd7ffee6e8ea: ___lwp_cond_wait+0x000a: jae ___lwp_cond_wait+0x18 [ 0xfffffd7ffee6e8f8, .+0xe ]
    (dbx) where
    current thread: t@3
    [1] ___lwp_cond_wait(0xfffffd7ffeb88668, 0xfffffd7ffeb88650, 0x0, 0x0, 0xfffffd7ffeb88650, 0xfffffd7ffeb9dfb8), at 0xfffffd7ffee6e8ea
    [2] __lwp_cond_wait(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee574ec
    =>[3] __db_pthread_mutex_lock(dbenv = 0x41c7f0, mutex = 119U), line 204 in "mut_pthread.c"
    [4] __lock_get_internal(lt = 0x420f90, locker = 2147484655U, flags = 0, obj = 0x424b80, lock_mode = DB_LOCK_WRITE, timeout = 0, lock = 0xfffffd7ffe7fd7d0), line 808 in "lock.c"
    [5] __lock_vec(dbenv = 0x41c7f0, locker = 2147484655U, flags = 0, list = 0xfffffd7ffe7fd7b8, nlist = 2, elistp = 0xfffffd7ffe7fd7b0), line 116 in "lock.c"
    [6] __db_lget(dbc = 0x424ac0, action = 2, pgno = 5U, mode = DB_LOCK_WRITE, lkflags = 0, lockp = 0xfffffd7ffe7fd948), line 1006 in "db_meta.c"
    [7] __bam_search(dbc = 0x424ac0, root_pgno = 1U, key = 0xfffffd7ffe7fdc48, flags = 1410U, slevel = 1, recnop = (nil), exactp = 0xfffffd7ffe7fdadc), line 489 in "bt_search.c"
    [8] __bam_c_search(dbc = 0x424ac0, root_pgno = 0, key = 0xfffffd7ffe7fdc48, flags = 8U, exactp = 0xfffffd7ffe7fdadc), line 2479 in "bt_cursor.c"
    [9] __bam_c_get(dbc = 0x424ac0, key = 0xfffffd7ffe7fdc48, data = 0xfffffd7ffe7fdc70, flags = 8U, pgnop = 0xfffffd7ffe7fdba4), line 871 in "bt_cursor.c"
    [10] __db_c_get(dbc_arg = 0x424790, key = 0xfffffd7ffe7fdc48, data = 0xfffffd7ffe7fdc70, flags = 8U), line 689 in "db_cam.c"
    [11] __db_c_del_primary(dbc = 0x425120), line 2249 in "db_cam.c"
    [12] __db_c_del(dbc = 0x425120, flags = 0), line 285 in "db_cam.c"
    [13] __db_del(dbp = 0x41d070, txn = 0x42f5d0, key = 0xfffffd7ffe7fdf30, flags = 0), line 492 in "db_am.c"
    [14] __db_del_pp(dbp = 0x41d070, txn = 0x42f5d0, key = 0xfffffd7ffe7fdf30, flags = 0), line 485 in "db_iface.c"
    [15] Db::del(this = 0x41c708, txnid = (nil), key = 0xfffffd7ffe7fdf30, flags = 0), line 226 in "cxx_db.cpp"
    [16] Storage::del(this = 0x41c690, key = 501), line 131 in "bdb_put_del_hang.cpp"
    [17] writing_thread_func(arg = (nil)), line 173 in "bdb_put_del_hang.cpp"
    [18] thrsetup(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee6b40b
    [19] lwpstart(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0xfffffd7ffee6b640
    Looks like some problem with locking, because if I set_lk_detect(DB_LOCK_DEFAULT) some methods fail with DB_LOCK_DEADLOCK, but I do not understand where the race condition is, because both threads make just single operations. Can anybody explain the problem?
    Here is the source code:
    #include <set>
    #include <iostream>
    #include <cstdio>
    #include "pthread.h"
    #include "bdb/db_cxx.h"
    static const char *__THIS_FILE__ = __FILE__;
    class Storage
    {
    private:
        DbEnv m_bdbEnv;
        Db m_bdbLongKeys, m_bdbLongKeysSec;
        static void bdb_error_call( const DbEnv *dbEnv, const char *errpfx, const char *msg )
        {
            printf( "%s\n", msg );
        }
        /* the secondary key is simply a copy of the primary data */
        static int bdb_sec_callback( Db *secondary, const Dbt *pkey, const Dbt *pdata, Dbt *skey )
        {
            skey->set_data( pdata->get_data() );
            skey->set_size( pdata->get_size() );
            return 0;
        }
    public:
        Storage( void )
            : m_bdbEnv( DB_CXX_NO_EXCEPTIONS ),
              m_bdbLongKeys( &m_bdbEnv, DB_CXX_NO_EXCEPTIONS ),
              m_bdbLongKeysSec( &m_bdbEnv, DB_CXX_NO_EXCEPTIONS )
        {
        }
        int open( const char *storageURL )
        {
            m_bdbEnv.set_errcall( bdb_error_call );
            static const u_int32_t envFlags =
                DB_CREATE |     /* create if not exists */
                DB_INIT_MPOOL | /* memory pool */
                DB_INIT_LOCK |  /* locking */
                DB_INIT_LOG |   /* recovery */
                DB_INIT_TXN |   /* transactions */
                DB_THREAD;
            int error = m_bdbEnv.open( storageURL, envFlags, 0 );
            if (error != 0) {
                m_bdbEnv.err( error, "DbEnv::open() failed." );
                return -1;
            }
            static const u_int32_t dbFlags =
                DB_CREATE |      /* create if not exists */
                DB_AUTO_COMMIT | /* auto commit */
                DB_THREAD;
            m_bdbLongKeys.set_errcall( bdb_error_call );
            error = m_bdbLongKeys.open( NULL, "longKeys.db", NULL, DB_BTREE, dbFlags, 0 );
            if (error != 0) {
                m_bdbLongKeys.err( error, "Db::open() failed." );
                return -1;
            }
            m_bdbLongKeysSec.set_errcall( bdb_error_call );
            error = m_bdbLongKeysSec.set_flags( DB_DUP | DB_DUPSORT );
            if (error != 0) {
                m_bdbLongKeysSec.err( error, "Db::set_flags() failed (%s:%d).",
                                      __THIS_FILE__, __LINE__ );
                return -1;
            }
            error = m_bdbLongKeysSec.open( NULL, "longKeys-sec.db", NULL, DB_BTREE, dbFlags, 0 );
            if (error != 0) {
                m_bdbLongKeysSec.err( error, "Db::open() failed (%s:%d).",
                                      __THIS_FILE__, __LINE__ );
                return -1;
            }
            error = m_bdbLongKeys.associate( NULL, &m_bdbLongKeysSec, bdb_sec_callback, DB_AUTO_COMMIT );
            if (error != 0) {
                m_bdbLongKeys.err( error, "Db::associate() failed (%s:%d).",
                                   __THIS_FILE__, __LINE__ );
                return -1;
            }
            return 0;
        }
        int close( void )
        {
            m_bdbLongKeysSec.close(0);
            m_bdbLongKeys.close(0);
            m_bdbEnv.close(0);
            return 0;
        }
        int put( long key, long data )
        {
            Dbt dbtKey, dbtData;
            dbtKey.set_data( (void*) &key );
            dbtKey.set_size( sizeof(key) );
            dbtData.set_data( (void*) &data );
            dbtData.set_size( sizeof(data) );
            int error = m_bdbLongKeys.put( NULL, &dbtKey, &dbtData, DB_NOOVERWRITE );
            if (error != 0) {
                m_bdbLongKeys.err( error, "Db::put() failed (%s:%d)", __THIS_FILE__, __LINE__ );
                return -1;
            }
            return 0;
        }
        int del( long key )
        {
            Dbt dbtKey;
            dbtKey.set_data( (void*) &key );
            dbtKey.set_size( sizeof(key) );
            int error = m_bdbLongKeys.del( NULL, &dbtKey, 0 );
            if (error != 0) {
                m_bdbLongKeys.err( error, "Db::del() failed (%s:%d)", __THIS_FILE__, __LINE__ );
                return -1;
            }
            return 0;
        }
    };
    Storage *g_storage = NULL;
    static long g_numberTotal = 1000;
    static long g_numberToRead = g_numberTotal/2;
    static std::set<long> g_toRead;
    static std::set<long> g_toWrite;
    extern "C" void *reading_thread_func( void *arg );
    extern "C" void *writing_thread_func( void *arg );
    /* adds new records with keys above the initially loaded range */
    void *reading_thread_func( void *arg )
    {
        std::set<long>::const_iterator it = g_toRead.begin();
        std::set<long>::const_iterator it_end = g_toRead.end();
        for (long j = 2*g_numberTotal+1; it != it_end; ++it) {
            ++j;
            if (g_storage->put(j, j) != 0)
                printf( "put() failed (%s:%d).\n", __THIS_FILE__, __LINE__ );
        }
        return NULL;
    }
    /* deletes the records whose keys were collected in g_toWrite */
    void *writing_thread_func( void *arg )
    {
        std::set<long>::const_iterator it = g_toWrite.begin();
        std::set<long>::const_iterator it_end = g_toWrite.end();
        for (; it != it_end; ++it) {
            if (g_storage->del(*it) == 0)
                printf( "Deleted key %ld\n", *it );
            else
                printf( "Failed to delete key %ld\n", *it );
        }
        return NULL;
    }
    int main( int argc, char *argv[] )
    {
        pthread_t read_tid, write_tid;
        if (g_numberToRead > g_numberTotal)
            return 1;
        g_storage = new Storage;
        if (g_storage->open("test_storage") != 0)
            return 3;
        /* initial load: first half of the keys go to g_toRead, second half to g_toWrite */
        for (long i = 0; i < g_numberTotal; ++i) {
            if (g_storage->put(i, i) != 0)
                std::cout << "Failed to add key '" << i << "'" << std::endl;
            if (i <= g_numberToRead)
                g_toRead.insert(i);
            else
                g_toWrite.insert(i);
        }
        pthread_create( &read_tid, NULL, reading_thread_func, NULL );
        pthread_create( &write_tid, NULL, writing_thread_func, NULL );
        pthread_join( read_tid, NULL );
        pthread_join( write_tid, NULL );
        if (g_storage != NULL) {
            g_storage->close();
            delete g_storage;
        }
        return 0;
    }

    Hi,
    Thank you for your answer, but I am still not happy with it.
    I attentively read the documents you provided before posting the message on the forum. OK, let me explain again.
    I could enclose these calls each in its own transaction (and I tried it, but had the same result), but it does not make sense if the DB_AUTO_COMMIT flag was specified for the Db::open().
    Using transactions does not necessarily eliminate the chance of deadlock. Practically any application that uses locking may deadlock. Can you post here the example in which you are trying it transactionally? And maybe I will try to correct it.
    The example is in my first message. It would be nice if you could suggest a good solution to work around the deadlock.
    There is only ONE OPERATION from the Berkeley DB API point of view in each thread under the transaction, and I would expect that one single Db::put() or Db::del() cannot cause the deadlock. Why does it happen?
    Because "With the exception of the Queue access method, the Berkeley DB access methods do page-level locking." (Locking granularity: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/lock/page.html)
    The only solution I see now is to use DbEnv::set_lk_detect() and handle the DB_LOCK_DEADLOCK error code on each DB operation and retry the operation on deadlock, but it is the last thing I would like to use.
    The conclusion:
    - why does a single API call of Db::put() or Db::del() under a transaction, made at the same time, cause a deadlock?
    Because both of them will require write locks on the same page.
    "The first component of the infrastructure, deadlock detection, is not so much a requirement specific to transaction-protected applications, but instead is necessary for almost all applications in which more than a single thread of control will be accessing the database at one time. Even when Berkeley DB automatically handles database locking, it is normally possible for deadlock to occur. Because the underlying database access methods may update multiple pages during a single Berkeley DB API call, deadlock is possible even when threads of control are making only single update calls into the database." (Deadlock detection: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/deadlock.html)
    Please let me know if this time I was clear enough.
    Bogdan Coman

  • Generic Reports accessing Multiple Database Servers

    Hi,
    I have a report which accesses a table present within SQL Server. This is done by creating a system DSN which points to SQL Server at report design time, through the database expert in the Crystal Reports Developer. The same table is also present in another database server, i.e. ORACLE. The requirement is that I should be able to execute the report against the ORACLE database at runtime. I have seen a lot of examples that do this using ODBC and OLEDB, i.e. changing datasources at runtime, but all of these have to specify the database username and password at runtime.
    Is there any way for me to achieve this without passing the username and password at runtime? If so, it would be great if I could get all possible approaches to achieve this.
    Thanks in advance
    Joseph Samuel

    I am doing the same thing.
    I found that if the report is created under OS Authentication mode of Oracle, then it is OK in integrated security mode at runtime; you don't need to set any logon information at runtime, but of course, please follow the OS Authentication requirements of Oracle.
    But if the report is created using standard security mode and you wish it to run under integrated security mode at runtime, then a logon error occurs; however, if the Crystal Report viewer control is set to enable the database logon prompt, we can still enter something in the username textbox and check the "use integrated security" checkbox, and the report is still OK.
    I wish to have the database logon prompt disabled, override the logon information in the program at runtime, and let the report show without any problem, but up to now I still have no idea how. I will come back after I get a solution for this.

  • PERFORMANCE while accessing remote database DB2 on AS/400 using WAS

    Subject: PERFORMANCE while accessing a remote database
    We have IBM WebSphere Application Server Standard Edition 3.5.3 running on an AS/400 iSeries server (V4R5, test) and a local DB2 database.
    I am using the AS/400 Developer Kit for Java JDBC driver (type 2, com.ibm.db2.jdbc.app.DB2Driver) to talk to the local database. The performance was very good.
    When I try to access a remote database (everything the same as local) which is on another AS/400 machine at V4R4 (we use it for production; this is the remote database) using the IBM Toolbox for Java JDBC driver (com.ibm.as400.access.AS400JDBCDriver, a type 4 driver), I see a 30 to 40% decrease in performance.
    Here WAS is on the previously mentioned V4R5 AS/400 machine.
    My questions are: is the performance decrease due to
    1. the driver I am using? If so, are there any alternative drivers to access the remote database to boost performance?
    2. the release difference between the local (V4R5) and remote (V4R4) database?
    3. Currently most users use the remote database while we do this testing. Is that the cause?
    Or is there any other cause, drivers, etc.? Suggestions and help are most welcome.
    Thank you.

    What about
    4. the data has to travel across the network.

  • Apex on one database, application schema on another database

    Hi Forum
    I need your help to let me know whether the following scenario is possible in Apex or not.
    Currently we have a database with the application schema running on server A.
    We want to install another database on another server (server B) and install and configure Apex on that.
    Can we configure the Apex that we installed on B to access the database on server A?
    If yes, can you please provide some hints on how to do that, or a link to a web page.
    Sincerely

    Hello,
    Yes, you can achieve that by using database links. Take a look at this link for the syntax -
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_56a.htm#2061507
    You can then query etc. over the link -
    SELECT * FROM employees@remotedb;
    Note that there will be a performance overhead in operating on the remote data rather than having it 'local', so for certain data (like lookups etc.) you might want to consider the use of materialized views to store a local copy of the data.
    Hope this helps,
    John.
    http://jes.blogs.shellprompt.net
    http://www.apex-evangelists.com
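    A minimal sketch of what is described above, with a hypothetical link name, credentials, TNS alias and lookup table:
    CREATE DATABASE LINK server_a_link
      CONNECT TO app_schema IDENTIFIED BY app_password
      USING 'SERVER_A';
    SELECT * FROM employees@server_a_link;
    CREATE MATERIALIZED VIEW departments_mv
      REFRESH COMPLETE ON DEMAND
      AS SELECT * FROM departments@server_a_link;
    The materialized view keeps a local copy of lookup data on server B, so Apex pages can read it without a round trip to server A; refresh it on whatever schedule suits the data.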

  • How to copy file from one table to another table at another database

    I need to transfer my tables from one workspace and schema to another workspace and schema. Basically I need to create all the tables again in this new schema. How can I transfer data from the table in the old schema to the table in the new schema when the table has files stored in it (the data type is BLOB)?
    thank you so much,
    Silver

    Hello Silver,
    Depending on which database you're using (if it's available), I would recommend using Data Pump.
    Data Pump allows you to copy an entire schema to another database; it's the "new" export/import you might know.
    Regards,
    Dimitri
    http://dgielis.blogspot.com/
    http://www.apex-evangelists.com/
    http://www.apexblogs.info/
    REWARDS: Please remember to mark helpful or correct posts on the forum
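    If both schemas live in the same database (the usual Apex workspace case), a plain CREATE TABLE ... AS SELECT also carries BLOB columns across; the schema and table names below are hypothetical, and between two different databases the Data Pump route described above is the way to go:
    GRANT SELECT ON documents TO new_schema;   -- run as the old schema owner
    CREATE TABLE documents AS
      SELECT * FROM old_schema.documents;      -- run as the new schema owner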

  • Help on export sybase iq tables with data and import in another database ?

    Help on export Sybase iq 16 tables with data and import into another database ?

    Hi Nilesh,
    If you have the table/index create commands (DDLs), you can create them in Developer and import the data using one of the methods below:
    Extract / Load table
    Insert location method: requires the IQ servers to be entered in the interfaces file
    Backup / Restore: copies the entire database content
    If you don't have the DDLs, you can generate them using IQ Cockpit or SCC.
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc01773.1604/doc/html/san1288042631955.html
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc01840.1604/doc/html/san1281564927196.html
    Regards,
    Tayeb.

  • How to update date in a table from another database

    Hello,
    I am trying to code a way to update my Testing database from another database. In the code below, the database that has the updated data ends in Restore.
    Use ClientDB_MASTER_Restore
    Truncate Table ClientDB_MASTER_Testing.dbo.Activity_Tracking_AZ
    GO
    SELECT * INTO ClientDB_Master_Testing.dbo.Activity_Tracking_AZ
    FROM Activity_Tracking_AZ
    Go
    I know that with this technique the table must be truncated, if not deleted, first. There is probably a better way to do this, which I'm very open to. I'm also looking for code that will roll back any changes made should an error occur.
      As always, any help is greatly appreciated.
    David92595

    USE ClientDB_MASTER_Testing
    go
    SET XACT_ABORT ON
    BEGIN TRANSACTION
    Truncate Table dbo.Activity_Tracking_AZ
    -- SET IDENTITY_INSERT dbo.Activity_Tracking_AZ ON
    INSERT dbo.Activity_Tracking_AZ (col1, col2, ...)
    SELECT col1, col2, ...
    FROM ClientDB_MASTER_Restore.dbo.Activity_Tracking_AZ
    -- SET IDENTITY_INSERT dbo.Activity_Tracking_AZ OFF
    COMMIT TRANSACTION
    If you find it boring to type the column lists, just find the table in Object Explorer, and drag the columns node to where you want the column list.
    You need the SET IDENTITY_INSERT command if the table has an IDENTITY column.
    By wrapping the code in a transaction, you are not left with an empty table if the INSERT fails. The command SET XACT_ABORT ON makes sure that the batch is aborted and rolled back in case of an error.
    Erland Sommarskog, SQL Server MVP, [email protected]
