Multiple processes accessing a replicated database

Hi
I am after some help with multiple processes and replicated databases.
I have a primary and secondary database replicated across a pair of servers and this seems to be working well. I'm trying to run another process on one of the machines that opens the environment and databases to view and/or modify the data.
The problem is that when I run this process it causes some sort of corruption such that the server process on the same box gets a DB_EVENT_PANIC the next time it accesses the database. I would like to understand what I am doing wrong.
The servers and standalone process all use the same code to open and close the environment and databases (see below). Just calling
     open_env();
     open_databases();
     close_databases();
     close_env();
in the utility process causes DB_EVENT_PANIC in the server process.
Can anybody spot what I am doing wrong? I am using DB Version 4.7
Thanks
Ashley
open_env() {
    db_env_create(&dbenv, 0);
    dbenv->app_private = &my_app_data;
    dbenv->set_event_notify(dbenv, event_callback);
    dbenv->rep_set_limit(dbenv, 0, REPLIMIT);
    dbenv->set_flags(dbenv, DB_AUTO_COMMIT | DB_TXN_NOSYNC, 1);
    dbenv->set_lk_detect(dbenv, DB_LOCK_DEFAULT);

    int flags = DB_CREATE | DB_INIT_LOCK |
                DB_INIT_LOG | DB_INIT_MPOOL |
                DB_INIT_TXN | DB_RECOVER | DB_THREAD;
    flags |= DB_INIT_REP;

    dbenv->repmgr_set_local_site(dbenv, listen_host, port, 0);
    dbenv->rep_set_priority(dbenv, 100);
    dbenv->repmgr_set_ack_policy(dbenv, DB_REPMGR_ACKS_ONE);
    for (x = 0; x < num_peers; x++)
        dbenv->repmgr_add_remote_site(dbenv, peers[x].name, peers[x].port,
                                      &peers[x].eid, 0);
    dbenv->rep_set_nsites(dbenv, num_peers + 1);

    dbenv->open(dbenv, ".", flags, S_IRUSR | S_IWUSR);
    dbenv->repmgr_start(dbenv, 3, DB_REP_ELECTION);
    sleep(SLEEPTIME);
}

close_env() {
    dbenv_p->txn_checkpoint(dbenv_p, 0, 0, 0);
    dbenv_p->close(dbenv_p, 0);
}

open_databases() {
    db_create(&dbp, dbenv_p, 0);
    flags = 0;
    if (app_data->is_master)
        flags |= DB_CREATE;
    dbp->open(dbp, NULL, "primary", NULL, DB_HASH, flags, 0);
    /* ... wait for db if slave and ENOENT ... */
    primary = dbp;

    db_create(&dbp, dbenv_p, 0);   /* each database needs its own handle */
    dbp->open(dbp, NULL, "secondary", NULL, DB_BTREE, flags, 0);
    /* ... wait for db if slave and ENOENT ... */
    secondary = dbp;

    while (app_data->client_sync)
        sleep(SLEEPTIME);
}

close_databases() {
    secondary->close(secondary, 0);
    primary->close(primary, 0);
    dbenv_p->txn_checkpoint(dbenv_p, 0, 0, 0);
}

Running recovery (DB_RECOVER flag to env->open()) must be done only in the first process to open the environment.
This is a general rule of Berkeley DB, not specific to replication. You can read more about it in the Reference Guide, on the page entitled "Architecting Transactional Data Store applications".
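In practice, that means the utility process should open the same environment without DB_RECOVER. A minimal sketch, reusing the globals from the code above (the function name is illustrative):

    /* Utility-process variant of open_env(): join the already-running
     * environment. No DB_RECOVER, and no repmgr_start() either. */
    open_env_utility() {
        int flags = DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL |
                    DB_INIT_TXN | DB_INIT_REP | DB_THREAD;
        db_env_create(&dbenv, 0);
        dbenv->app_private = &my_app_data;
        dbenv->set_event_notify(dbenv, event_callback);
        dbenv->open(dbenv, ".", flags, S_IRUSR | S_IWUSR);
    }

Running recovery removes and re-creates the shared environment regions, so any process that still has the old regions mapped (your server) panics the next time it touches the environment.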

Similar Messages

  • Installing multiple instances accessing the same database

    Hi,
    I want to install two different instances of Oracle 10g in two different machines which will access the same database which will be stored in the shared storage.
    Is it possible to install them without installing RAC? The instances will be one active and the other passive, so the services will be up in one server and down in the other and the switching (shutting down one server and starting up the other) will be manual.
    Two servers will be running Linux and clustered in Linux level.
Does Oracle offer this solution without installing Clusterware software?
    Thank you

    > The instances will be one active and the other passive, so the services will be up in one server and down in
    the other and the switching (shutting down one server and starting up the other) will be manual
I missed this part, as I was thinking of a proper cluster and RAC.
This is neither. Yes, this can be done using two servers and shared storage.
Is it a good idea? Not really, as this configuration does not provide redundancy at the physical database level. You lose that storage... bye-bye database. It does not matter whether you have 100 backup servers that can be used.
So the business reasons you are trying to meet with this configuration have to be clarified and expectations set.
Separate servers using Data Guard would be a far superior solution in many respects.

  • Can multiple processes access 1 class exclusively

    hi,
I have three listener programs running in 3 consoles which listen to different queues. Now I have a connection program which I call from my 3 listeners. The problem is that only 1 connection must be made at a time. I tried implementing the connection class as a singleton, but it doesn't work, as the 3 listeners operate as 3 different processes (my understanding). However, if I implement all 3 listeners in 1 single program and start that program in a console, then the singleton works fine. So my question is: is there a way I can synchronize the connection class from 3 listener programs running separately in different consoles?
Maybe my understanding is wrong. Excuse me if it is.
Any help will be highly appreciated!

    You could either run your three listeners as three threads in the same VM as the connection, or you can construct a means to enforce single access. Here are several possibilities for the latter. Some are better than others:
* When a listener wants to connect, he opens a server socket on a particular port. The OS will only allow one listener on a port at a time, so while one listener holds the port, the others will wait their turns. The port is only used as a token to say whose turn it is. (A sketch of this idea follows the list below.)
    * Have the listeners open a separate connection to the connection program, to tell it when they want their turn. The connection program then tells each listener when it's his turn to open the "real" connection.
* Have a separate process that accepts multiple connections and receives requests from the listeners, then forwards them on one-by-one to the connection program.
* Give each listener a fixed block of time when it can connect. For example: L1 connects when minute % 9 == 0, L2 when minute % 9 == 3, L3 when minute % 9 == 6.
    Or, you could remove the one-connection-at-a-time restriction. Is there a good reason to keep it?
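The port-as-token idea in the first bullet is easy to prototype in any language. A hedged sketch in C with POSIX sockets (the port number is an arbitrary assumption): whichever process manages to bind the port holds the lock, and releases it by closing the socket.

    /* Sketch: a TCP port used as a cross-process mutex token. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define LOCK_PORT 49152   /* assumption: any fixed, otherwise unused port */

    /* Returns the bound fd on success (lock held), or -1 if another
     * process already owns the port (lock busy). */
    int acquire_port_lock(void) {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(LOCK_PORT);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            close(fd);        /* port taken: someone else holds the lock */
            return -1;
        }
        return fd;            /* call close(fd) to release the lock */
    }

Note that SO_REUSEADDR must not be set here: the whole point is that a second bind() fails while the first socket is still open.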

  • Multiple RO processes accessing replicated client?

    Hi. I have a question about using replication. (Under Linux if it makes any difference).
    Is it possible to have multiple processes sharing a set of replicated db files? I would only have one process updating the environment, and only to the extent of running the replication manager
to get updates from the master; we would never have the clients write anything out. The other processes would only open the files read-only. The docs are a bit unclear on the subject and on whether this is a supported configuration.

    Hey Matthew,
    That configuration is supported. The only constraint on multi-process access relates to processing incoming messages (which the replication manager takes care of). As long as only one process calls DB_ENV->repmgr_start, other processes can open the environment for read-only access.
    Michael.
    P.S. For future reference, we've got a separate forum for Berkeley DB HA:
    Berkeley DB High Availability (Replication)
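A hedged sketch in C of the configuration described above (the path and database name are illustrative; the essentials are that this process passes no DB_RECOVER, never calls repmgr_start, and opens the databases read-only):

    /* Sketch: read-only process sharing a replicated environment that
     * another process already manages via the replication manager. */
    DB_ENV *env;
    DB *db;

    db_env_create(&env, 0);
    env->open(env, "/path/to/env",                /* illustrative path */
              DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL |
              DB_INIT_TXN | DB_INIT_REP | DB_THREAD, 0);

    db_create(&db, env, 0);
    db->open(db, NULL, "mydb.db", NULL,           /* illustrative name */
             DB_UNKNOWN, DB_RDONLY, 0);           /* existing db, reads only */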

  • Sharing in-memory database among multiple processes

According to the BDB documentation, due to disk-less operation it is not possible to share an in-memory database among multiple processes.
I am wondering if it is possible to use a DB_ENV but without DB->sync() or any other sync() call, such that the database, cache, etc. in the shared region are never flushed to disk, to achieve disk-less operation. Please share your thoughts.

    See "Using the Resource Capping Daemon on a System With Zones Installed" in SystemAdministration Guide: Solaris Containers-Resource
    Management and Solaris Zones. You can get the manual from docs.sun.com.
    Chapter 10 discuss physical memory control.
    Regards

  • Accessing one mail database across multiple macs

    I am not too familiar with the mac yet, I'm getting there, but I have an issue I can't figure out...
    I convinced my parents a long time ago to get macs, imac and macbook pro, and both have windows on them, and that's what they're using. I have imposed a deadline for them of tomorrow to finally make the switch, and remove the windows installation. They're using Outlook for their email, and I want to know if it's possible to
    1) Import outlook pst into Mail without going through entourage first
    2) Access the mail database on the imac on the macbook pro.
    Thanks in advance...

    ZooCrewMan,
    I may be wrong about their support of IMAP. I just checked the Earthlink support site myself, and I cannot find anything about it.
    Mail does not store in a "database," but rather stores each message as a separate text file. Attachments are stored separately, but linked to the associated message. This is handy for several reasons. One, it avoids the problems associated with monolithic database files. It also aids in the backup process, making each incremental backup that much smaller. Finally, it is instrumental in the ability to index and search all messages by content.
    It is not possible to "point" Mail on one machine to the stored emails on another, at least not without serious advanced work and headaches, and it is not necessary. By telling Mail on each machine to use the same address, servers, etc., but to leave the messages on the server until they are either moved to a local email folder or deleted, the results you need will be achieved. In this configuration, Mail essentially works as an IMAP client, only using the POP protocol. When Mail is launched, it downloads all new messages from the server, but leaves them there. The server is told to remove the message only when that message is deleted and/or moved to another local folder.
    When Mail is launched on the second machine, the same new messages are there on the server for downloading, minus those already deleted by the first machine. The only thing that is not duplicated on both is "sent" mail.
    Scott

  • Can multiple threads write to the database?

    I am a little confused from the statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <windows.h>
    #include <db.h>
    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;
    DWORD WINAPI th_write(LPVOID lpParam)
    DBT key, data;
    char key_buff[32], data_buff[32];
    DWORD i;
    printf("thread(%s) - start\n", lpParam);
    for (i = 0; i < 200; ++i)
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    sprintf(key_buff, "K:%s", lpParam);
    sprintf(data_buff, "D:%s:%8d", lpParam, i);
    key.data = key_buff;
    key.size = strlen(key_buff);
    data.data = data_buff;
    data.size = strlen(data_buff);
    db->put(db, NULL, &key, &data, 0);
    Sleep(5);
    printf("thread(%s) - End\n", lpParam);
    return 0;
    int main()
    db_env_create(&dbEnv, 0);
    dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
    db_create(&db, dbEnv, 0);
    db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
    CreateThread(NULL, 0, th_write, "A", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "C", 0, 0);
    th_write("C");
    Sleep(2000);
    }

    Here some clarification about BDB Lock and Multi threads behavior
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
Please refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
A Data Store (DS) setup (that is, not using an environment at all, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN, or DB_INIT_LOG flags, each of which enables the corresponding subsystem: locking, transactions, logging) will not guard against data corruption due to threads accessing the same database page, overwriting the same records, corrupting the internal structure of the database, and so on.
(Note that for the Btree, Hash, and Recno access methods we lock at the database page level; only for the Queue access method do we lock at the record level.)
So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database's internal structure.
Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there will be no need for locking. But that is likely not the behavior you want.
Hence, you need either a CDS (Concurrent Data Store) or a TDS (Transactional Data Store) setup.
    See the table comparing the various set ups, here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
Since the requirement is to have multiple writers (multi-threaded writes to the database), TDS is the way to go (CDS is useful only in single-writer scenarios, when there is no need for recoverability).
To summarize:
The best way to understand which setup is needed is to answer the following questions:
    - What is the data access scenario? Is it multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
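For illustration, a minimal hedged sketch of one transactional write using the handles from the sample code (error handling abbreviated):

    /* One transactional put. Assumes dbEnv/db were opened with the TDS
     * flags above, and the database was opened with DB_AUTO_COMMIT or
     * inside a transaction. */
    DB_TXN *txn = NULL;
    int ret = dbEnv->txn_begin(dbEnv, NULL, &txn, 0);
    if (ret == 0) {
        ret = db->put(db, txn, &key, &data, 0);   /* write under the txn */
        if (ret == 0)
            ret = txn->commit(txn, 0);            /* make the change durable */
        else
            (void)txn->abort(txn);  /* e.g. on DB_LOCK_DEADLOCK: abort, retry */
    }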
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
Definitely yes; you can see data loss and/or data corruption.
You can check the behavior of your test case in the following way:
1. Run your test case.
2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
You will likely see db_verify complaining, unless the thread scheduler on Windows happens to start each thread one after the other; in other words, no two or more threads ever write to the database at the same time, effectively serializing the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
In your case TDS should be used, and the environment should be opened like this:
dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
(where ENV_HOME is the filesystem directory where the BDB environment will be created)
Doing this, you have proper deadlock handling and proper transaction usage in place, so you are protected against potential data corruption/data loss.
    see http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage
    means you must pay careful attention to issues of concurrency. Transactions help your
    application's concurrency by providing various levels of isolation for your threads of control. In
    addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be
    seen by readers from another transaction until the first commits its changes. Different threads
    use different transaction handles, so this mechanism is normally used to provide isolation
    between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your
    application to see uncommitted reads, which means that one transaction can see data that
    has been modified but not yet committed by another transaction. Doing this might mean
    your transaction reads data "dirtied" by another transaction, but which subsequently might
    change before that other transaction commits its changes. On the other hand, lowering your
    isolation requirements means that your application can experience improved throughput due
    to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock
    detection, see Concurrency (page 32).

  • Accessing the same database file using different handles/cursors

Will there be any problems accessing the same database file using different DB handles in a transactional environment? We have implemented a process which has multiple transient threads coming up and initiating DB opens and read/write operations on the same database file using different handles and cursors.
    Can this potentially create any problems/bottlenecks? Can someone suggest the best way to deal with this scenario?
    Thanks in advance.
    SB

    Hi,
Berkeley DB is well suited to the scenario you describe. You need to ensure that Berkeley DB is configured correctly for transactional access; the best information describing the requirements is in the Reference Guide here:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/transapp.html
    If there will be multiple threads operating concurrently, then you will need to design your application to detect and deal with deadlock situations.
    Regards,
    Alex Gorrod
    Oracle Berkeley DB

  • ARCA Xtra  - multiple user access to SQLite DB

    Hi Guys,
I am having trouble with multiple users accessing a SQLite DB file from a projector. They can all open and browse the data through the projector fine, but if USER1 makes a change and clicks save, and then USER2 tries to access the DB, USER2 gets a script error.
Is it possible to write a script which tells the user that the DB is being modified, so please wait, rather than throwing them out of the projector with a script error?
Thank you

    If you want multiple users to access a database then you would probably be better off using one designed for that purpose like MySQL.
    But, if you are set on SQLite and if you can have multiple simultaneous connections which you seem to imply you can, then the best solution I can think of is to create a Class/Script that handles all the details of polling the database for availability and handles other issues like a progress dialogue box.
You have to treat the database as an asynchronous action - meaning you send it a query and at some later time it sends a result back via a callback. Doing it this way greatly simplifies any database queries you want to make. As long as your queries are fairly simple like SELECT, INSERT, DELETE, etc. then the code is fairly straightforward.
    Off the top of my head I wrote something to get you started. I have not tested this code but it does compile. The following is a Parent script that you would use to interface to your database in an asynchronous manner.
    -- Asynchronus SQLite
    property  pDB  -- instance of Arca xtra
    property  pTimeoutTime  -- how long in milliseconds to ping database
    property  pPingTime  -- time between database pings.
    property  RunningQuery  -- Boolean. is a query running? true/false
    property  pTimeOb  -- timout object for polling the database
    property  pCurOperation  -- set of data for current query
    property  pPingCount  -- how many times the current query has pinged the database
    property  pAlertBox  -- a MIAW, LDM, or a Sprite that informs the user as to the progress of the query.
    on new me
      arcaregister([0000,000,0000])
      pDB = xtra("arca").new()
      Result = pDB.openDB(the moviepath & "ST_data")
      if Result.errorMsg then
        alert("Error Opening Database" & return & pDB.explainError(Result.errorMsg))
        return void
      end if
      pTimeoutTime  = 10000 
      pPingTime  = 250
      RunningQuery = false
      pAlertBox = Sprite(1000)  -- for example
      return me
    end new
    on cleanup me
      pDB.closeDB()
    end cleanup
    on executeSQL me, Query, CallbackOb, CallbackMethod, OptionalParameters
      if RunningQuery then exit -- only allow one query at a time
      RunningQuery = True
      pCurOperation = [#Query:Query, #OptionalParameters:OptionalParameters, #CallbackOb:CallbackOb, #CallbackMethod:CallbackMethod]
      pPingCount = 0
      pTimeOb = timeout().new("QueryProcessor_"&me, 1, me, #processQuery)  -- creating the timeout object here breaks the call stack, which is good.
    end executeSQL
    on processQuery me, TimeOb
      Result = pDB.executeSQL(pCurOperation.Query, pCurOperation.OptionalParameters)
      if Result.errorMsg then
        if Result.errorMsg = 5 then -- database is currently locked
          pPingCount = pPingCount + 1
          if pPingCount = 1 then  -- then inform user there will be a delay.
            pAlertBox.setMessage("Waiting for database response.")
            pAlertBox.setProgress(0)
            pAlertBox.show()
            pTimeOb.period = pPingTime
            exit
          end if
          pAlertBox.setProgress((pPingCount * pPingTime / pTimeoutTime) * 100 ) 
          if pPingCount * pPingTime = pTimeoutTime then -- timed out
            alert("Query Timed out.")
          else
            exit  -- try again in pPingTime time.
          end if
        else  -- there is some sort of database error
          alert("Database Error: " & return & pDB.explainError(Result.errorMsg))
        end if
      else  -- no query errors
        call(pCurOperation.CallbackMethod, pCurOperation.CallbackOb, Result)
      end if
      -- if the code makes it this far then we are done and need to clean things up
  if pTimeOb.objectP then
        pTimeOb.forget()
        pTimeOb = void
      end if
      pAlertBox.hide()
      RunningQuery = false
    end processQuery
    on setTimeOutTime me, MilliSecs
      pTimeoutTime = MilliSecs
    end setTimeOutTime
    on setPingTime me, MilliSecs
      pPingTime  = MilliSecs
    end setPingTime
    You then create an instance of this script on preparemovie.
    -- Movie script
    global gDB
    on prepareMovie
      gDB = script("Asynchronus SQLite").new()
      if gDB.voidP then halt -- can not connect to the database
    end
    on stopMovie
      gDB.cleanup()
    end
    Then it is simply a matter of sending your queries to the gDB object and it will send the results back to the callback handler and object that you specify. Here's a behavior that shows how simple this should be:
    -- Sample DB Behavior
    global gDB
    on mouseUp me
      Query = "select * from users"
      gDB.executeSQL(Query, me, #setQueryResult) -- string, callback object, callback handler name
    end
    on setQueryResult me, Result  -- this is the callback handler/method
      put Result
    end
    I also suggest using a MIAW or a LDM or a set of sprites as a way to inform the user of any delays in processing a query. Check the code for pAlertBox to see how I use this idea to update a progress bar. Of course you will have to create the implementation.

  • Access to MySql Database in the Web Service

    Hi all,
In my J2ME web service project, I want to access a MySQL database in the web service, and
my installation is:
Tomcat 5.0 for Java WSDP,
Java Web Services Developer Pack 1.5,
MySQL Server 5.0, and the environment variables are adjusted...
server.xml, which is in "C:\tomcat50-jwsdp\conf", is adjusted like this:
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <Realm className="org.apache.catalina.realm.JDBCRealm"
         debug="99"
         driverName="com.mysql.jdbc.Driver"
         connectionURL="jdbc:mysql://localhost:3306/erendb"
         connectionName="root"
         connectionPassword=""
    />
but I still have no access to the MySQL database in the web service...
    please help...

    Hi Luis,
    If you see closely, the productID is actually a concatenation of the areaID and the productID. The reasoning here is that a product can be in multiple nodes of the product catalog and one would need to specify the product on a particular node for the direct URL to work.
    On a client site, we came up with a better solution by creating a new FM to retrieve the specific areaID and productID for a product. This FM could be called by extending the webshop. Then a .NET program was written to re-direct a shortened form of the URL to the long URL and therefore the specific product in the product catalog. Eg.: http://yyyyy.com/b2c/b2c/product would be automatically redirected to http://yyyyy.com/b2c/b2c/init.do?shop=<shop name>&areaID=<area Guid>&productID=<product Guid>, something similar to the concept behind "tinyurl.com". A point to remember here is that this would work only for the B2C webshop.
    Hope this helps.
    Cheers,
    Ashok.

  • I am in the process of expanding a database of chemistry journal articles.  These materials are ideally acquired in two formats when both are available-- PDF and HTML.  To oversimplify, PDFs are for the user to read, and derivatives of the HTML versions a

    I am in the process of expanding a database of chemistry journal articles.  These materials are ideally acquired in two formats when both are available-- PDF and HTML.  To oversimplify, PDFs are for the user to read, and derivatives of the HTML versions are for the computer to read.  Both formats are, of course, readily recognized and indexed by Spotlight.  Journal articles have two essential components with regards to a database:  the topical content of the article itself, and the cited references to other scientific literature.  While a PDF merely lists these references, the HTML version has, in addition, links to the cited items.  Each link URL contains the digital object identifier (doi) for the item it points to. A doi is a unique string that points to one and only one object, and can be quite useful if rendered in a manner that enables indexing by Spotlight.  Embedded URL's are, of course, ignored by Spotlight.  As a result, HTML-formatted articles must be processed so that URL's are openly displayed as readable text before Spotlight will recognize them.  Conversion to DOC format using MS Word, followed by conversion to RTF using Text Edit accomplishes this, but is quite labor intensive.
      In the last few months, I have added about 3,500 articles to this collection, which means that any procedure for rendering URL's must be automated and able to process large batches of documents with minimal user oversight.  This procedure needs to generate a separate file for each HTML document processed. Trials using Automator's "Get Specified Finder Items" and "Get Selected Finder Items", as well as "Ask For Finder Items"  (along with "Get URLs From Web Pages") give unsatisfactory results.  When provided with multiple input documents, these three commands generate output in which the URLs from multiple input items are merged into a single block, which yields a single file using "Create New Word Document" as the subsequent step.  A one-to-one, input file to output file result can be obtained by processing one file at a time, but this requires manual selection of each item and one-at-a-time processing. What I need is a command that accepts multiple input documents, but processes them one at a time, generating a separate output for each file processed.  Is there a way for Automator to do this?

    Hi,
With the project all done, I'm preparing for the presentation. I managed to get my hands on an HD beamer for the night (Epson TW2000) and am planning to do the presentation in HD.
That of course brought up some problems. I posted a thread which I'll repost here. Sorry for the repost; I normally do not intend to do this, but since this thread is actually about the same thing, I'd like to ask you the same question. The final version is in After Effects, but that doesn't actually alter the question. It's about export:
    "I want to export my AE project of approx 30 min containing several HD files to a Blu Ray disc. The end goal is to project the video in HD quality using the Epson  EMP-TW2000 projector. This projector is HD compatible.
    To project the video I need to connect the beamer to a computer capable of playing a heavy HD file (1), OR burn the project to a BRD (2) and play it using a BRplayer.
    I prefer option 2, so my question is: which would be the preferred export preset?
    Project specs:
                        - 1920x1080 sq pix  (16:9)
                        - 25 fps
                        - my imported video files (Prem.Pro sequences) are also 25 fps and are Progressive (!)
To export to a BRD-compatible format, do I not encounter a big problem: my project files are 25 fps and progressive, and I believe that the only Blu-ray preset displaying 1920x1080 at 25 fps requires an INTERLACED video (I viewed the presets found on this forum, in this thread)... There is also a progressive format, BUT then you need 30 fps (29,...).
So, is there one dimension that can be changed without changing the content of the video, and if yes, which one (either the interlacing or the fps)?
    I'm not very familiar with the whole Blu-ray thing, I hope that someone can help me out."
    Please give it a look.
    Thanks,
    Jef

  • DataSource for Replicated Database

    Hi, first, of all, I don't know if this forum is the right place to post my question, so sorry for bothering you!
I'm working on a project where I'm thinking about using two Oracle databases, one replicating the other. I read somewhere that using Multimaster Replication is a good way to obtain more availability. But I don't know how to create a DataSource in OC4J that can use both databases and choose the one that's not down. If I were using WebLogic, I could create a connection pool for each database, and then create a MultiPool that uses the already created pools.
I don't know if I missed something in the documentation of Multimaster Replication, but I don't see how to create a single point of access for the replicated databases, nor how to create a DataSource for OC4J that can access more than one database. I'm totally new to the Oracle world! Perhaps it isn't the best way to obtain more availability; perhaps I need to use a third component to provide a single point of access for the databases. I really don't know!
    Thanks in advance,
    RGB

    Roberto,
Multi-master replication is for distributed databases. The best option for better database availability is 9i Real Application Clusters.
    Please follow http://otn.oracle.com/products/oracle9i/content.html to read more about RAC.
    regards
    Debu

  • Access EBS R12 database views data

    Hello,
    I am connected with the database of EBS using Toad and I want to view data of different views like glfg_actual_balances, glfg_actual_journal_entries, etc.
    I have executed fnd_global.apps_initialize (1110,20434,101) but even after this, I get no rows for select * from glfg_actual_balances.
    Any idea how to solve this?
    Thanks

    How to view org-specific data in a MOAC environment [ID 415860.1]
    Oracle Applications Multiple Organizations Access Control for Custom Code [ID 420787.1]
    How to set the Organization Context in R12? [ID 437119.1]
    SQL Queries and Multi-Org Architecture in Release 12 [ID 462383.1]
    How To Retrieve Rows From Table Or Synonym For An ORG_ID In E-Business Suite 12 [ID 787677.1]
    as example
    Re: How to fetch data for all OUs

  • Moving Exchange 2010 Mailbox replicated databases path in DAG environments.

    Hi there,
    I’m trying to get some feedback on the topic of moving Exchange 2010 Mailbox replicated databases path in DAG environments.
    Here is the situation: I currently have a 3-Node DAG (Node 1 and Node 2 are in my main datacenter, and Node 3 in my Disaster Recovery (DR) site in a remote location.
I have DB copies on Node 2 and Node 3. The thing is that the DB copies on Node 2 are on an older storage box, and since we got a new storage box, I need to move the DBs and related logs of Node 2 to the new storage box.
I have found some information about how to deal with this (below I'm listing a KB link), but I would like to reconfirm a couple of things to make sure I'm understanding this correctly.
    Move the Mailbox Database Path for a Mailbox Database Copy:
    https://technet.microsoft.com/en-us/library/dd979782%28v=exchg.141%29.aspx
    According to the KB: “If the mailbox database being moved is replicated to one or more mailbox database copies, you must follow the procedure in this topic to move the mailbox
    database path”
Would this apply to my case even when I'm moving the DB copies and logs on Node 2, as opposed to Node 1 where the source DBs are?
    On step #3 in the procedure, you are supposed to “Remove all mailbox database copies for the database being moved. After all copies are removed, preserve the database and transaction log files from each server from which the database copy is being removed
    by moving them to another location. These files are being preserved so the database copies do not require re-seeding after they have been re-added.”
    Then in Step # 7, you are supposed to “Add all of the database copies that were removed in Step #3”
As far as I know, when you add a copy of a database, Exchange creates the copy DB and starts to seed the replica servers with an up-to-date copy of the DB and all the current transaction logs at that point... according to the instructions above, you are supposed to re-add the DB copies we preserved... does it mean that we need to wait for the DB seeding process to finish after "adding the DB copy" and then replace the new DB copies and logs created by the "Add database copy" function with the DB and logs preserved in Step #3?
    Thanks in advance for your feedback!
FT

    Hi there,
What the article is stating is that once you have removed the copies, you can keep the existing transaction log files and database .edb file so you do not have to do a full seed. You can do this by using the -SeedingPostponed parameter of Add-MailboxDatabaseCopy.
However, quite honestly, if your database isn't that big and you are not worried about performing a full copy of the database again to the other DAG members once you have moved your database to its preferred new location, just add the copy in the normal way and remove the legacy files afterwards.
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • Replicated database

    hi all
    ora 8i on sunsolaris
I want to replicate the production database with a standby database. Both have almost equal resources. Which is the best option to create a replicated database? It should be COST EFFECTIVE; we generate millions of records every day. Could anyone tell me in detail?
    reg
    bala

    Hi,
You can create your standby database from a cold backup, or from a hot backup. In your case, perhaps the better approach is:
1. create a standby controlfile
2. stop production
3. copy the datafiles to the standby site
4. create your standby database
5. start your standby database
6. start your production
You can find information about the creation of a standby database here:
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/standbyc.htm
I created a standby with this documentation; the only problem was with a temporary tablespace with tempfiles (please see Metalink note 101627.1).
Pay attention to your network and the quantity/size of the datafiles that transit over this network... I had to modify listener.ora on the standby site with prespawn processes.
    Nicolas.
