Distributed database environment

In one of my JHeadstart projects we will have a distributed database environment; that is, the project's application data is located in several different Oracle databases.
How should this situation be handled within a single project?
Does JHeadstart support multiple database connections, or do we need to use database links to accomplish this?

Ting Rung,
There is at least one thing that you might run into, and that is the issue of transaction management. An application module represents a 'data model' for a task that is accomplished within one transaction. You can nest application modules, but even then the top-level application module provides the transaction context.
This means that all data that must be persisted in one transaction needs to be represented by the same (hierarchy of) application module(s) and therefore must come from the same data source (database). In other words: when the customers are in one database and the orders in another, you cannot persist them in one and the same transaction.
There are more issues related to distributed applications. I have no experience with that, so I cannot comment on it. I do know that many distributed enterprise applications make use of EJB technology. BC4J supports entity beans (comparable to entity and view objects). It also offers EJB application modules that behave like session beans. JHeadstart does not support using EJBs, however.
For further information about the support for EJB technology, I would like to refer you to the online help of JDeveloper.
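The nesting rule described above can be pictured with a toy sketch in plain Java (these are made-up classes, not the BC4J API): every nested module delegates to its root, so there is exactly one transaction context, and that context is tied to a single data source.

```java
// Toy model: nested "application modules" all share the root module's
// single transaction context (hypothetical classes, not BC4J APIs).
public class ModuleSketch {
    static class Transaction {
        final String dataSource; // one transaction == one data source
        Transaction(String dataSource) { this.dataSource = dataSource; }
    }

    static class AppModule {
        private final AppModule parent;
        private final Transaction ownTxn; // only non-null for the root

        AppModule(String dataSource) {    // top-level module
            this.parent = null;
            this.ownTxn = new Transaction(dataSource);
        }
        AppModule(AppModule parent) {     // nested module
            this.parent = parent;
            this.ownTxn = null;
        }
        Transaction getTransaction() {
            // Nested modules walk up: the root provides the context.
            return parent == null ? ownTxn : parent.getTransaction();
        }
    }

    public static String txnSource(AppModule m) {
        return m.getTransaction().dataSource;
    }

    public static void main(String[] args) {
        AppModule root = new AppModule("ORDERS_DB");
        AppModule nested = new AppModule(root);
        AppModule deeper = new AppModule(nested);
        // All three share the root's transaction, hence its data source.
        System.out.println(txnSource(deeper)); // prints ORDERS_DB
    }
}
```

Because `getTransaction()` always resolves to the root, two modules can only share a transaction if they sit under the same root, which in turn means one database.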
Jan Kettenis
JHeadstart Team

Similar Messages

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but that has its own disadvantages by limiting which versions of SQL Server will accept the database.
    What do you recommend, and can you point me to some documentation that covers this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it. It may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs, but not if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This incurs downtime for the detached databases, so there are better methods of distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when the version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each with its own advantages and drawbacks, which allows them to fit different business requirements.

  • What is a Database Environment?

    I am going through Relational Database Design by JLM. I have come across words like
    database, data model, DBMS, etc., which I am able to understand. But I get confused when the author uses Database and Database Environment with supposedly different meanings (as per my understanding).
    What makes up a Database Environment? I understand that a data model defines the relationships of the data, whereas a DBMS is data-model specific and translates data manipulation requests and retrieves data from physical storage device(s). The author defined a Database as data and its relationships.
    Where does the Environment come into the picture?
    This is where I got confused: "underlying relationships in a database environment are independent of the data model and therefore also independent of the DBMS you are using".
    BTW, am I reading the right book to start with, considering I am just beginning?

    StrikeDeveloper and EngSoonCheah: it's quite interesting to see that whenever StrikeDeveloper asks a question, nobody but EngSoonCheah gets an answer marked. I don't think this is mere coincidence.
    I can see all the threads created by StrikeDeveloper, and the only marked answers are by EngSoonCheah. Please be aware that favoritism in marking answers is not allowed by Microsoft, and both of you could be banned from the forums.

  • Infinite loop during recovery of JE 4.1.10 database environment

    Hi there,
    We have a JE 4.1.10 database environment which is over 20GB in size. In order to improve performance we increased je.log.fileCacheSize so that we could cache all of the 10MB DB log file descriptors in memory and prevent JE from having to constantly open/close log files (the environment is only partially cached in memory and there are more than 2000 log files). Unfortunately, we failed to increase the file descriptor ulimit from the Linux default of 1024 and our application failed when the ulimit was reached.
    Since then we've reverted the settings and increased the JVM heap size so that we can fully cache everything in memory again. However, we are having problems recovering the DB environment. It looks like the environment recovery proceeds through the 10 recovery steps but gets stuck while loading the log file utilization meta-data:
    Stack trace #1:
    "main" prio=10 tid=0x000000005622f000 nid=0x5d94 runnable [0x00000000419fe000]
    java.lang.Thread.State: RUNNABLE
         at java.io.RandomAccessFile.seek(Native Method)
         at com.sleepycat.je.log.FileManager.readFromFileInternal(FileManager.java:1605)
         - locked <0x00002aaaf80dca88> (a com.sleepycat.je.log.FileManager$1)
         at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1560)
         at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1498)
         at com.sleepycat.je.log.FileSource.getBytes(FileSource.java:56)
         at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:861)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:790)
         at com.sleepycat.je.log.LogManager.getLogEntryAllowInvisibleAtRecovery(LogManager.java:751)
         at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1320)
         at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1367)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2499)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1545)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1692)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1617)
         at com.sleepycat.je.cleaner.UtilizationProfile.getFirstFSLN(UtilizationProfile.java:1262)
         at com.sleepycat.je.cleaner.UtilizationProfile.populateCache(UtilizationProfile.java:1200)
         at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:221)
         at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:549)
         - locked <0x00002aaaf8009868> (a com.sleepycat.je.dbi.EnvironmentImpl)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:237)
         at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:229)
         at com.sleepycat.je.Environment.<init>(Environment.java:211)
         at com.sleepycat.je.Environment.<init>(Environment.java:165)
    Stack trace #2:
    "main" prio=10 tid=0x000000005622f000 nid=0x5d94 runnable [0x00000000419fe000]
    java.lang.Thread.State: RUNNABLE
         at com.sleepycat.je.tree.IN.findEntry(IN.java:2086)
         at com.sleepycat.je.dbi.CursorImpl.searchAndPosition(CursorImpl.java:2194)
         at com.sleepycat.je.cleaner.UtilizationProfile.getFirstFSLN(UtilizationProfile.java:1242)
         at com.sleepycat.je.cleaner.UtilizationProfile.populateCache(UtilizationProfile.java:1200)
         at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:221)
         at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:549)
         - locked <0x00002aaaf8009868> (a com.sleepycat.je.dbi.EnvironmentImpl)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:237)
         at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:229)
         at com.sleepycat.je.Environment.<init>(Environment.java:211)
         at com.sleepycat.je.Environment.<init>(Environment.java:165)
    It looks like it is spinning in UtilizationProfile.getFirstFSLN(). Examining the syscalls using strace shows that JE is looping around the same 3 log files over and over again (note the lseek calls on fd 147):
    14719 lseek(147, 2284064, SEEK_SET) = 2284064
    14719 read(147, "\310\f9\311\v\0\330\331\"\0:\0\0\0\2{\231\302\36\27\0\340\24\0\0\33\201\230\0\340\24\0"..., 4096) = 4096
    14719 lseek(95, 3030708, SEEK_SET) = 3030708
    14719 read(95, "B\17\36\373\v\0\350<.\0B\0\0\0\2{3\323\36\27\0+\21\0\0\210\221\230\0+\21\0"..., 4096) = 2469
    14719 lseek(95, 3025672, SEEK_SET) = 3025672
    14719 read(95, "#(\2571\v\0b*.\0\202\0\0\0\2{\26\323\36\27\0\0160\0\0f\221\230\0Z\22\0"..., 4096) = 4096
    14719 lseek(74, 35084, SEEK_SET) = 35084
    14719 read(74, "\361\34s\272\v\0\300\210\0\0h\0\0\0\2{\240\323\36\27\0\346\36\0\0\215\225\230\0\350\22\0"..., 4096) = 4096
    14719 lseek(74, 90663, SEEK_SET) = 90663
    14719 read(74, "\2\16P\341\v\0#;\1\0B\0\0\0\2{\275\323\36\27\0\366/\0\0003H.\0$\3\0"..., 4096) = 4096
    14719 lseek(147, 2284064, SEEK_SET) = 2284064
    14719 read(147, "\310\f9\311\v\0\330\331\"\0:\0\0\0\2{\231\302\36\27\0\340\24\0\0\33\201\230\0\340\24\0"..., 4096) = 4096
    14719 lseek(95, 3030708, SEEK_SET) = 3030708
    14719 read(95, "B\17\36\373\v\0\350<.\0B\0\0\0\2{3\323\36\27\0+\21\0\0\210\221\230\0+\21\0"..., 4096) = 2469
    From the output of lsof we can determine the names of the log files associated with the above file descriptors, but I'm not sure if this is useful or not. We also ran DbVerifyLog against the entire DB log and it produced no results, so this indicates to me that the DB is not corrupt after all (although it could be that DbVerifyLog simply does not check the utilization profile meta data).
    I had a look at the change log for 4.1.17 but couldn't see any relevant bugs. It's probably worth noting that this DB does not use HA, duplicates, or secondary indexes.
    So I guess I have two questions:
    1) Is this a bug? Presumably the DB should recover even in the event of a human error induced failure like this?
    2) Can we recover the DB somehow, e.g. using DbDump/DbLoad?
    Let me know if you need any additional information.
    Cheers,
    Matt

    > 1) Is this a bug? Presumably the DB should recover even in the event of a human error induced failure like this?
    It is probably a bug, but it depends on exactly what happened during the original crash.
    > 2) Can we recover the DB somehow, e.g. using DbDump/DbLoad?
    If you can't open the environment, only DbDump -R (salvage mode) can be used, which requires identifying the active data by hand. Not a good option.
    > Let me know if you need any additional information.
    Do you happen to have the original stack trace, where you ran out of FDs?
    How long did the looping go on before you killed the process?
    --mark

  • Distributed Database + DB13 + SAPXPG + Standalone Gateway

    Hi All,
    I know this question has been asked and answered, but I still couldn't find a concrete solution. Any advice is most welcome.
    I've installed a distributed database on Solaris, and the CI on Solaris as well. Due to security reasons and policies, our customer stops us from using RSH.
    I've installed a standalone gateway on the distributed database host, with system name EGW and system number 20. My SAP and DB name is AEP.
    gwrd is started using egwadm. The TCP/IP connection SAPXPG_DBDEST_DBHOSTNAME works fine; however, when I kick off checkDB in DB13, I get the error below:
    "> Function: BtcXpgPanicCan't exec external program (No such file or directory)"
    Have I missed something? Do I need to add any environment variables to egwadm or edit the SAPXPG program?
    FYI, the br* and SAPXPG authorizations are all set correctly in /sapmnt/AEP/exe. The environment for AEPadm on the CI is identical to AEPadm on the DB server.
    Please advise.
    Thanks,
    Nicholas Chang.

    Hi All,
    Since there's no SAP Note explaining how to set up a standalone gateway in a UNIX environment and make brtools work in a distributed system, I'll summarize the resolution here:
    1) Install the GW on the database server
    2) Update the GW start profile with the values below:
    BGW = Gateway's SID.
    SETENV_00 = LD_LIBRARY_PATH=$(DIR_LIBRARY):%(LD_LIBRARY_PATH)
    SETENV_01 = SHLIB_PATH=$(DIR_LIBRARY):%(SHLIB_PATH)
    SETENV_02 = LIBPATH=$(DIR_LIBRARY):%(LIBPATH)
    SETENV_03 = ORACLE_SID=SID
    SETENV_04 = SAPDATA_HOME=/oracle/SID
    SETENV_05 = PATH=$PATH:/home/BGWadm:/usr/sap/SID/SYS/exe/run:/usr/bin:.:/usr/ccs/bin:/usr/ucb:/oracle/SID/112_64/bin
    3) update BGWadm env profile - sapenv_HOSTNAME.csh
    setenv SAPDATA_HOME /oracle/SID
    setenv ORACLE_SID   SID
    setenv PATH /home/bgw:/usr/sap/BGW/SYS/exe/run:/usr/bin:.:/usr/ccs/bin:/usr/ucb:/oracle/SID/112_64/bin
    setenv ORACLE_HOME /oracle/SID/112_64
    setenv USER bgwadm
    setenv dbs_ora_schema SAPSR3
    setenv NLS_LANG AMERICAN_AMERICA.UTF8
    setenv DB_SID SID
    setenv dbs_ora_tnsname SID
    setenv dbms_type ORA
    setenv LD_LIBRARY_PATH /usr/sap/SID/SYS/exe/run:/oracle/SID/112_64/lib
    setenv DIR_LIBRARY /usr/sap/SID/SYS/exe/run
    4) copy br* to gateway's /sapmnt/SID/exe/ and ensure 4775 is granted
    5) Create the special OPS$ setting to allow BGWADM to log into the database, using the script below. Log in as oraSID on the DB host and run it via SQL*Plus (connect / as sysdba):
    CREATE USER "OPS$BGWADM" DEFAULT TABLESPACE PSAPSR3USR
    TEMPORARY TABLESPACE PSAPTEMP IDENTIFIED EXTERNALLY;
    grant unlimited tablespace to "OPS$BGWADM" with admin option;
    grant connect to "OPS$BGWADM" with admin option;
    grant resource to "OPS$BGWADM" with admin option;
    grant sapdba to "OPS$BGWADM" with admin option;
    CREATE TABLE "OPS$BGWADM".SAPUSER
    (USERID VARCHAR2(256), PASSWD VARCHAR2(256));
    INSERT INTO "OPS$BGWADM".SAPUSER VALUES ('SAPSR3',
    'your_password');
    6) stop gateway and kill gateway sapstartsrv
    7) start gateway
    8) In the CI: SM59 -> TCP/IP -> SAPXPG_HOSTNAME_DB -> ensure the gateway host points to the DB server and sapgw is set to the DB gateway's system number
    9) Test checkDB in DB13.
    Cheers,
    Thanks,
    nicholas chang.

  • Distributed database

    Hi Everyone,
    We are looking at different options to connect the Forte application to a
    distributed database.
    Any help greatly appreciated
    Thanks & Regards
    Raju
    (Pothuraju Katta)
    [email protected]
    Phone: 847-969-3000
    Fax: 847-995-8287
    To unsubscribe, email '[email protected]' with
    'unsubscribe forte-users' as the body of the message.
    Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>

    Hi Haidar,
    Actually, SQL Server does not recognize "distributed database" as a concept.
    Based on your description, you want to access databases located in different locations. To achieve this, you can create linked servers or configure replication in SQL Server.
    Linked servers provide SQL Server the ability to access data from remote data sources. Using these mechanisms, you can issue queries, perform data modifications and execute remote procedures.
    Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and
    to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.
    There are also detailed blogs and similar thread for your reference.
    Distributed Databases in SQL Server 2005
    http://www.c-sharpcorner.com/UploadFile/john_charles/DistributeddatabasesinSQLServer11122007092249AM/DistributeddatabasesinSQLServer.aspx
    SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database
    http://blog.sqlauthority.com/2012/11/27/sql-server-shard-no-more-an-innovative-look-at-distributed-peer-to-peer-sql-database/
    How to implement distributed database in SQL Server 2008 R2?
    http://stackoverflow.com/questions/7926773/how-to-implement-distributed-database-in-sql-server-2008-r2
    Thanks,
    Lydia Zhang

  • Distributed Database with IPv6

    Distributed databases were not that familiar a few years ago. Will IPv6 enhance distributed database systems? A DDBMS has a wide range of useful features and notable drawbacks too; however, today's businesses need distributed databases. What is the contribution of DDBMS to cloud computing?

    user4422434 wrote:
    > Distributed database was not that familiar few years before...will IPv6 enhance the distributed database systems ?
    No. Distributed databases have been common in the last century too, running across IPv4 and Novell IPX networks.
    Do not confuse the communication transport layer with the ability of the database to support distributed integration. Even NetBEUI can be used (assuming that protocol routing is not needed). Distributed databases are not about protocol dependency.

  • RE: Distributed database [Ref:C776312]

    By "distributed database", do you mean that the database lives on several
    servers, or that the database lives on one server which is distributed from the
    application ?
    Steve Elvin
    Systems Developer
    Frontline Ltd.
    UK
    -----Original Message-----
    From: INTERNET [email protected]
    Sent: Friday, October 23, 1998 12:13 AM
    To: Steve Elvin; X400
    p=NET;a=CWMAIL;c=GB;DDA:RFC-822=forte-users(a)sageit.com;
    Subject: Distributed database [Ref:C776312]
    Hi Everyone,
    We are looking at different options to connect the forte application to
    distributed database.
    Any help greatly appreciated
    Thanks & Regards
    Raju
    (Pothuraju Katta)
    [email protected]
    Phone: 847-969-3000
    Fax: 847-995-8287


  • Distributed clustered environment

    Hi,
    Distributed clustered environment:
    I am going to configure a clustered environment for BO 3.1. I have some doubts, since I have little experience in clustering.
    Configuration details:
    1) 2 CMSs on 2 different Unix boxes
    2) Report servers will be installed on two different machines
    3) The FRS will be mounted on two machines.
    Doubts:
    1) Do I need to configure the connectivity and related things in profiles on the machines where the report servers are installed? If yes, it would be helpful if you provide detailed configuration information.
    2) The FRS can't be clustered, hence putting it in one location and mounting it on two machines. Are there any challenges? I have never done this kind of configuration and I am in the planning phase, so please help me with the required information.
    Regards,
    Neo A
    Edited by: Neo  A on Oct 13, 2010 1:37 PM

    Hi,
    Neo A wrote:
    > 1) Do I need to configure the connectivity and related things in profiles on the machines where the report servers are installed? If yes, it would be helpful if you provide detailed configuration information.
    Yes, you need to create the database connectivity on both servers. Please clarify: what do you mean by profiles?
    Neo A wrote:
    > 2) The FRS can't be clustered, hence putting it in one location and mounting it on two machines. Are there any challenges? I have never done this kind of configuration and I am in the planning phase, so please help me with the required information.
    The FRSs work as active and passive servers; they don't do load balancing. The one you start first will be the active one, and the passive one will become active only when the first one is down.
    To configure the FRS, you need to have the file repository shared and accessible by both servers, and provide this file repository path to both FRSs on the server properties page in the CMC.
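    The active/passive behaviour described above can be sketched in a few lines of plain Java (a toy illustration with invented names, not BO code): the first server in start order that is up is the active one, and a later server takes over only when every earlier one is down.

```java
import java.util.List;

// Toy active/passive failover: servers are listed in start order, and
// the first one that is up is the active server; later ones stay
// passive until every earlier server is down.
public class FailoverSketch {
    public static String activeServer(List<String> startOrder,
                                      List<Boolean> isUp) {
        for (int i = 0; i < startOrder.size(); i++) {
            if (isUp.get(i)) {
                return startOrder.get(i); // first healthy server wins
            }
        }
        return null; // no FRS available at all
    }
}
```

    So with both FRS instances up, the one started first stays active; only when it goes down does the second become active.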
    Regards,
    Ramu.

  • To Design Distributed Database?

    What architecture should I follow to design a distributed database?
    Using Forms at the front end, how do I design the schema of the client and server databases?

    I think you can use the Java Transaction API (JTA), which is independent of the resources and will give you more flexibility for handling transactions.
    This link may be helpful:
    http://www.onjava.com/pub/a/onjava/2001/05/23/j2ee.html

  • Slapd database environment corrupt; wrong log files removed

    When our OSX 10.4.10 server boots up, slapd crashes out with a fatal error, followed by an unsuccessful attempt to recover the database. This is from the log:
    Sep 22 22:41:49 xyz123 slapd[428]: @(#) $OpenLDAP: slapd 2.2.19 $
    Sep 22 22:41:49 xyz123 slapd[428]: bdb_back_initialize: Sleepycat Software: Berkeley DB 4.2.52: (December 3, 2003)
    Sep 22 22:41:49 xyz123 slapd[428]: bdb_db_init: Initializing BDB database
    Sep 22 22:41:49 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->log_flush: LSN of 3/2806706 past current end-of-log of 1/2458
    Sep 22 22:41:49 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): Database environment corrupt; the wrong log files may have been removed or incompatible database files imported from another environment
    Sep 22 22:41:49 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): gidNumber.bdb: unable to flush page: 0
    Sep 22 22:41:49 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): txn_checkpoint: failed to flush the buffer cache Invalid argument
    Sep 22 22:41:49 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): PANIC: Invalid argument
    Sep 22 22:41:50 xyz123 slapd[428]: bdb_db_open: dbenv_open failed: DB_RUNRECOVERY: Fatal error, run database recovery (-30978)
    Sep 22 22:41:50 xyz123 slapd[428]: backend_startup: bi_db_open failed! (-30978)
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->lock_id interface requires an environment configured for the locking subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->lock_id interface requires an environment configured for the locking subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->lock_id interface requires an environment configured for the locking subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->lock_id interface requires an environment configured for the locking subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->lock_id_free interface requires an environment configured for the locking subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb(dc=xyz123,dc=mydomain,dc=com): txn_checkpoint interface requires an environment configured for the transaction subsystem
    Sep 22 22:41:50 xyz123 slapd[428]: bdb_db_destroy: txn_checkpoint failed: Invalid argument (22)
    Sep 22 22:41:50 xyz123 slapd[428]: slapd stopped.
    We are able to get it running by deleting the log file at /var/db/openldap/openldap-data/log.0000000001 and then rebooting. Thereafter the slapd.log complains:
    Sep 23 20:30:30 xyz123 slapd[58]: bdb(dc=xyz123,dc=mydomain,dc=com): DB_ENV->log_flush: LSN of 1/467990 past current end-of-log of 1/226654
    Sep 23 20:30:30 xyz123 slapd[58]: bdb(dc=xyz123,dc=mydomain,dc=com): Database environment corrupt; the wrong log files may have been removed or incompatible database files imported from another environment
    Sep 23 20:30:30 xyz123 slapd[58]: bdb(dc=xyz123,dc=mydomain,dc=com): sn.bdb: unable to flush page: 0
    We've tried running db_recover -c but that hasn't worked, or at least, we perhaps didn't do it right. We don't have a backup database that predates the existence of this problem.
    I'd appreciate it if anyone could help, or could point us to a resource explaining what to do to get our slapd working right.

    Thank you for giving us a test program. This helped tremendously to fully understand what you are doing. In 5.3 we fixed a bug dealing with the way log files are archived in an HA environment. What you are running into is a consequence of that bug fix.
    In the test program you are using DB_INIT_REP. This is the flag that says you want an HA environment. With HA, there is a master and some number of read-only clients. By default we treat the initiating database as the master. This is what is happening in your case. In an HA (replicated) environment, we cannot archive log files until we can be assured that the clients have applied the contents of those log files.
    Our belief is that you are not really running in an HA environment and you do not need the DB_INIT_REP flag. In our initial testing, where we said it worked for us, this was because we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
    Recommendation: Please remove the use of the DB_INIT_REP flag or properly set up an HA environment (details in our docs).
    thanks
    mike

  • Distributed Portal Environment Query

    We currently have a centralised G6 portal environment in which we have a single Portal server and another hosting Collab.
    We have a requirement which might lead to us having to provide another Portal server and Collab server at one of our remote sites, albeit in another domain, which as a result will extend the reach of our current portal. The Portal and Collab databases would remain at the central location, so we assume these remote 'services' would need access back to them. Currently there is also a firewall on the WAN between these two sites.
    We have tried to find other articles on distributed Portal configurations to see if this is possible but have not really come up with anything so I am hoping other users in this forum might help.
    The main points we are trying to understand initially are:
    1) Firstly, can you distribute Portal and/or Collab servers in this way?
    2) What ports would the firewall need to allow for the remote Portal server to talk back to the central SQL Server database?
    3) Can you distribute Collab Server across 2 servers in this way and if so what do they need to talk back to the central database. The requirement is to be able to share the projects across the 2 collab servers so that users connecting to the central server can collaborate with those connecting to the remote server.
    4) We are assuming that authentication from the remote site into the central AD domain will be OK as we can host the AD AWS centrally.
    Any input appreciated,
    Ross

    Geoff,
    Thanks for the prompt reply and I can understand why my questions sound slightly confusing as the scenario is not easy to explain in this medium.
    What we currently have is 2 portal 'environments', one running internally and another running on another network but which is linked to our own. What we are exploring is whether we can consolidate this into a single environment.
    We need to maintain at least one portal server in the other network, which is effectively a DMZ as you imply, for non technical reasons primarily. But our understanding is that this will need to talk back to the portal database which is on our internal network behind a firewall hence the query about firewall ports.
    This 'remote' portal also currently hosts a Collab server, so we were wondering if there was benefit in keeping a Collab server 'local' to the remote users to avoid network traffic. From your reply I conclude this might not be sensible, or even possible, so perhaps we need to migrate these Collab objects back into the internal portal's Collab environment and let the remote site's users access it from there.
    If this is the case then for improved availability we would probably want to consider Clustering Collab so are you aware of any reference docs on this side of things?
    Thanks,
    Ross

  • Distributed database or replicated database

    I have a project which needs to be done in a distributed environment; here is the project description. I would like to do this using Java, and I think Java can handle it. My question is how to start. This will be a desktop program.
    There are two servers, A and B, and both have databases. A client C1 submits some data to server A. Server A will communicate with server B if there is a request from any user. If not, server A's database is updated with the new data, and at the same time that data has to be updated in server B's database. In other words, the databases stay identical on both servers.
    As another example, if client C2 deletes any data on server B, it should be deleted on both servers.
    How can I proceed? Should I use sockets, RMI, or something else? Do I need to implement locking or transactions? I want to use a MySQL database.
    Thanks.
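    The question above asks whether to reach for sockets or RMI. Purely as an illustration (every class, method, and port below is made up, and in-memory maps stand in for the MySQL databases), here is a minimal single-JVM RMI sketch of two servers that mirror each other's writes; a propagate flag keeps an update from bouncing back and forth between the peers:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical interface: each server exposes writes that it can
// forward to its peer, so a change made on A also appears on B.
interface ReplicaStore extends Remote {
    void put(String key, String value, boolean propagate) throws RemoteException;
    void delete(String key, boolean propagate) throws RemoteException;
    String get(String key) throws RemoteException;
}

class ReplicaServer extends UnicastRemoteObject implements ReplicaStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    private ReplicaStore peer; // the other server

    ReplicaServer() throws RemoteException { super(); }

    void setPeer(ReplicaStore peer) { this.peer = peer; }

    public void put(String key, String value, boolean propagate) throws RemoteException {
        data.put(key, value);            // apply locally (real code would hit MySQL)
        if (propagate && peer != null) {
            peer.put(key, value, false); // mirror on the peer, without bouncing back
        }
    }

    public void delete(String key, boolean propagate) throws RemoteException {
        data.remove(key);
        if (propagate && peer != null) {
            peer.delete(key, false);
        }
    }

    public String get(String key) throws RemoteException {
        return data.get(key);
    }
}

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        ReplicaServer a = new ReplicaServer();
        ReplicaServer b = new ReplicaServer();
        a.setPeer(b);
        b.setPeer(a);
        registry.rebind("serverA", a);
        registry.rebind("serverB", b);

        // Client C1 writes through server A; the change shows up on B too.
        ReplicaStore serverA = (ReplicaStore) registry.lookup("serverA");
        ReplicaStore serverB = (ReplicaStore) registry.lookup("serverB");
        serverA.put("order-1", "pending", true);
        System.out.println("B sees: " + serverB.get("order-1"));

        System.exit(0); // exported RMI objects would otherwise keep the JVM alive
    }
}
```

    In a real deployment each server would run in its own JVM, apply the write to its local MySQL database inside a transaction, and only acknowledge the client once the peer has confirmed, which is where the locking and transaction questions come in.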

    I am working through the simple JMS tutorial, but instead of sending a text message I am trying to send an object. I get the following error from appclient -client SimpleProducer.java:
    Exception occurred: com.sun.messaging.jms.MessageFormatException: [C4014]: Serialize message failed. - cause: java.io.NotSerializableException: SimpleProducer
    And appclient -client SimpleSynchConsumer.java receives a null packet.
    SimpleProducer.java
    import javax.jms.*;
    import javax.naming.*;

    public class SimpleProducer {

        /**
         * Main method.
         *
         * @param args     the destination used by the example
         *                 and, optionally, the number of
         *                 messages to send
         */
        public SimpleProducer(String destName, int NUM_MSGS) {
            /*
             * Create a JNDI API InitialContext object if none exists
             * yet.
             */
            Context jndiContext = null;
            try {
                jndiContext = new InitialContext();
            } catch (NamingException e) {
                System.out.println("Could not create JNDI API context: " +
                    e.toString());
                System.exit(1);
            }

            /*
             * Look up connection factory and destination.  If either
             * does not exist, exit.  If you look up a
             * TopicConnectionFactory or a QueueConnectionFactory,
             * program behavior is the same.
             */
            ConnectionFactory connectionFactory = null;
            Destination dest = null;
            try {
                connectionFactory = (ConnectionFactory) jndiContext.lookup(
                        "jms/ConnectionFactory");
                dest = (Destination) jndiContext.lookup(destName);
            } catch (Exception e) {
                System.out.println("JNDI API lookup failed: " + e.toString());
                e.printStackTrace();
                System.exit(1);
            }

            /*
             * Create connection.
             * Create session from connection; false means session is
             * not transacted.
             * Create producer and text message.
             * Send messages, varying text slightly.
             * Send end-of-messages message.
             * Finally, close connection.
             */
            Connection connection = null;
            MessageProducer producer = null;
            try {
                connection = connectionFactory.createConnection();
                Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                producer = session.createProducer(dest);
                Packet pack = new Packet();
                // TextMessage message = session.createTextMessage();
                ObjectMessage message = session.createObjectMessage(pack);
                producer.send(message);
                System.out.println("Message Sent");
                producer.send(session.createMessage());
            } catch (JMSException e) {
                System.out.println("Exception occurred: " + e.toString());
            } finally {
                if (connection != null) {
                    try {
                        connection.close();
                    } catch (JMSException e) {
                    }
                }
            }
        }

        public static void main(String[] args) {
            final int NUM_MSGS;
            if ((args.length < 1) || (args.length > 2)) {
                System.out.println("Program takes one or two arguments: " +
                    "<dest_name> [<number-of-messages>]");
                System.exit(1);
            }
            String destName = new String(args[0]);
            System.out.println("Destination name is " + destName);
            if (args.length == 2) {
                NUM_MSGS = (new Integer(args[1])).intValue();
            } else {
                NUM_MSGS = 1;
            }
            SimpleProducer sp = new SimpleProducer(destName, NUM_MSGS);
        }

        public class Packet implements java.io.Serializable {
            int cur_Num;
            public Packet() {
                cur_Num = 5;
            }
        }
    }
    SimpleSynchConsumer.java
    import javax.jms.*;
    import javax.naming.*;

    public class SimpleSynchConsumer {

        /**
         * Main method.
         *
         * @param args     the destination name and type used by the
         *                 example
         */
        public SimpleSynchConsumer(String destName) {
            Context jndiContext = null;
            ConnectionFactory connectionFactory = null;
            Connection connection = null;
            Session session = null;
            Destination dest = null;
            MessageConsumer consumer = null;
            // TextMessage message = null;
            ObjectMessage message = null;

            /*
             * Create a JNDI API InitialContext object if none exists
             * yet.
             */
            try {
                jndiContext = new InitialContext();
            } catch (NamingException e) {
                System.out.println("Could not create JNDI API context: " +
                    e.toString());
                System.exit(1);
            }

            /*
             * Look up connection factory and destination.  If either
             * does not exist, exit.  If you look up a
             * TopicConnectionFactory or a QueueConnectionFactory,
             * program behavior is the same.
             */
            try {
                /* connectionFactory = (ConnectionFactory) jndiContext.lookup(
                        "jms/JupiterConnectionFactory"); */
                connectionFactory = (ConnectionFactory) jndiContext.lookup(
                        "jms/ConnectionFactory");
                dest = (Destination) jndiContext.lookup(destName);
            } catch (Exception e) {
                System.out.println("JNDI API lookup failed: " + e.toString());
                System.exit(1);
            }

            try {
                connection = connectionFactory.createConnection();
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                consumer = session.createConsumer(dest);
                connection.start();
                Message m = consumer.receive(1);
                System.out.println("Packet received");
                Packet msg = new Packet();
                if (m != null) {
                    System.out.println("inside loop");
                    if (m instanceof ObjectMessage) {
                        System.out.println("inside loop");
                        message = (ObjectMessage) m;
                        msg = (Packet) message.getObject();
                        System.out.println("Reading message: " + msg.cur_Num);
                    }
                }
            } catch (JMSException e) {
                System.out.println("Exception occurred: " + e.toString());
            } finally {
                if (connection != null) {
                    try {
                        connection.close();
                    } catch (JMSException e) {
                    }
                }
            }
        }

        public static void main(String[] args) {
            String destName = null;
            if (args.length != 1) {
                System.out.println("Program takes one argument: <dest_name>");
                System.exit(1);
            }
            destName = new String(args[0]);
            System.out.println("Destination name is " + destName);
            SimpleSynchConsumer sc = new SimpleSynchConsumer(destName);
        }

        public class Packet implements java.io.Serializable {
            int cur_Num;
            public Packet() {
                cur_Num = 5;
            }
        }
    }
    Thanks.
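    The java.io.NotSerializableException: SimpleProducer reported above usually means the object written to the ObjectMessage carries a hidden reference to a non-serializable class. That happens, for example, when Packet is declared as a non-static inner class of SimpleProducer, since every instance of a non-static inner class holds a reference to its enclosing SimpleProducer, and Java serialization tries to write that too. A small stand-alone sketch (a hypothetical file, not part of the tutorial) showing that a top-level Packet survives the same serialization round-trip JMS performs for an ObjectMessage body:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Top-level Packet: no implicit reference to any enclosing instance,
// so nothing non-serializable is dragged along.
class Packet implements Serializable {
    int cur_Num;
    Packet() {
        cur_Num = 5;
    }
}

public class PacketSerializationCheck {
    public static void main(String[] args) throws Exception {
        // Round-trip the Packet through Java serialization, as JMS does
        // when building and reading an ObjectMessage.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new Packet());
        oos.flush();

        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        Packet copy = (Packet) in.readObject();
        System.out.println("cur_Num after round-trip: " + copy.cur_Num);
    }
}
```

    Declaring the nested class as public static class Packet inside SimpleProducer would have the same effect, because a static nested class also carries no reference to the enclosing instance.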

  • Distributed Database Design Help

    I want to computerise a department that has several branches. Each branch deals with customers from its region, processing applications and issuing orders. Processing an application goes through several levels: from the branch office to the regional office, later to the head office, and then back to the branch office for issuing the order. The requirement is that each branch should keep working irrespective of network conditions, so I want to make each office autonomous, storing the data related to the branch on its own branch server. One more requirement is a web system where customers can lodge an application (by selecting the branch office) and check their application status, so the databases of all branches must be available to the web system no matter where each branch's data is stored.
    Give me a brief description of a possible distributed DB design that would be sufficient for implementing this system.
    Thanks in Advance
    Sundar.

    Think of local databases, replication, maybe advanced queuing. BPEL/SOA (for adding some buzzwords). Workflow (management) could also be part of the equation. Brief enough?
    Anything else would require an in-depth analysis, and there are people available for hire to pick up this job.
    C.

  • Huge Database environment support.

    Hi,
    I got an interview call, and the client has a requirement of administering 1,000 databases on 600 servers.
    Since I have never worked in such a big environment, I am wondering how a team can support it.
    Please advise me.

    As I said, everything is automated at the server level. We have a Perl engine that performs all these tasks and sends an alert to the store admin in case of any problem, and he then gets us involved to resolve it. The databases are accessed by the same app, but the size and the load vary from store to store. I would say the key here is the Perl engine we developed to do all these DBA tasks. I know people will ask why we re-wrote code that is already available from Oracle, and my answer is that it all depends on the environment. For example, I could use DBConsole to monitor or schedule jobs, but as I mentioned we don't manage these databases directly, and the store admins don't even have the rights to launch SQL*Plus, so the databases have to be self-managed and intelligent enough to take care of themselves; that is why we built this engine. We just give one DVD to the customer, and that DVD installs Oracle, creates ASM, creates the database, runs the conversion, and deploys the engine without a single prompt in between.
    In short, it all depends on what kind of environment it is and what the situation or requirements are, and then there are different ways to fulfill them.
    Best of Luck
    Daljit Singh
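    The "self-managed" engine described above, many named checks run automatically with alerts raised instead of a DBA logging in, can be sketched in a few lines. This is a hypothetical miniature in Java (the original is described as a Perl engine, and every name below is invented); real checks would query the instance rather than return stubbed values:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical miniature of a self-managing engine: run named health
// checks and queue an alert for the store admin whenever one fails.
public class HealthCheckEngine {
    private final Map<String, Supplier<Boolean>> checks = new LinkedHashMap<>();
    private final List<String> alerts = new ArrayList<>();

    void register(String name, Supplier<Boolean> check) {
        checks.put(name, check);
    }

    void runOnce() {
        for (Map.Entry<String, Supplier<Boolean>> e : checks.entrySet()) {
            boolean ok;
            try {
                ok = e.getValue().get();
            } catch (RuntimeException ex) {
                ok = false; // a crashing check counts as a failure
            }
            if (!ok) {
                alerts.add("ALERT: check failed: " + e.getKey());
            }
        }
    }

    public static void main(String[] args) {
        HealthCheckEngine engine = new HealthCheckEngine();
        // Real checks would inspect tablespace usage, the listener, etc.;
        // these are stubs for illustration.
        engine.register("tablespace-free", () -> true);
        engine.register("listener-up", () -> false);
        engine.runOnce();
        engine.alerts.forEach(System.out::println);
    }
}
```

    In practice such an engine would run on a schedule and deliver the alert by mail or pager, so that the customer site never needs direct DBA access.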
