Huge Database environment support.

Hi,
I got an interview call, and the client has a requirement to administer 1000 databases on 600 servers.
Since I have never worked in such a big environment, I am just wondering how a team can support one that size.
Please advise me.

As I said, everything is automated at the server level. We have a Perl engine that performs all of these DBA tasks and sends an alert to the store admin whenever there is a problem; the admin then gets us involved to resolve it. The database is accessed by the same application everywhere, but the size and the load vary from store to store. The key here is the Perl engine we developed to do these DBA tasks. People will ask why we rewrote code that is already available from Oracle, and my answer is that it all depends on the environment. I could use DBConsole to monitor or schedule jobs, but as I mentioned, we do not manage these databases directly, and the store admins do not even have rights to launch sqlplus, so the databases have to be self-managing and intelligent enough to take care of themselves; that is why we built this engine. We give the customer a single DVD, and that DVD installs Oracle, creates ASM, creates the database, runs the conversion, and deploys the engine without a single prompt in between.
In short, it all depends on the kind of environment and what the situation or requirement is; then there are different ways to fulfill it.
Best of Luck
Daljit Singh

Similar Messages

  • Huge database Growth

    Hello Guys,
    We have been observing very rapid database growth in our PRD environment.
    We have to add at least a 25 GB datafile every week to tablespace PSAPSR3.
    I had a look at DB02 for TOP SIZES and TOP GROWTH.
    Owner     Name     Partition     Type     Tablespace     Size(MB)     Chg.Size/day     #Extents     #Blocks     Next Extent(MB)
    SAPSR3     LIPS          TABLE     PSAPSR3     21367.000     364.433     520     2734976     2.500
    SAPSR3     BSIS          TABLE     PSAPSR3     16460.000     277.667     442     2106880     10.000
    SAPSR3     CE11000          TABLE     PSAPSR3     16360.000     262.500     441     2094080     10.000
    SAPSR3     VBFA          TABLE     PSAPSR3     15402.000     265.133     425     1971456     10.000
    SAPSR3     GLPCA          TABLE     PSAPSR3     15171.000     259.867     425     1941888     10.000
    SAPSR3     FAGLFLEXA          TABLE     PSAPSR3     13738.000     232.667     399     1758464     10.000
    SAPSR3     ACCTIT          TABLE     PSAPSR3     12788.000     215.067     384     1636864     10.000
    SAPSR3     ARFCSDATA          TABLE     PSAPSR3     12350.000     410.400     380     1580800     2.500
    SAPSR3     RFBLG          TABLE     PSAPSR3     11433.000     194.667     363     1463424     2.500
    SAPSR3     CE41000_ACCT          TABLE     PSAPSR3     11177.000     184.000     360     1430656     10.000
    SAPSR3     VBAP          TABLE     PSAPSR3     9663.000     156.433     336     1236864     10.000
    SAPSR3     VBRP          TABLE     PSAPSR3     8308.000     140.800     313     1063424     2.500
    SAPSR3     FAGL_SPLINFO          TABLE     PSAPSR3     7960.000     135.200     308     1018880     20.000
    SAPSR3     MSEG          TABLE     PSAPSR3     7936.000     134.400     307     1015808     10.000
    SAPSR3     BSIS~0          INDEX     PSAPSR3     7488.000     132.267     300     958464     2.500
    SAPSR3     VBFA~0          INDEX     PSAPSR3     7304.000     123.533     299     934912     2.500
    SAPSR3     DBTABLOG          TABLE     PSAPSR3     7303.000     83.200     300     934784     10.000
    SAPSR3     COEP          TABLE     PSAPSR3     6991.000     119.467     293     894848     10.000
    SAPSR3     CE41000          TABLE     PSAPSR3     6144.000     91.733     279     786432     10.000
    SAPSR3     FAGLFLEXA~3          INDEX     PSAPSR3     6028.000     104.533     278     771584     2.500
    SAPSR3     FAGL_SPLINFO_VAL~0          INDEX     PSAPSR3     5702.000     98.133     273     729856     2.500
    SAPSR3     FAGLFLEXA~0          INDEX     PSAPSR3     5568.000     98.133     270     712704     2.500
    We have around 12,000 sales orders daily.
    I want to know why the database is growing at such an alarming pace, or at least find the transactions that are causing the huge volume of inserts and updates.
    Regards
    Abhishek

    Hi Abhishek,
    In addition to the above, a very useful area to work on periodically is Data Volume Management.
    SAP has released version 6.3 of this guide.
    Click on this link
    https://websmp101.sap-ag.de/~sapidb/011000358700005044382000E
    This guide covers almost all tables that show considerable data growth and the preventive actions that can be taken to keep the total database size under control. Basically, the guide covers the areas of prevention, aggregation, deletion, and archiving.
    Combining the guide's recommendations with good space management activities, such as table reorgs, will definitely keep the system away from performance issues caused by database size.
    This is an on-going project at some customer sites.
    Br,
    Venky
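    To answer Abhishek's question about what is actually driving the growth, a query against the AWR segment statistics can show which segments grew the most between two snapshots. This is only a sketch: it assumes the Diagnostics Pack is licensed (the DBA_HIST_* views require it), and :begin_snap and :end_snap are placeholder bind variables for the snapshot range of interest.
    -- top 20 segments by space growth between two AWR snapshots (sketch, Diagnostics Pack required)
    SELECT * FROM (
      SELECT o.owner, o.object_name, o.object_type,
             SUM(s.space_used_delta) / 1024 / 1024 AS growth_mb
      FROM   dba_hist_seg_stat s, dba_objects o
      WHERE  o.object_id = s.obj#
      AND    s.snap_id BETWEEN :begin_snap AND :end_snap
      GROUP  BY o.owner, o.object_name, o.object_type
      ORDER  BY growth_mb DESC
    )
    WHERE ROWNUM <= 20;
    The same view also exposes db_block_changes_delta, which points at the segments receiving the most inserts and updates.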

  • Distributed database environment

    In one of my JHeadstart projects we have a distributed database environment, that is, the application's data is located in different Oracle databases.
    How should this situation be handled within one project?
    Does JHeadstart support multiple database connections, or does this need to be handled through database links?

    Ting Rung,
    There is at least one thing that you might run into, and that is the issue of transaction management. An application module represents a 'data model' for a task that is accomplished within one transaction. You can nest application modules, but even then the top-level application module provides the transaction context.
    This means that all data that needs to be persisted in one transaction must be represented by the same (hierarchy of) application module(s) and therefore must come from the same data source (database). In other words: when the customers are in one database and the orders in another, you cannot persist them in one and the same transaction.
    There are more issues related to distributed applications. I have no experience with that, so I cannot comment on it. I do know that many distributed enterprise applications make use of EJB technology. BC4J supports entity beans (comparable to entity and view objects), and it also has EJB application modules that behave like session beans. JHeadstart does not support using EJBs, however.
    For further information about the support for EJB technology, I would like to refer you to the online help of JDeveloper.
    Jan Kettenis
    JHeadstart Team

  • Converting single instance database(11gr2) supporting EBS 12.1.3 to 2 node RAC on RHEL.

    Hi, we are in the process of converting a single-instance database (11gR2) supporting EBS 12.1.3 to a 2-node RAC on RHEL. Which version of RHEL is better? If it is 6.x, then which update? The Oracle documentation only says RHEL 6.
    thanks in advance.

    Hi,
    Yes, you can use any version, but I recommend using 6.2.
    Also refer to my post on RAC migration in EBS:
    Apps DBA Workshop: Using 11gR2 RAC with Oracle Applications R12.1.1
    Hope this helps
    thanks,
    X A H E E R

  • Problem with comparing different tables in huge database

    Hi!
    I'm experiencing a problem where I want to find out which records in one table (SHOPPINGCART) do not exist in another table (SHOPPINGCART_ITEM).
    SHOPPINGCART
    ID-----------------CUSTOMER
    1------------------Adam
    2------------------Tor
    3------------------Alexis
    4------------------Fred
    SHOPPINGCART_ITEM
    ID----------------ITEM
    1-----------------Corn
    1-----------------Wheat
    3-----------------Syrup
    4-----------------Corn
    4-----------------Flakes
    Now, I want to select all the records in SHOPPINGCART which do not have any occurrence in SHOPPINGCART_ITEM.
    I do this using the following statement:
    SELECT * FROM SHOPPINGCART WHERE ID NOT IN(
    SELECT ID FROM SHOPPINGCART_ITEM);
    Giving the following table:
    SHOPPINGCART
    ID-----------------CUSTOMER
    2------------------Tor
    The problem is that, since I have a huge database, this takes too long. So, is there any smart way to do the same thing much more efficiently?
    Thanks!
    /Björn

    Did you try outer joins? Example:
    TEST@db102 > select empno, ename from emp order by ename;
         EMPNO ENAME
          7876 ADAMS
          7499 ALLEN
          7698 BLAKE
          7782 CLARK
          7902 FORD
          7900 JAMES
          7566 JONES
          7839 KING
          7654 MARTIN
          7934 MILLER
          7788 SCOTT
          7369 SMITH
          7844 TURNER
          7521 WARD
    14 rows selected.
    TEST@db102 > select empno,ename from emp1 order by ename;
         EMPNO ENAME
          7876 ADAMS
          7499 ALLEN
          7698 BLAKE
          7902 FORD
          7839 KING
          7654 MARTIN
          7934 MILLER
          7788 SCOTT
          7369 SMITH
          7844 TURNER
          7521 WARD
    11 rows selected.
    TEST@db102 > select a.empno, a.ename from emp a, emp1 b
      2  where a.empno = b.empno(+)
      3  and b.empno is null
      4  order by a.ename;
         EMPNO ENAME
          7782 CLARK
          7900 JAMES
          7566 JONES
    TEST@db102 >
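    A hedged alternative for the original SHOPPINGCART tables is a NOT EXISTS anti-join; on reasonably recent Oracle versions the optimizer can turn this (like the outer-join form above) into a hash anti-join, which scales far better than the original NOT IN subquery. Sketch only, using the table names from the question:
    SELECT s.*
    FROM   shoppingcart s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   shoppingcart_item i
                       WHERE  i.id = s.id);
    An index on SHOPPINGCART_ITEM(ID) also helps if a nested-loop plan is chosen.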

  • Database control support for "IN" clause

    Does anyone know if the database control supports an 'IN' clause construct, e.g.,
    SELECT name FROM mytable WHERE id IN ( {values} )
    where values would be passed in as a variable length array of Strings or Integers?
    Thanks,
    DC

    I forgot to paste in the link to the documentation. Here it is:
    http://edocs.bea.com/workshop/docs81/doc/en/workshop/guide/controls/database/conParameterSubstitutionInJwsSqlStatements.html
    "John Rohrlich" <[email protected]> wrote in message
    news:[email protected]..
    Dave,
    I think you will need to build the SQL and pass it to the control method.
    For example:
    String matchMe = "('Bill Walton', 'Fred Williams')";
    Customer[] customers = weblogicCustomer.getCustomersIn(matchMe);
    /**
     * @jc:sql statement="SELECT name from weblogic.customer where name in {sql: matchMe}"
     */
    public Customer[] getCustomersIn(String matchMe);
    Here is a link to a doc explaining parameter substitution for jc:sql statements.
    - john
    "Dave Chappelle" <[email protected]> wrote in message
    news:[email protected]..
    Does anyone know if the database control supports an 'IN' clauseconstruct, e.g.,
    SELECT name FROM mytable WHERE id IN ( {values} )
    where values would be passed in as a variable length array of Strings orIntegers?
    Thanks,
    DC

  • Reverse Engineering huge database

    Hi,
    I am trying to reverse engineer (using Oracle Designer) a huge database (more than 1,000 tables, plus forms and reports) developed using Forms & Reports 9i.
    Being new to Oracle Designer, I was able (through self-study on the internet) to reverse engineer a small database and create only ER diagrams, but I really have no idea how to handle a huge database.
    So please, if anyone has experience reverse engineering an existing database, kindly answer the questions below:
    1. How do I import the entire database into the repository?
    2. What is the best way to import the huge database: importing the entire database at once, or in parts?
    3. We can reverse engineer and create an ER diagram from an existing database, but I also need to create a process diagram and a dataflow diagram. Is there any way to create these two diagrams automatically from the existing database, just like the ER diagram?
    4. How do I import all forms and reports into the repository at once? How do I create process and dataflow diagrams from forms and reports?
    Thanking You
    regards,
    Lokesh

    My advice is: DON'T. Don't try to reverse engineer the whole application in one fell swoop.
    Instead decide what parts of the application do you need to work with RIGHT NOW, and reverse engineer those parts. There are no applications that use ALL of the tables in a single form or related set of forms. In fact, I'll bet that 90% of your application uses only 10% of the tables. You may even decide that you NEVER need to reverse engineer the other 90%.
    In deciding what to bring into Designer, simply ask yourself, "Why am I doing this?". If you don't plan to re-design, re-write, do major maintenance, or otherwise change the table, form, or report, skip it for now. If you need it to help understand the parts of the database that you DO plan to change, then MAYBE you want to reverse engineer it soon. Otherwise, leave it on the "to-be-done" list, and don't fret too much if you don't get to it. Believe me, if you really need it, you WILL get to it.

  • What is a Database Environment?

    I am going through Relational Database Design by JLM. I have come across terms like database, data model, DBMS, etc., which I am able to understand. But I get confused when the author uses Database and Database Environment with supposedly different meanings (as per my understanding).
    What makes up a Database Environment? I understand that a data model defines the relationships of the data, whereas a DBMS is data-model specific and translates data manipulation requests and retrieves data from physical storage device(s). The author defined Database as the data and its relationships.
    Where does Environment come into the picture?
    This is where I got confused: "underlying relationships in a database environment are independent of the data model and therefore also independent of the DBMS you are using".
    BTW, am I reading the right book to start with, considering I am just beginning?

    Both Strike Developer and EngSoonCheah: it is quite interesting to see that whenever Strike Developer asks a question, nobody but EngSoonCheah gets the answer marked. I don't think this is mere coincidence.
    I can see all the threads created by StrikeDeveloper, and the only marked answers are by EngSoonCheah. Please be aware that favoritism in marking answers is not allowed by Microsoft, and both of you could be banned from the forums.

  • Prime Infrastructure 2.0: Open Database Schema Support available?

    Hello,
    I searched the forum for open database schema support for PI.
    I only found open database schemas for the CiscoWorks LAN Management Solutions and Cisco Prime LAN Management Solutions.
    Does an open database schema exist for Prime Infrastructure 2.0, as it does for the older LAN Management Solutions?
    Thanks.
    Bastian

    The programmatic interface to Prime Infrastructure is the REST API rather than any open database schema.
    There are published reference guides here. For the most up-to-date information, please see your PI server itself: in Lifecycle view, the Help menu has a link to the server-based API information.

  • Infinite loop during recovery of JE 4.1.10 database environment

    Hi there,
    We have a JE 4.1.10 database environment which is over 20GB in size. In order to improve performance we increased je.log.fileCacheSize so that we could cache all of the 10MB DB log file descriptors in memory and prevent JE from having to constantly open/close log files (the environment is only partially cached in memory and there are more than 2000 log files). Unfortunately, we failed to increase the file descriptor ulimit from the Linux default of 1024 and our application failed when the ulimit was reached.
    Since then we've reverted the settings and increased the JVM heap size so that we can fully cache everything in memory again. However, we are having problems recovering the DB environment. It looks like the environment recovery proceeds through the 10 recovery steps but gets stuck while loading the log file utilization meta-data:
    Stack trace #1:
    "main" prio=10 tid=0x000000005622f000 nid=0x5d94 runnable [0x00000000419fe000]
    java.lang.Thread.State: RUNNABLE
         at java.io.RandomAccessFile.seek(Native Method)
         at com.sleepycat.je.log.FileManager.readFromFileInternal(FileManager.java:1605)
         - locked <0x00002aaaf80dca88> (a com.sleepycat.je.log.FileManager$1)
         at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1560)
         at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1498)
         at com.sleepycat.je.log.FileSource.getBytes(FileSource.java:56)
         at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:861)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:790)
         at com.sleepycat.je.log.LogManager.getLogEntryAllowInvisibleAtRecovery(LogManager.java:751)
         at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1320)
         at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1367)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2499)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1545)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1692)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1617)
         at com.sleepycat.je.cleaner.UtilizationProfile.getFirstFSLN(UtilizationProfile.java:1262)
         at com.sleepycat.je.cleaner.UtilizationProfile.populateCache(UtilizationProfile.java:1200)
         at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:221)
         at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:549)
         - locked <0x00002aaaf8009868> (a com.sleepycat.je.dbi.EnvironmentImpl)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:237)
         at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:229)
         at com.sleepycat.je.Environment.<init>(Environment.java:211)
         at com.sleepycat.je.Environment.<init>(Environment.java:165)
    Stack trace #2:
    "main" prio=10 tid=0x000000005622f000 nid=0x5d94 runnable [0x00000000419fe000]
    java.lang.Thread.State: RUNNABLE
         at com.sleepycat.je.tree.IN.findEntry(IN.java:2086)
         at com.sleepycat.je.dbi.CursorImpl.searchAndPosition(CursorImpl.java:2194)
         at com.sleepycat.je.cleaner.UtilizationProfile.getFirstFSLN(UtilizationProfile.java:1242)
         at com.sleepycat.je.cleaner.UtilizationProfile.populateCache(UtilizationProfile.java:1200)
         at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:221)
         at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:549)
         - locked <0x00002aaaf8009868> (a com.sleepycat.je.dbi.EnvironmentImpl)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:237)
         at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:229)
         at com.sleepycat.je.Environment.<init>(Environment.java:211)
         at com.sleepycat.je.Environment.<init>(Environment.java:165)
    It looks like it is spinning in UtilizationProfile.getFirstFSLN(). Examining the syscalls using strace shows that JE is looping around the same 3 log files over and over again (note the lseek calls on fd 147):
    14719 lseek(147, 2284064, SEEK_SET) = 2284064
    14719 read(147, "\310\f9\311\v\0\330\331\"\0:\0\0\0\2{\231\302\36\27\0\340\24\0\0\33\201\230\0\340\24\0"..., 4096) = 4096
    14719 lseek(95, 3030708, SEEK_SET) = 3030708
    14719 read(95, "B\17\36\373\v\0\350<.\0B\0\0\0\2{3\323\36\27\0+\21\0\0\210\221\230\0+\21\0"..., 4096) = 2469
    14719 lseek(95, 3025672, SEEK_SET) = 3025672
    14719 read(95, "#(\2571\v\0b*.\0\202\0\0\0\2{\26\323\36\27\0\0160\0\0f\221\230\0Z\22\0"..., 4096) = 4096
    14719 lseek(74, 35084, SEEK_SET) = 35084
    14719 read(74, "\361\34s\272\v\0\300\210\0\0h\0\0\0\2{\240\323\36\27\0\346\36\0\0\215\225\230\0\350\22\0"..., 4096) = 4096
    14719 lseek(74, 90663, SEEK_SET) = 90663
    14719 read(74, "\2\16P\341\v\0#;\1\0B\0\0\0\2{\275\323\36\27\0\366/\0\0003H.\0$\3\0"..., 4096) = 4096
    14719 lseek(147, 2284064, SEEK_SET) = 2284064
    14719 read(147, "\310\f9\311\v\0\330\331\"\0:\0\0\0\2{\231\302\36\27\0\340\24\0\0\33\201\230\0\340\24\0"..., 4096) = 4096
    14719 lseek(95, 3030708, SEEK_SET) = 3030708
    14719 read(95, "B\17\36\373\v\0\350<.\0B\0\0\0\2{3\323\36\27\0+\21\0\0\210\221\230\0+\21\0"..., 4096) = 2469
    From the output of lsof we can determine the names of the log files associated with the above file descriptors, but I'm not sure if this is useful or not. We also ran DbVerifyLog against the entire DB log and it produced no results, so this indicates to me that the DB is not corrupt after all (although it could be that DbVerifyLog simply does not check the utilization profile meta data).
    I had a look at the change log for 4.1.17 but couldn't see any relevant bugs. It's probably worth noting that this DB does not use HA, duplicates, or secondary indexes.
    So I guess I have two questions:
    1) Is this a bug? Presumably the DB should recover even in the event of a human error induced failure like this?
    2) Can we recover the DB somehow, e.g. using DbDump/DbLoad?
    Let me know if you need any additional information.
    Cheers,
    Matt

    > 1) Is this a bug? Presumably the DB should recover even in the event of a human error induced failure like this?
    It is probably a bug, but it depends on exactly what happened during the original crash.
    > 2) Can we recover the DB somehow, e.g. using DbDump/DbLoad?
    If you can't open the environment, only DbDump -R (salvage mode) can be used, which requires identifying the active data by hand. Not a good option.
    > Let me know if you need any additional information.
    Do you happen to have the original stack trace, where you ran out of FDs?
    How long did the looping go on before you killed the process?
    --mark

  • LMS 4.0.1 cannot access rmeng database via Open Database Schema Support

    Hi,
    we are running LMS 3.2 and I am using Open Database Schema Support to transfer the data from cmf, ani and rmeng databases.
    We now want to migrate to LMS 4.0.1, and I am trying to use the same scripts to do the same thing. I am using the iAnywhere JDBC access method, the same one I use successfully with LMS 3.2. I can access the tables in the cmf and ani databases, but not the tables in the rmeng database. I do not think my scripts are wrong; I rather think that enabling access to the databases does not work correctly.
    I tried to run dbaccess.pl in debug mode and can see there are errors for example while enabling access to the table  NETWORK_DEVICES in rmeng, please see the log below.
    We tried to restart LMS and also the whole server ( it is MS Windows 2008 R2 on VMware ), but this did not help.
    Appreciate if anyone has an idea what could be wrong.
    Thanks,
    Vlad
    D:\BAT>%cwbin%perl  %cwbin%dbaccess.pl install debug
    Enter new Password for database view user [lmsdatafeed]:
    Re-enter new Password:
    Executing on cmf...
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatafeed' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
    Executing on ani...
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatafeed' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
    Executing on dfmFh...
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatafeed' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
    Executing on rmeng...
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatafeed' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: User ID 'lmsdatagrp' does not exist (DBD: execute failed).
       WARN: Table 'SYSLOG_TODAY' not found (DBD: execute failed).
       WARN: Table 'SYSLOG_LASTDAY' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: External login for server 'CMFSA' could not be found (DBD: execute failed).
       WARN: Remote server 'CMFSA' could not be found (DBD: execute failed).
       WARN: Unable to connect to server 'CMFSA': [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'Network_devices' not found (DBD: execute failed).
       WARN: Table 'NETWORK_DEVICES' not found (DBD: execute failed).
       WARN: Table 'DEVICE_INVENTORY' not found (DBD: execute failed).
       WARN: Table 'MEMORY_INVENTORY' not found (DBD: execute failed).
       WARN: Table 'DEVICE_CREDENTIAL_STATUS' not found (DBD: execute failed).
       WARN: Table 'PROCESSOR_INVENTORY' not found (DBD: execute failed).
       WARN: Table 'DEVICE_INVENTORY_COLLECTION_STATUS' not found (DBD: execute failed).
       WARN: Table 'DEVICE_CONFIG_ARCHIVE_STATUS' not found (DBD: execute failed).
       WARN: Table 'CHANGE_AUDIT_HISTORY' not found (DBD: execute failed).
       WARN: Table 'MODULE_INVENTORY' not found (DBD: execute failed).
       WARN: Table 'PORT_INVENTORY' not found (DBD: execute failed).
       WARN: Table 'DEVICE_ENERGYWISE' not found (DBD: execute failed).
       WARN: Table 'PORT_ENERGYWISE' not found (DBD: execute failed).
       WARN: Table 'SYSLOG_TODAY' not found (DBD: execute failed).
       WARN: Table 'SYSLOG_LASTDAY' not found (DBD: execute failed).

    Hello Joseph,
    thanks for the reply. I was confused because I am not getting any such errors on the LMS 3.2 server.
    I am using many Java programs that work well with LMS 3.2, but let me mention the one that probably best demonstrates the issue I have with the new version. It works well on LMS 3.2 with all databases. On LMS 4.0.1 it works well with cmf and ani, but it cannot retrieve any data from rmeng.
    Here is the batch file I run on the LMS 4.0.1 server (this is not the whole file, as the original is much larger):
    =================================================================================
    @echo *************************************
    @echo Starting Creating MySQL Mirror file
    @echo *************************************
    set classpath=d:\progra~2\CSCOpx\lib\classpath\;d:\bat\
    @echo Common Services db, Network_Device view
    java CW_Java_CreateSQL cmf password Network_Devices CW_Network_Devices CW_Network_Devices.txt Device_Id Device_Display_Name Management_IpAddress Host_Name Domain_Name sysObjectID Device_Category Device_Series Device_Model UDF0 UDF1 UDF2 UDF3 UDF4 UDF5 UDF6 UDF7 UDF8 UDF9 UDF10 UDF11 UDF12 UDF13 UDF14 UDF15
    goto jv1_%errorlevel%
    :jv1_0
    @echo Campus Manager db, End Hosts view
    java CW_Java_CreateSQL ani password End_Hosts CW_End_Hosts CW_End_Hosts.txt HostName UserName IPAddress SubnetMask MACAddress DeviceName Device LastSeen Port VLAN Subnet Notes PrefixLength PortDuplex VTPDomain VlanId PortName IPv6Address dot1xEnabled associatedRouters
    goto jv2_%errorlevel%
    :jv2_0
    @echo Resource Manager Essentials db, Device_Inventory view
    java CW_Java_CreateSQL rmeng password Device_Inventory CW_Device_Inventory CW_Device_Inventory.txt Device_Id Device_Display_Name Device_Description Device_Location Device_Contact Device_SW_Version User_Defined_Serial_Number
    goto jv3_%errorlevel%
    :jv3_0
    @echo Resource Manager Essentials db, Module_Inventory view
    java CW_Java_CreateSQL rmeng password Module_Inventory CW_Module_Inventory CW_Module_Inventory.txt Device_Id Device_Display_Name Module_Name Vendor_Type SW_Version FW_Version Slot_Number Oper_Status Admin_Status
    goto jv4_%errorlevel%
    :jv4_0
    @echo Resource Manager Essentials db, Port_Inventory view
    java CW_Java_CreateSQL rmeng password Port_Inventory CW_Port_Inventory CW_Port_Inventory.txt Device_Id Device_Display_Name Port_Name Port_Desc Port_Type Port_Admin_Status Port_Oper_Status Port_Speed Port_Duplex_Mode Is_Link_Port Is_Access_Port Is_Trunk_Port Is_Port_Channel Vlan_Name VlanId VTPDomain Neighbour_Type
    goto jv5_%errorlevel%
    :jv5_0
    @echo Resource Manager Essentials db, Processor_Inventory view
    java CW_Java_CreateSQL rmeng password Processor_Inventory CW_Processor_Inventory CW_Processor_Inventory.txt Device_Id Device_Display_Name Processor_model_name Processor_vendor_type Processor_manufacturer serialnumber
    goto jv6_%errorlevel%
    :jv6_0
    @echo *************************************
    @echo Finished Creating MySQL Mirror file
    @echo *************************************
    =================================================================================
    Here is the log:
    =================================================================================
    Starting Creating MySQL Mirror file
    D:\BAT>set classpath=d:\progra~2\CSCOpx\lib\classpath\;d:\bat\
    Common Services db, Network_Device view
    D:\BAT>java CW_Java_CreateSQL cmf password Network_Devices CW_Network_Devices CW_Ne
    twork_Devices.txt Device_Id Device_Display_Name Management_IpAddress Host_Name D
    omain_Name sysObjectID Device_Category Device_Series Device_Model UDF0 UDF1 UDF2
    UDF3 UDF4 UDF5 UDF6 UDF7 UDF8 UDF9 UDF10 UDF11 UDF12 UDF13 UDF14 UDF15
    D:\BAT>goto jv1_0
    Campus Manager db, End Hosts view
    D:\BAT>java CW_Java_CreateSQL ani password End_Hosts CW_End_Hosts CW_End_Hosts.txt
    HostName UserName IPAddress SubnetMask MACAddress DeviceName Device LastSeen Por
    t VLAN Subnet Notes PrefixLength PortDuplex VTPDomain VlanId PortName IPv6Addres
    s dot1xEnabled associatedRouters
    D:\BAT>goto jv2_0
    Resource Manager Essentials db, Device_Inventory view
    D:\BAT>java CW_Java_CreateSQL rmeng password Device_Inventory CW_Device_Inventory C
    W_Device_Inventory.txt Device_Id Device_Display_Name Device_Description Device_L
    ocation Device_Contact Device_SW_Version User_Defined_Serial_Number
    Exception in thread "main" java.sql.SQLException: [Sybase][ODBC Driver][SQL Anywhere]Table 'Device_Inventory' not found
            at ianywhere.ml.jdbcodbc.IIStatement.executeQuery(Native Method)
            at ianywhere.ml.jdbcodbc.IStatement.executeQuery(IStatement.java:201)
            at CW_Java_CreateSQL.main(CW_Java_CreateSQL.java:102)
    D:\BAT>goto jv3_1
    Error Creating MySQL Mirror file
    =================================================================================
    Line 102 in the Java program is this line:
    rs = stmt.executeQuery(query);
    Attached is the full Java program - CW_Java_CreateSQL.java
    Thank you,
    Vlad

  • Broken link to Oracle9i Database Globalization Support Guide Release 2 (9.2

    http://otn.oracle.com/documentation/oracle9i.html
    The link to the Oracle9i Database Globalization Support Guide Release 2 (9.2),
    http://download.oracle.com/docs/html/A96529_01/toc.htm
    is broken; only a 404 error message appears.

    Hi Hannuri,
    I am not encountering this issue. Perhaps it has been resolved. Please confirm whether you are still having this problem.
    Thanks and regards,
    Les

  • Archiving information from huge database

    I am having some problems with the backups of a huge database (300 GB). I am considering options for backing up this database to another location, a kind of DRP, but it is too complicated to move 300 GB from one place to another over the network. I checked tools like SharePlex from Quest, but right now I'd like to know if there is an application for archiving old information. I started wondering whether there is a tool for archiving the old information, and whether such an application can put the information back into the DB.
    I only need to know if there are options for doing this.
    Does anybody know?

    What is the amount of redo per day? You can put the DB in archivelog mode and back up the archive log files; you don't need to take a full backup every time. As for archiving, it is very difficult to decide which records to move: new records may depend on old records, in which case the old records cannot be moved or deleted.
    Thanks,
    G
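    For the archiving part, a minimal manual sketch (not a packaged tool) is to copy old rows into an archive table and then delete them from the source. The table name ORDERS and the ORDER_DATE cutoff column below are assumptions for illustration, and as G points out, this only works when no current rows depend on the rows being moved:
    -- copy rows older than three years into an archive table (table and column names are assumptions)
    CREATE TABLE orders_archive AS
      SELECT * FROM orders WHERE order_date < ADD_MONTHS(SYSDATE, -36);
    DELETE FROM orders WHERE order_date < ADD_MONTHS(SYSDATE, -36);
    COMMIT;
    -- putting archived data back is the reverse
    INSERT INTO orders
      SELECT * FROM orders_archive WHERE order_date >= :restore_from;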

  • Database not supported on oracle9i

    Hello,
    in our company we use ZfD 4.01 IR6 and Oracle 8 for inventory. All is working fine. Now we have to upgrade our Oracle 8 to Oracle 9i. When inventory starts, we get error 627 - database not supported, even though Novell has supported Oracle 9i since IR6. Is there any way to solve the problem?
    Thanks
    Robert Schmid

    Robertschmid,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Do a search of our knowledgebase at http://support.novell.com/search/kb_index.jsp
    - Check all of the other support tools and options available at
    http://support.novell.com.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://support.novell.com/forums)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://support.novell.com/forums/faq_general.html
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Database Systems Supported By J2SE

    Good day,
    May I know which open source database management systems are supported by Java? Is InterBase supported?
    Do I need any additional drivers?
    thanks...

    There are many open source database systems supported by Java, for example: PostgreSQL, MySQL, HSQLDB.
    And yes, you do need a JDBC driver, or you can use the JDBC-ODBC bridge provided by Java.
