Streaming Oracle Data to HBase

Hi All,
I am trying to stream data from an Oracle table to an HBase table. Below are the details:
Machine 1
This machine has Oracle GoldenGate 11.1 for Java, Oracle Database 11.2, and all the necessary jar files for HBase, Hadoop, GoldenGate, etc.
I've compiled the handler for GG for Java's pump process, built it into a jar, and placed it in GG for Java's "dirprm" directory.
The properties file for GG for Java's pump process has also been placed in the "dirprm" directory.
Machine 2
This is where Hadoop and HBase are running.
The intent is to stream data from Machine 1 to Machine 2.
This exercise is based on the attachment (OGG_HBase_Integration.docx) supplied with Doc ID 1586211.1. This attachment has the handler Java code, the properties file, and the GG commands. The portion of the Java handler code that raises the exception is listed below.
    try {
         // Point the client at the ZooKeeper quorum of the HBase cluster
         Configuration hbaseConf = HBaseConfiguration.create();
         hbaseConf.set("hbase.zookeeper.quorum", "192.168.1.10");
         hbaseAdmin = new HBaseAdmin(hbaseConf);
         HTableDescriptor tdesc = new HTableDescriptor(Bytes.toBytes(hbaseTableName));
         HColumnDescriptor cfDesc = new HColumnDescriptor(Bytes.toBytes(hbaseCfName));
         tdesc.addFamily(cfDesc);
         // Build the pool from the same configuration; the no-arg constructor
         // falls back to the default configuration and would miss the quorum setting
         hpool = new HTablePool(hbaseConf, 10);
         if (hbaseAdmin.tableExists(hbaseTableName)) {
            htable = hpool.getTable(Bytes.toBytes(hbaseTableName));
         } else {
            // Create the table on first use, then fetch it from the pool
            hbaseAdmin.createTable(tdesc);
            htable = hpool.getTable(Bytes.toBytes(hbaseTableName));
         } // closing brace missing in the original snippet
    } catch (Exception e) {
       logger.info("Exception caught when initializing the HBase table");
       throw new ConfigException("Could not initialize HBase table " + hbaseTableName, e);
    } // closing brace missing in the original snippet
In the Java handler, where should I specify the address of the machine where HBase is running?
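From what I have read, an HBase client locates the cluster through ZooKeeper rather than through a direct HBase server address, so I believe the "hbase.zookeeper.quorum" setting shown above (or an hbase-site.xml on the classpath) is the right place. Below is a minimal, self-contained sketch of my understanding; "machine2-host" is a placeholder for Machine 2's host name, and I am assuming the default ZooKeeper client port of 2181:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Minimal sketch: build a client configuration that points at a remote
    // HBase cluster. "machine2-host" is a placeholder for the real host name.
    public class RemoteHBaseConf {
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            // Host(s) running the ZooKeeper quorum for the cluster on Machine 2
            conf.set("hbase.zookeeper.quorum", "machine2-host");
            // 2181 is ZooKeeper's default client port; adjust if yours differs
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            return conf; // pass this to HBaseAdmin and HTablePool
        }
    }

Is that understanding correct, or does GG for Java expect the address somewhere else, e.g. in the pump's properties file?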
I am also getting the error below in the pump's report file:
Error occured in javawriter.c[269]: Error occurred (Java exception): Error calling static void Method UserExitMain.main:
  org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userExitDataSource' defined in class path resource [com/goldengate/atg/datasource/DataSource-context.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [public final com.goldengate.atg.datasource.GGDataSource com.goldengate.atg.datasource.factory.DataSourceFactory.getDataSource()] threw exception; nested exception is com.goldengate.atg.util.ConfigException: Could not initialize HBase table cinfo
Any idea what I am missing? Your help is most appreciated.
Thanks,

Similar Messages

  • Oracle Streams VS Oracle Data Guard

    Hello,
    Could you please explain to me the difference between Oracle Streams and Oracle Data Guard?
    Are they completely different, or do they serve similar purposes?
    Thanks.

    812322 wrote:
    Hello,
    Could you please explain to me the difference between Oracle Streams and Oracle Data Guard?
    Are they completely different, or do they serve similar purposes?
    Thanks.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:14672061404704

  • Will Oracle Data Guard be replaced by Oracle Stream soon?

    Will Oracle Data Guard be replaced by Oracle Stream soon?
    In my opinion, Oracle Streams can replace Oracle Data Guard completely.

    While some of the technologies that underpin Streams are being increasingly incorporated into DataGuard, it's quite unlikely that DataGuard will go away.
    Streams is the successor to Advanced Replication, which is designed to allow a source database to propagate data to a distinct database in a different environment without necessarily having to have the two databases tightly coupled. You can have different databases in different regions managed by different DBA groups who don't necessarily care whether any of the other systems are up using Streams (or Advanced Replication before it). Failing over between these systems, while possible, requires a fair amount of custom scripting, but is certainly possible.
    DataGuard, on the other hand, is designed to allow you to have multiple copies of the same database that are tightly coupled for high availability. Similar in concept, but there are very different trade-offs in the design.
    That said, Streams and Logical Standby both use very similar technologies to mine the redo information for change records. As DataGuard uses Logical Standby more and more, potentially as a replacement for physical standby, they'll use more and more of the same underlying technologies. They'll still be very different products.
    Justin

  • Difference between oracle Data Guard and Replication

    Hello Friends,
    I would like to know the main difference between Oracle replication and Data Guard. Theoretically both are the same, so where is the difference? Can replication or Data Guard be configured on a single PC?
    Thanks

    Dataguard :
    Oracle Data Guard is primarily for high availability, data protection, and disaster recovery. An Oracle Data Guard standby instance provides transactionally consistent copies of the production database; in the event of an unplanned outage or instance maintenance, the primary and standby roles can be switched so that the standby instance provides production services.
    Replication :
    Oracle Advanced Replication and Oracle Streams are Oracle's solutions for replicating data and internal structures to remote databases. Advanced Replication is primarily trigger-based, whereas Streams is a rule/handler-based replication process that is considerably more efficient and less error-prone than Advanced Replication.
    refer,
    http://www.xtivia.com/database-management/replication/oracle-replication
    Thanks

  • How to export data as an XML file from an Oracle database?

    Could you please tell me the step-by-step procedure for the following question: how do I export data as an XML file from an Oracle database? Is it possible? Please advise, it's an urgent requirement.
    Thanks in advance
    Bala

    SQL> SELECT * FROM v$version;
    BANNER
    Oracle DATABASE 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS FOR 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    5 rows selected.
    SQL> CREATE OR REPLACE directory utldata AS 'C:\temp';
    Directory created.
    SQL> declare                                                                                                               
      2    doc  DBMS_XMLDOM.DOMDocument;                                                                                       
      3    xdata  XMLTYPE;                                                                                                     
      4                                                                                                                        
      5    CURSOR xmlcur IS                                                                                                    
      6    SELECT xmlelement("Employee",XMLAttributes('http://www.w3.org/2001/XMLSchema' AS "xmlns:xsi",                       
      7                                  'http://www.oracle.com/Employee.xsd' AS "xsi:nonamespaceSchemaLocation")              
      8                              ,xmlelement("EmployeeNumber",e.empno)                                                     
      9                              ,xmlelement("EmployeeName",e.ename)                                                       
    10                              ,xmlelement("Department",xmlelement("DepartmentName",d.dname)                             
    11                                                      ,xmlelement("Location",d.loc)                                     
    12                                         )                                                                              
    13                   )                                                                                                    
    14     FROM   emp e                                                                                                       
    15     ,      dept d                                                                                                      
    16     WHERE  e.DEPTNO=d.DEPTNO;                                                                                          
    17                                                                                                                        
    18  begin                                                                                                                 
    19    OPEN xmlcur;                                                                                                        
    20    FETCH xmlcur INTO xdata;                                                                                            
    21    CLOSE xmlcur;                                                                                                       
    22    doc := DBMS_XMLDOM.NewDOMDocument(xdata);                                                                           
    23    DBMS_XMLDOM.WRITETOFILE(doc, 'UTLDATA/marco.xml');                                                                  
    24  end;                                                                                                                  
    25  /                                                                                                                      
    PL/SQL procedure successfully completed.

  • Powerpivot Data Refresh Not working with Oracle Data Source in sharePoint 2013

    I am using SQL Server 2012 PowerPivot for Excel 2010. Getting the following error in SharePoint 2013 environment, when using Oracle data source within a workbook -
    EXCEPTION: Microsoft.AnalysisServices.SPAddin.DataRefreshException: Engine error during processing of OLE DB or ODBC error: The specified module could not be found..:
    <Site\PPIV workbook>---> Microsoft.AnalysisServices.SPAddin.DataRefreshException: OLE DB or ODBC error:
    The specified module could not be found..   
     at Microsoft.AnalysisServices.SPAddin.DataRefresh.ASEngineInstance.ProcessDataSource(String server, String databaseName, String datasourceName, SecureStoreCredentialsWrapper
    runAsCredentials, SecureStoreCredentialsWrapper specificConfigurationCredentials, DataRefreshService dataRefreshService, String fileUrlForTracing)     -
    -- End of inner exception stack trace ---   
     at Microsoft.AnalysisServices.SPAddin.DataRefresh.ASEngineInstance.ProcessDataSource(String server, String databaseName, String datasourceName, SecureStoreCredentialsWrapper
    runAsCredentials, SecureStoreCredentialsWrapper specificConfigurationCredentials, DataRefreshService dataRefreshService, String fileUrlForTracing)   
     at Microsoft.AnalysisServices.SPAddin.DataRefresh.DataRefreshService.ProcessingJob(Object parameters)
    I created a simple Excel 2013 PPIV workbook with an oracle data source and uploaded that to SharePoint 2013, but no change in the results - still getting the above error.
    What is this error? We have installed the Oracle client (64-bit, since we use 64-bit Excel and SharePoint is also 64-bit) on the SSAS PPIV server and the SharePoint content DB server. Do we need to install it anywhere else?
    Thanks,
    Sonal

    Hi Sonal,
    To use PowerPivot for SharePoint on SharePoint 2013, it is required to install PowerPivot for SharePoint with the Slipstream version of SQL Server 2012 SP1. If you install SQL Server 2012 and then use the upgrade version of SQL Server 2012 SP1 to upgrade,
    the environment will not support SharePoint 2013.
    I would suggest you refer to the following articles:
    Install SQL Server BI Features with SharePoint 2013 (SQL Server 2012 SP1):
    http://technet.microsoft.com/en-us/library/jj218795.aspx
    Upgrade SQL Server BI Features to SQL Server 2012 SP1:
    http://technet.microsoft.com/en-us/library/jj870987.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Oracle Data Profiling and Data Quality

    Hi,
    How do I create a metabase for Oracle Data Profiling and Data Quality? Are the metabase and the repository the same thing?

    Hi,
    You can create a metabase in the Metabase Manager:
    - Expand Control Admin
    - Click on Metabases
    - in the Metabases window, right-click on the white area and select Add...
    - go through the wizard to create your metabase
    This is documented in the ODQ/ODP tutorial (http://www.oracle.com/technology/products/oracle-data-quality/pdf/oracledq_tutorial.pdf) and in the Documentation (in Metabase Manager or Oracle Data Quality go to Help and then Manuals).
    Thanks,
    Julien

  • Performance issue with Oracle data source

    Hi all,
    I have a rather strange problem that I'm stuck on and need some assistance with.
    I have a rules file which drags data in via a SQL data source that is an Oracle server. If I cut/paste the three sections "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" in the rules file editor, it takes up to an hour to complete/retrieve the data.
    The table in question has millions of rows, and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance?
    Thanks in advance
    Bagpuss

    One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or groups of records) that is slowed WAY down over the WAN.
    Our solution to this was to remove the query from the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then ftp the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via command line took 10 minutes, then the ftp took less than 5.

  • Unable to replicate oracle data into timesten

    I have created cache group COMPANY_MASTER.
    Cache group:
    Cache Group TSLALGO.COMPANY_MASTER_TT:
      Cache Group Type: Read Only
      Autorefresh: Yes
      Autorefresh Mode: Incremental
      Autorefresh State: On
      Autorefresh Interval: 1 Minute
      Autorefresh Status: ok
      Aging: No aging defined
      Root Table: TSLALGO.COMPANY_MASTER
      Table Type: Read Only
    But whenever I start the TimesTen server, the following lock is seen in ttxactadmin <dsn_name>:
    Program File Name: timestenorad
    30443   0x7fab902c02f0        7.22     Active      Database  0x01312d0001312d00   IX    0
                                                       Table     1733200              S     4221354128           TSLALGO.COMPANY_MASTER
                                                       Row       BMUFVUAAAAaAAAAFBy   S     4221354128           SYS.TABLES
                                                       Row       BMUFVUAAACkAAAALAF   Sn    4221354128           SYS.CACHE_GROUP
    Due to this lock, Oracle data is not replicated into TimesTen.
    When we check the sqlcmdid, it shows the following output:
    Query Optimizer Plan:
    Query Text: CALL ttCacheLockCacheGp(4, '10752336#10751968#10751104#10749360#', 'S', '1111')
      STEP:             1
      LEVEL:            1
      OPERATION:        Procedure Call
      TABLENAME:
      TABLEOWNERNAME:
      INDEXNAME:
      INDEXEDPRED:
      NONINDEXEDPRED:
    Please suggest why TimesTen takes a lock on these tables.


  • Unable to download Oracle Data Integrator Version 11.1.1.6 (Important)

    Unable to download Oracle Data Integrator version 11.1.1.6. Hope this can be resolved ASAP.

    966234 wrote:
    Unable to download Oracle Data Integrator version 11.1.1.6. Hope this can be resolved ASAP.
    What is the file you are trying to download? Is it for Windows or Linux or All Platforms?
    Thanks,
    Hussein

  • Can SQL*Plus connect via ODBC to a non-Oracle data source?

    I am struggling to understand something. I downloaded Oracle Instant Client, SQL*Plus, and ODBC components in the hope of being able to connect via SQL*Plus to a non-Oracle, ODBC-compliant database.
    Is this possible? Or can SQL*Plus connect via ODBC only to an Oracle data source?
    Thanks...

    SQL*Plus only connects to Oracle. You can use the ODBC driver from Instant Client to allow other applications to access Oracle via ODBC (e.g. Excel). If you need to connect to a non-Oracle ODBC database (MS Access, FoxPro, etc.), you need an ODBC driver for those sources.
    You can use SQL Developer to connect to Oracle and non-Oracle databases. Check the OTN product info for SQL Developer for more details.

  • Error while installing Oracle Data miner 10G Release 2

    Hello,
    I am a student doing research in data mining. I am new to Oracle Database and Data Miner.
    I installed Oracle Enterprise Manager 10g Grid Control Release 2 (10.2.0.1). Now I am trying to install Oracle Data Miner (10.2.0.1). However, at the time of installation ODM gives the following error:
    "specified data mining server is not compatible. 10.1.0.4.0."
    I have installed Oracle 10.2.0.1, but when I log in using SQL*Plus I get the following information:
    SQL*Plus: Release 10.1.0.4.0 - Production on Sun Jul 23 09:52:41 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL>
    I would be really obliged if someone can help me with this.
    Thanks in advance
    Pooja

    Hi,
    Download and install the product version (10.2.0.1) of Oracle Data Mining.
    Simon

  • Oracle Data Miner 10.1.0.2 Interoperate with Database 10g Release 2

    Hi all,
    I cannot connect from Oracle Data Miner to a newly upgraded Database 10g Release 2 with Data Mining option. This database was 10.1.0.2 before upgrade, and I could connect via Oracle Data Miner before the upgrade (though it needs to be upgraded to 10.1.0.3+ for data mining to function).
    I have similar problem for a new installation on another computer. The error message in either case is "Cannot connect to specified Data Mining Server. Check connection information and try again."
    I can use SQL*Plus to login as the data mining user using the net service corresponding to the connect string. I check the v$option and DBA_REGISTRY as per the Data Mining Admin. documentation to verify that the data mining option exists and is valid. I am able to use the same connect string "host:port:SID" to connect from Analytical Workspace Manager to verify that the connectivity is OK.
    Furthermore, some Oracle by Example tutorials seem not to be valid for a 10.2 database. For example, at the URL http://www.oracle.com/technology/obe/obe10gdb/bidw/odm/odm.htm#p, point 6, <ORACLE_HOME>\dm\lib\odmapi.jar, is not applicable, because the path <ORACLE_HOME>\dm no longer exists.
    Therefore, my question is whether Oracle Data Miner 10.1.0.2 can work with DB 10.2. What procedure should I follow? Please advise.
    Thanks and regards,
    lawman

    I am waiting on the beta version since I have installed Oracle10gR2.
    I've been checking the OTN website every day to see when it is released.
    If it is not a bother, can you send me an email when I can download it.
    Thanks in advance.
    Have a wonderful day/weekend,
    Andy

  • Oracle Data Integrator 11.1.1.5 Work Schema - List of Privileges

    Hi All,
    Oracle Data Integrator 11.1.1.5.
    Extracting data from Oracle DB for Oracle EBS 12.1.3.
    Customer created read-only schema (XXAPPS) to extract the data from EBS.
    For the ODI work schema we have now created one schema, 'XBOL_ODI_TEMP', on the source DB. We are now looking for the appropriate privileges that need to be granted to XXAPPS and 'XBOL_ODI_TEMP' so that we won't face any error messages related to permissions when we run the ODI scenario.
    We are now facing the error message : ODI-1227: Task SrcSet0 (Loading) fails on the source ORACLE connection VTB_ORACLE_EBS_1213.
    Caused By: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist.
    Similar privileges can be granted to the work schema on the target.
    Venkat

    I think it would be fine with only one schema (user) created at the source system which has read access to the tables of the EBS DB. Now, to resolve this error, assuming the XXAPPS user is the one used:
    In the topology --> data server (for EBS) --> physical schema, the EBS schema name could be selected for Schema and XXAPPS as the work schema (for all ODI work-related objects, e.g. CDC).
    Also, in the data server the user XXAPPS needs to be used, which has read access to the EBS tables.
    Now, every time ODI generates a query it will access a table, let's say DUMMY, as <EBS Schema>.DUMMY, and thus the reference is made.
    Alternatively, you can create synonyms for the EBS tables in the XXAPPS schema.

  • Accessing Oracle data types of a different schema

    I am having trouble accessing Oracle data types in a stored procedure. The owner of the stored procedures and the Oracle data types is "ODSCUST". I am trying to access this using the "CUSTOM" ID from my Java program. Recently our DBAs changed the schema owners, so the ID logging into the Oracle instance is not the owner of the data types or the stored procedures. The "CUSTOM" user has the grants to access all the types and stored procedures. There are also public synonyms created for the Oracle data types and stored procedures.
    Here is a snippet of the code:
    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
    String user="custom";
    String password="test";
    String database="sbpair11";
    Connection conn = DriverManager.getConnection
    ("jdbc:oracle:oci:@" + database, user, password);
    OracleCallableStatement stmt = (OracleCallableStatement)conn.prepareCall( "begin ods_cust.retrieveCustPrflData(?, ?, ?, ?, ?, ?, ?); end;" );
    stmt.setString( 1, "5592556485");
    stmt.registerOutParameter( 2, OracleTypes.STRUCT, "KSCOPEACCOUNT" );
    stmt.registerOutParameter( 3, OracleTypes.ARRAY, "KSCOPERSUUSOCARRAY" );
    stmt.registerOutParameter( 4, OracleTypes.ARRAY, "KSCOPECUTADDRARRAY" );
    stmt.registerOutParameter( 5, OracleTypes.STRUCT, "KSCOPECONTACT" );
    stmt.registerOutParameter( 6, OracleTypes.STRUCT, "KSCOPEPROFILE" );
    stmt.registerOutParameter( 7, OracleTypes.ARRAY, "KSCOPEEMAILARRAY" );
    stmt.executeUpdate();
    Here is the response I get back when I run the program:
    Exception in thread "main" java.sql.SQLException: ORA-04043: object "CUSTOM"."KSCOPEACCOUNT" does not exist
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.oci8.OCIDBAccess.check_error(OCIDBAccess.java:2321)
    at oracle.jdbc.oci8.OCIDBAccess.getOracleTypeADT(OCIDBAccess.java:2516)
    at oracle.jdbc.oracore.OracleTypeADT.initMetadata(OracleTypeADT.java:460)
    at oracle.jdbc.oracore.OracleTypeADT.init(OracleTypeADT.java:407)
    at oracle.sql.StructDescriptor.initPickler(StructDescriptor.java:249)
    at oracle.sql.StructDescriptor.<init>(StructDescriptor.java:204)
    at oracle.sql.StructDescriptor.createDescriptor(StructDescriptor.java:138)
    at oracle.jdbc.driver.OracleCallableStatement.registerOutParameter(OracleCallableStatement.java:164)
    at Array5.main(Array5.java:40)
    Note:
    When I do a "desc KSCOPEACCOUNT" or "desc CUSTOM.KSCOPEACCOUNT" in sqlplus as the "CUSTOM" id it works but not from Java. Initially I was using the JDBC thin driver but was recomended to switch to OCI which still didn't resolve this issue.

    Way too little info here for a definite conclusion.
    Some thoughts:
    1.) Did you take down the database when you were moving the datafiles? If so, is this the first time the batch job is running since the move? If so, perhaps the cache is cold?
    2.) How much physical I/O is the batch job doing? Has the volume of physical I/O changed significantly after the move? What are the characteristics of the lun on which the newly mounted filesystem is built? Is it a smaller number of spindles than before?
    Ultimately, you need to profile the batch process, and determine where the most time is being consumed.
    -Mark
