Handling millions of rows using JDBC

Approximately how much time would it take to use JDBC to do a simple data import from an Oracle database having 500 million rows?

Without a lot more information, my answer would be: too long.
It is likely that something is wrong with the design of a project if it involves bringing 500 million records into a running Java application.
Assuming a 1000-byte record size, it would take on the order of 19 hours just to move the data over a 100 Mbit Ethernet connection.
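As a back-of-the-envelope check on that estimate (the 19-hour figure assumes realistic rather than theoretical throughput):

    500 \times 10^{6}\ \text{rows} \times 1000\ \text{bytes/row} = 5 \times 10^{11}\ \text{bytes} = 4 \times 10^{12}\ \text{bits}

    t_{\min} = \frac{4 \times 10^{12}\ \text{bits}}{10^{8}\ \text{bits/s}} = 40{,}000\ \text{s} \approx 11\ \text{hours}

That is the floor at full line rate; at roughly 60% effective throughput (plausible once TCP and JDBC protocol overhead are counted), the transfer alone takes about 19 hours, before the application does anything with the rows.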

Similar Messages

  • Handling any query using JDBC?

    Hi there,
    I was wondering if there actually exists a way of handling/processing any SELECT statement a user would enter.
    ex: the user enters SELECT * FROM SCOTT.EMP;
    Is it possible to retrieve, using JDBC, the metadata associated with "*"?
    any help would be welcome.
    Regards
    Eric.

    When you get back the result of a query in JDBC, you get a ResultSet object.
    From that you can obtain a ResultSetMetaData object which fully describes the result set - column names, types, scale and precision, etc. You can use that for dynamic processing of your result set, for example if you wanted to display it in a table.
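    A minimal sketch of that dynamic approach (a hypothetical standalone example; the connection URL and credentials are placeholders, and any SELECT works):

    import java.sql.*;

    public class DynamicQuery {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//host:1521/service", "scott", "tiger");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM SCOTT.EMP")) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();
                // The metadata tells you what "*" expanded to
                for (int i = 1; i <= cols; i++) {
                    System.out.printf("%s %s(%d,%d)%n", md.getColumnName(i),
                        md.getColumnTypeName(i), md.getPrecision(i), md.getScale(i));
                }
                // Generic row processing with no compile-time knowledge of the columns
                while (rs.next()) {
                    for (int i = 1; i <= cols; i++) {
                        System.out.print(rs.getString(i) + "\t");
                    }
                    System.out.println();
                }
            }
        }
    }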

  • Database package for handling SQL queries using JDBC/ODBC bridge

    I am building a J2ME package which stores contacts. It provides facilities for editing, inserting, deleting and updating records.
    I have written import statements such as
    import javax.microedition.midlet.*;
    import javax.microedition.lcdui.*;
    but when I write import java.sql.* I get an error message saying the package does not exist.
    So, for J2ME using the J2ME Wireless Toolkit 2.5 (CLDC), which package do I have to import to implement SQL statements? At the moment I implement the database using text files.
    The requirement is that all SQL statements are supported. Also, which other equivalent packages are there for handling databases in J2ME?
    Please tell me the packages and the answer to the above problem.

    Hey
    I was told that I can (through Java code) read all the registered DSNs and that way basically modify my application to populate it with a list of databases in the system; then when the user clicks on a database, the program would function as before... i.e. read the list of tables with it... yadda yadda yadda.
    Can anyone guide me on how to do that? Read a list of DSNs in the system, etc.

  • Problem deleting rows using JDBC

    Hi,
    I have a problem with the code listed below. I am attempting to delete a user's details from a SQL Server db, based on the user selected from a combobox. The code given below is just a section and relates to a JButton I have on the application that performs the delete command.
    public void deleteButton_actionPerformed(ActionEvent e) {
        try {
            String opName = (String) userNameBox.getSelectedItem();
            String deleteQuery = "select * from Operator where OperatorName = '" + opName + "'";
            //Load database driver class
            Class.forName(JDBC_DRIVER);
            //Establish a connection to the database
            connection = DriverManager.getConnection(DATABASE_URL);
            //Create a scrollable Statement for the select query
            statement2 = connection.createStatement(
                java.sql.ResultSet.TYPE_SCROLL_SENSITIVE,
                java.sql.ResultSet.CONCUR_READ_ONLY);
            //dQuery.setString(1, opName);
            resultSet = statement2.executeQuery(deleteQuery);
            while (resultSet.next()) {
                resultSet.deleteRow();
            }
        } catch (Exception exception) {
            exception.printStackTrace();
        }
    }
    To explain the code: to begin with, a String holds the value of the selected field of the combobox I have in my app. I then use this variable in the WHERE clause of the SQL select statement.
    The usual connection and statement objects are declared etc. and then the query is executed. Within the while loop I then attempt to delete the row that the query has retrieved for me. However this results in the following error:
    java.sql.SQLException: [Microsoft][ODBC SQL Server Driver]Invalid attribute/option identifier
    Any help is much appreciated.
    Thanks
    Alan

    Thanks for the reply. I understand your comment about using the delete statement and it seems I am making an obvious error. However I was using the select statement to generate a resultSet object from which I can delete elements. Hence the reason for the use of the select statement.
    This is brain dead, the classic newbie mistake. The database vendor has optimized their code more than you ever will in making those deletes, so use their code. Not only will they do it faster than you will, but you'll save yourself (n+1) network roundtrips (1 for the SELECT, 1 for each DELETE you issue). Better to learn SQL and use a WHERE clause to do it on the database side.
    You have however solved the problem I had with directly executing the delete statement as an SQL String. I was using the following statement:
    String deleteQuery = "select * from Operator where OperatorName = " + "'"+opName+"'";
    The wildcard character is not required, and it was this that gave me problems and made me look to using a select query and deleting results from a resultSet.
    Sounds like you need a SQL book. I recommend "SQL For Smarties".
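    A minimal sketch of the single-statement approach recommended above - one parameterized DELETE instead of a SELECT plus per-row deleteRow(); the connection and opName variables follow the thread's code:

    String sql = "DELETE FROM Operator WHERE OperatorName = ?";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setString(1, opName);          // binding also avoids quoting problems
        int deleted = ps.executeUpdate(); // one round trip, done on the database side
        System.out.println(deleted + " row(s) deleted");
    }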

  • How to handle a BLOB column using the JDBC adapter

    Hi,
      I want to send a BLOB column from a DB2 database table to another DB2 database; the sender and receiver both use the JDBC adapter. The two tables in the two databases have the same columns.
      Here is the table's structure:
      <ID>string type</ID>
      <PDF>blob type</PDF>
      Also, I have created two DataTypes in the PI system.
      Sender DataType:
      <ns:DT_PDF_Req xmlns:ns="http://XXXXX.com/sap/xi">
       <row>
         <ID>1</ID>
         <PDF></PDF>
       </row>
      </ns:DT_PDF_Req>
      Receiver DataType:
      <ns0:DT_PDF_Res xmlns:ns0="http://XXXX.com/sap/xi">
      <STMT>
      <dbtable action="INSERT">
      <table>tablename</table>
      <access>
       <ID></ID>
       <PDF></PDF>
      </access>
      </dbtable>
      </STMT>
      </ns0:DT_PDF_Res>
    When testing this interface, I found that we can get the data, but when the insert statement is executed the following errors occur in RWB:
    <ERROR>
    Could not execute statement for table/stored proc. "DBDPUSER.pdf" (structure "STMT") due to com.ibm.db2.jcc.b.nm: DB2 SQL Error: SQLCODE=-103, SQLSTATE=42604, SQLERRMC=255044462d312e330a25c7ec8fa20a352030206f626a0a3c3c2f4c656e677468203620, DRIVER=3.50.153
    JDBC Message processing failed, due to Error processing request in sax parser: Error when executing statement for table/stored proc. 'DBDPUSER.pdf' (structure 'STMT'): com.ibm.db2.jcc.b.nm: DB2 SQL Error: SQLCODE=-103, SQLSTATE=42604, SQLERRMC=255044462d312e330a25c7ec8fa20a352030206f626a0a3c3c2f4c656e677468203620, DRIVER=3.50.153
    </ERROR>
    Can anybody tell me how to resolve these problems,
    and how to deal with a BLOB column using PI?
    Best Regards
    Terry

    Hi Terry Qin,
    I understand you are getting the below XML from the sender JDBC channel, but then getting a SAX parser error in the receiver JDBC channel.
    <ns:DT_PDF_Req xmlns:ns="http://XXXXX.com/sap/xi">
    <row>
    <ID>1</ID>
    <PDF></PDF>
    </row>
    </ns:DT_PDF_Req>
    I think it is because the XML which is going to the receiver JDBC channel is not well formed (the PDF content can contain < and & characters).
    You can achieve this scenario by setting the receiver JDBC channel's Message Protocol to "Native SQL Format" [Link1|http://help.sap.com/saphelp_nwpi711/helpdata/en/44/7c24a75cf83672e10000000a114a6b/frameset.htm]; that way you can send non-XML to the receiver channel.
    Before that, you have to convert the above input XML into an SQL statement, using a Java mapping.
    Regards,
    Raghu_Vamsee
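    A rough skeleton of that Java mapping step, assuming the PI 7.1 mapping API; the actual SQL construction is left as a placeholder because BLOB literals are database-specific:

    import com.sap.aii.mapping.api.*;
    import java.io.*;

    public class PdfToNativeSql extends AbstractTransformation {
        public void transform(TransformationInput in, TransformationOutput out)
                throws StreamTransformationException {
            try {
                InputStream is = in.getInputPayload().getInputStream();
                // Parse the ID and PDF values out of 'is' here (e.g. with a SAX/DOM
                // parser), then emit the native SQL INSERT the receiver will execute.
                String sql = "INSERT INTO tablename (ID, PDF) VALUES (...)"; // placeholder
                out.getOutputPayload().getOutputStream().write(sql.getBytes("UTF-8"));
            } catch (IOException e) {
                throw new StreamTransformationException(e.getMessage(), e);
            }
        }
    }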

  • How to release a row lock using JDBC

    Hi, currently we are using JDBC to create a connection and create a row lock. Is there any way to release the row lock? Right now I am using resultset.close(), but this causes me a problem since it releases the other resultsets' row locks too. Please help.

    Hi, from your post I understood that you know how to do row locking...
    How was it done?
    I'm currently looking for an answer for doing row locking in Microsoft Access...
    These are the SQL statements I've tried without success:
    SELECT * FROM BSPerson WITH UPDLOCK WHERE ID = 'P001';
    SELECT * FROM BSPerson WHERE ID = 'P001' FOR UPDATE;
    Both statements give errors...
    Please help!
    A million thanks...
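    For what it's worth, on databases that support SELECT ... FOR UPDATE (Oracle, for example - not Access), a row lock taken over JDBC belongs to the transaction, not to the ResultSet, so it is released by commit/rollback rather than by resultset.close(). A sketch, reusing the table name from the post above and assuming a Connection con:

    con.setAutoCommit(false);
    try (PreparedStatement ps = con.prepareStatement(
             "SELECT * FROM BSPerson WHERE ID = ? FOR UPDATE")) {
        ps.setString(1, "P001");
        try (ResultSet rs = ps.executeQuery()) {
            // the matching row is now locked; do the protected work here
        }
    }
    con.commit();   // releases this transaction's row locks (rollback() does too)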

  • Migration of millions of rows from a remote table using merge

    I need to migrate (using merge) almost 15 million rows from a remote database table having a unique index (DAY, Key), with the following data setup.
    DAY1 -- Key1 -- NKey11
    DAY1 -- Key2 -- NKey12
    DAY2 -- Key1 -- NKey21
    DAY2 -- Key2 -- NKey22
    DAY3 -- Key1 -- NKey31
    DAY3 -- Key2 -- NKey32
    In my database, I have to merge all these 15 million rows into a table having a unique index (Key); there is no DAY in the destination table.
    First, it would be executed for DAY1 and the merge command will insert the following two rows; for DAY2, it would update those two rows with the NKey2 values (for each Key), and so on.
    Key1 -- NKey11
    Key2 -- NKey12
    I am looking for the best possible approach. Please note that I cannot make any changes at the remote database.
    Right now, I am using the following one, which is taking a huge amount of time for DAY2 onwards (mainly the update).
    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;

    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The first remark I have to emphasize here is that the hint /*+ DRIVING_SITE(A) */ is silently ignored, because in the case of insert/update/delete/merge the driving site is always the site where the insert/update/delete is done.
    http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/#more-809
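    A common workaround, sketched here with a hypothetical local staging table, is to pull the remote rows across in a plain query (where the hint does apply) and then MERGE locally:

    -- stage one day's rows locally; DRIVING_SITE is honoured for a pure query
    INSERT /*+ APPEND */ INTO staging_day
    SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
      FROM source@dblink A
     WHERE DAY = :v_day;
    COMMIT;

    -- the MERGE then runs entirely on the local database
    MERGE INTO destination D
    USING staging_day S
       ON (D.Key = S.Key)
    WHEN MATCHED THEN UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey);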
    Right now, I am using the following one, which is taking a huge amount of time for DAY2 onwards (mainly the update).
    The second remark is that you've realised that your MERGE is taking time, but you didn't trace it to see where the time is being spent. For that you can either use the 10046 trace event or, as a first step, get the execution plan for your MERGE statement.
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The third remark is related to the DML error logging: be aware that unique keys will prevent the DML error logging from working correctly.
    http://hourim.wordpress.com/?s=DML+error
    And finally I advise you to look at the following blog article I wrote about enhancing an insert/select over a db-link:
    http://hourim.wordpress.com/?s=insert+select
    Mohamed Houri
    www.hourim.wordpress.com

  • How to eliminate inserting duplicate rows into the database using the JDBC adapter

    File -> XI -> JDBC
    In the above scenario, if the file has two rows whose values are identical, how can we eliminate inserting duplicate rows into the database using the JDBC adapter?

    The database is a consumer of a SERVICE (SOA!).
    The database plays a business system role here.
    Mapping is part of an ESB service.
    An adapter is a technology adapted to the ESB framework to support a specific protocol.
    The ESB accomplishes ESB duties such as transformation, translation and routing. Routing uses a protocol accepted by the consumer; for a JDBC consumer it is the JDBC protocol, hence a JDBC adapter.
    There is a clear separation of responsibilities between the business system and the ESB. The ESB does not participate in business decisions or try to get into the business system's data layer.
    So whoever is asking people to do the duplicate check as part of the mapping (an ESB service) may not understand integration practice.
    Please use an adapter module which will execute the duplicate check with the business system in a plug-and-play approach, and keep it separate from the ESB service, so that people can build the integration using an AGILE approach.
    Thanks

  • Counting rows in JDBC

    I've recently come across the problem of needing to count the number of rows in a potentially large resultset.
    I did quite a bit of digging and found that the only suggested answers appear to be:
    1. Do a SELECT COUNT(*) before doing the query.
    2. Make the ResultSet scrollable, and use rset.last() and rset.getRow().
    I have problems with both approaches:
    1a. Two queries are issued rather than one - this could be quite expensive.
    1b. The queries I am executing are dynamic - thus some clever logic is required to parse the statement and work out what we should be counting.
    1c. The number of hits may change between the counting and the actual query.
    2a. Definitely the simplest, but the Oracle JDBC driver iterates through all the rows to get to the last one, meaning that the whole ResultSet is loaded into the client-side cache, thus killing the middle tier unnecessarily.
    2b. The results have to be scrollable - not a problem for me, but is too expensive for some.
    I had settled on number 2 for a while, but performance was just too nasty when the ResultSet had more than a few million rows in it, and thus I've spent the last few days looking around for better alternatives.
    select inner_2.*, rownum rowcount from (
    select inner_1.*, (rownum*-1) neg_row, rownum pos_row from (
    select COL1, COL2, COL3
    from TABLE1, TABLE2
    where TABLE1.COL1 = TABLE2.COL2
    order by event_type
    ) inner_1
    order by neg_row asc
    ) inner_2
    order by pos_row asc
    Let me explain. This query determines the rowcount and returns the required columns all in one statement. All I need to do is look at the first row and read the ROWCOUNT column (the last column) to know how many rows are in the resultset.
    The disadvantage of this method is that two extra sorts are required, and that there are three extra columns on the result set which the caller didn't ask for. Hiding the extra three columns is easy if you want a really clean solution.
    These are definitely prices worth paying for me, but I acknowledge that not everyone will find this price worth paying. However, having been immensely frustrated with the lack of options for solving this problem, I thought I would share my solution with you all.
    And on a less altruistic note, if anyone can tell me how to optimise it further, has a better solution, or knows why it doesn't actually work like it appears to, I'd be more than interested to hear your comments.
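    (One possible simplification, assuming your Oracle version supports analytic functions: a COUNT(*) OVER () window function returns the total rowcount on every row without the two extra sorts or the negative-rownum trick - a sketch against the same tables:

    select COL1, COL2, COL3, count(*) over () rowcount
      from TABLE1, TABLE2
     where TABLE1.COL1 = TABLE2.COL2
     order by event_type

    Whether the optimizer handles this more cheaply than the nested-sort version is worth checking with an explain plan.)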
    Marcus

    If you need a consistent resultset, you could create a global temp table-- each session would then query the temp table and see a consistent resultset over time. This has the disadvantage that the database has to duplicate the resultset for each session, which probably only moves the caching problem.
    If your sessions don't go on forever and you have sufficient UNDO space, you can set the transaction isolation level to serializable. That will "freeze" the database to the point in time that the session begins. You need to have sufficient UNDO, however, for the database to be able to reconstruct the database to that point in time, and you need to be able to handle the situation where a user starts a session, walks away for the weekend, and comes back to an error indicating they need to reissue their query.
    If you want to really get sophisticated, you can use Oracle Workspace Manager to "branch" the data for each user. Workspace Manager would allow you to keep a consistent view of data forever without duplicating data for each session.
    The explain plan approach is useful as a rough estimate, not as an exact count. It's based on the statistics that the CBO is looking at, which ought to be close, but could easily be off by a factor of 2 or more depending on the complexity of the query. To get an actual honest-to-goodness count, you need to incur the cost of a count(*). It seems to me, though, that if the data is particularly volatile, with the count constantly changing, it wouldn't be particularly useful to know the exact count at a point in time.
    Note that the SQL chunking method I outlined above is much more efficient retrieving the first few pages than the last few. If the resultset is large, my assumption is that people aren't actually going page-by-page through the entire thing-- that would seem incredibly inefficient. If they're likely looking at the first few pages and the last few pages, you'd want to have slightly different queries for the first and last few pages-- the inner query's ORDER BY clause would have to be reversed (i.e. ASC for the first pages, DESC for the last few pages). Then, the middle pages would be the most expensive.
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC

  • Delete from a 95 million row table ...

    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs ... it was due to a bad query, but I'm checking whether there is any other way to achieve this.
    Thanks in advance.

    user8604530 wrote:
    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs ... it was due to a bad query, but I'm checking whether there is any other way to achieve this. Thanks in advance.
    How many rows in the table BEFORE the DELETE?
    How many rows in the table AFTER the DELETE?
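    (For reference, the CTAS pattern being attempted usually keeps the rows that survive the delete rather than deleting in place - a sketch with hypothetical names and predicate:

    create table big_table_keep nologging parallel 4 as
      select * from big_table where <predicate for rows to KEEP>;
    -- then drop big_table, rename big_table_keep, and rebuild indexes/grants

    Whether that beats a plain DELETE depends on how many rows survive - hence the questions above.)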
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     user8604530
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     64
    Total Questions:     26 (22 unresolved)
    I extend to you my condolences since you rarely get your questions answered.

  • Inserting 10 million rows into a table hangs

    Hi, through Toad I am using a simple for loop to insert 10 million rows into a table, by saying
    for i in 1 .. 10000000 loop
      insert ...;
    end loop;
    It hangs........ for a long time.
    Is there a better way to insert the rows into the table?
    I have to test for performance.... and I have to insert 50 million rows into its child table.
    Practically, when the code moves to production it will have this many rows (maybe more) - that's why I have to test with this many rows.
    Please suggest a better way to do this.
    Regards
    raj

    Must be a 'hardware thing'.
    My ancient desktop (Pentium IV, 1.8 GHz, 512 MB), running XE, needs:
    MHO%xe> desc t
    Name                                      Null?    Type
    N                                                  NUMBER
    A                                                  VARCHAR2(10)
    B                                                  VARCHAR2(10)
    MHO%xe> insert /*+ APPEND */ into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:04:09.71
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:31.50
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.04
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:02:44.12
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:09.46
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:00.15
    MHO%xe> insert /*+ APPEND */ into t
      2   with my_data as (
      3   select level n, 'abc' a, 'def' b from dual
      4   connect by level <= 10000000
      5   )
      6   select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:03.89
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:27.17
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.15
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:56.10
    Yeah, it got 'cached' a bit (of course ;) )
    But the APPEND hint seems to shave about 50 sec off anyway (using NO indexes at all) on my 'configuration'.

  • Insert/select one million rows at a time from source to target table

    Hi,
    Oracle 10.2.0.4.0
    I am trying to insert around 10 million rows into table target from source as follows:
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    There is a unique index on the target table on (col1, col2).
    I was having issues with undo, and now I am getting the following error with temp space:
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    I believe it would be easier if I did a bulk insert of one million rows at a time and committed.
    I appreciate any advice on this, please.
    Thanks,
    Ashok
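    A sketch of that chunked idea (assuming the NOT EXISTS join is corrected to f.col1 = m.col1, as questioned in the reply below; ROWNUM caps each pass, and the NOT EXISTS filters out rows inserted by earlier passes):

    BEGIN
      LOOP
        INSERT INTO target
        SELECT * FROM source f
        WHERE NOT EXISTS (SELECT 1 FROM target m
                          WHERE f.col1 = m.col1 AND f.col2 = m.col2)
          AND ROWNUM <= 1000000;      -- at most one million rows per pass
        EXIT WHEN SQL%ROWCOUNT = 0;   -- nothing left to copy
        COMMIT;                       -- release undo between passes
      END LOOP;
      COMMIT;
    END;
    /

    Each pass re-scans target, so this trades extra total work for a smaller undo/temp footprint per pass.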

    902986 wrote:
    NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
    Anyway - how much of your data already exists in target compared to source?
    Do you have 10 million in source and very few in target, so most of source will be inserted into target?
    Or do you have 9 million already in target, so most of source will be filtered away and only a few records inserted?
    And what is the explain plan for your statement?
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    As your error has to do with TEMP, your statement might possibly try to do a lot of work in temp to materialize the resultset or parts of it, maybe to use in a hash join before inserting.
    So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
    That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
    Alternatively, ask your DBA to allocate more room in the TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP, in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute.

  • How can I put a file into a blob using JDBC?

    Hi
    I tried to put a file into a blob, but I ran into a problem...
    My environment: Windows 2000 Pro, JBuilder 5.0 Enterprise, Oracle 8.1.6 (Oracle JDBC driver not installed).
    A part of the program (my program is very ugly; if anyone wants, I'll paste it later... ~_~):
    //Statement stmt2 = null;
    //ResultSet rs2;
    //opa1 is the blob column
    void saveBlobTableToDisk(Connection con) {
      try {
        stmt2 = con.createStatement();
        sqlStr2 = "SELECT * FROM emp3 where id=1004";
        rs2 = stmt2.executeQuery(sqlStr2);
        while (rs2.next()) {
          Blob aBlob = rs2.getBlob("opa1");
    I got the exception:
    " null
    java.lang.UnsupportedOperationException
         at sun.jdbc.odbc.JdbcOdbcResultSet.getBlob(JdbcOdbcResultSet.java:4174)
         at test3.Frame1.saveBlobTableToDisk(Frame1.java:48)
         at test3.Frame1.<init>(Frame1.java:26)
         at test3.Application1.<init>(Application1.java:5)
         at test3.Application1.main(Application1.java:8) "
    and Windows popped up a message box saying (roughly) that memory at "0x09af007f" could not be read, error in javaw.exe.
    Later I used ResultSet.getBinaryStream() to work around it, but getBinaryStream() only returns an InputStream, so while I can turn a blob into a file, I can't turn a file into a blob using JDBC...
    I am struggling with installing Sun Java, the Oracle JDBC driver etc. (because I must set a lot of things such as classpath, JAVA_HOME etc). Can I do this using only JBuilder?
    Or must I install the Oracle JDBC driver?
    Thanks.
    D.T.

    My guess here is that Sun's JDBC-ODBC bridge doesn't handle the BLOB datatype. Most ODBC drivers don't support that datatype, so I wouldn't expect the bridge to.
    Is there a reason that you can't use the Oracle driver?
    Justin
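    With the Oracle driver in place, the file-to-BLOB direction is typically a parameterized update with a binary stream - a sketch reusing the thread's table and assuming a Connection con (the input file name is hypothetical, and modern try-with-resources syntax is used):

    File file = new File("opa.pdf");
    String sql = "UPDATE emp3 SET opa1 = ? WHERE id = ?";
    try (FileInputStream in = new FileInputStream(file);
         PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setBinaryStream(1, in, (int) file.length()); // stream the file into the BLOB
        ps.setInt(2, 1004);
        ps.executeUpdate();
    }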

  • How to update row by row in the JDBC sender adapter?

    Hi friends,
    Now I am reading data from a table using a select query, and the resulting data I am keeping in the FTP folder as an XML file.
    I want to:
    1. Know how many rows I read.
    2. Update the read-completed time in each row of the sender-side table.
    (I am using select * from a table where tag='n'. I am giving this in the Query SQL Statement of the JDBC sender adapter's processing parameters.
    I am writing the update query as update table set tag='y' where tag='n'.
    Will it perform row by row?)
    3. Insert into another R3 system table the rows which I read, as a log.
    Can you please give a procedure to do that?
    Expecting your reply asap.
    Thank you
    Best Regards,
    V.Rangarajan

    Hi raj,
    Thanks for your reply. I am new to XI; I am just doing a scenario. I am able to read the MS SQL Server table data using the JDBC sender adapter.
    Can I use the RFC adapter to insert the values into an R3 table?
    If I have mapped to the RFC fields, will it store into the table once we read the data from the MS SQL Server table using the select query of the JDBC sender adapter?
    Best Regards
    V.Rangarajan

  • SQL error when using JDBC

    I am trying to use JDBC to perform an update on a single row, but I keep getting the following exception:
    java.sql.SQLException: ORA-03120: two-task conversion routine: integer overflow
    at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java, Compiled Code)
    at oracle.jdbc.ttc7.Oall7.receive(Oall7.java, Compiled Code)
    at ...
    I cannot find any flaws in the SQL string I am trying to execute, and the exact same string works fine in SQL*Plus. Can anyone please explain this?
    I am using Oracle 8.0.4 and the 'jdbc80520-nt' driver.

    Can you post your SQL query?
