Download of big tables (> 50000 rows)

Hello,
I could not download CSV files from an interactive report when the result set is very big; the same happens if I try a 'data unload' in text format. The download breaks after a while and my browser shows: connection interrupted.
I use Apex 3.1 on Oracle 9iR2 or Apex 3.1.2 on Oracle 11gR1.
BTW: I use the mod_plsql interface.
Can someone help me?
Wolfgang

Hi Wolfgang,
I don't think that this is an APEX-specific issue, as the limit for Interactive Reports in CSV format is 65536 rows (please see: Interactive Report - .CSV Download limit to 65536 rows). This could be an Apache configuration issue.
Martin
[http://apex-smb.blogspot.com/]

Similar Messages

  • HS ODBC GONE AWAY ON BIG TABLE QRY

    Hello,
    I have an HS ODBC connection set up pointing to a MySQL 5.0 database on Windows, using MySQL ODBC 3.51.12. Oracle XE is on the same box, and tnsnames.ora, sqlnet.ora, and the HS init file are all set up.
    The problem is that I have a huge table, 100 million rows, in MySQL, and when I run a query against it in Oracle SQL Developer it runs for about two minutes and then I get errors: ORA-00942, lost connection, or gone away.
    I can run a query against a smaller table in the schema and it returns rows quickly, so I know the HS ODBC connection is working.
    I noticed that on the big-table query the HS service running on Windows starts up, uses 1.5 GB of memory, and maxes the CPU out at 95%; then the connection drops.
    Any advice on what to do here? There don't seem to be any config settings for the HS service to limit or increase the rows, or to increase the cache.
    MySQL does have some advanced ODBC driver options that I will try.
    Does anyone have any suggestions on how to handle this overloading problem?
    Thanks for the help,

    FYI, HS is Oracle Heterogeneous Services, for connecting to non-Oracle databases.
    I actually found a workaround. The table is so large that the query crashes, so I broke the table up into 5 MySQL views, and now I am able to query the views from an Oracle stored procedure that selects from each view and inserts into an Oracle table.
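    A minimal sketch of that workaround, assuming a database link named MYSQL_HS pointing at the HS service and the five MySQL views named bigtable_v1 .. bigtable_v5 (all names are illustrative; MySQL identifiers usually have to be double-quoted in lowercase when accessed through HS):
    INSERT INTO local_bigtable
       SELECT * FROM "bigtable_v1"@MYSQL_HS;
    COMMIT;
    -- repeat for "bigtable_v2" .. "bigtable_v5"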

  • Table.Join/Merge in Power Query takes extremely long time to process for big tables

    Hi,
    I tried to simply merge/inner join two big tables (one has 300,000+ rows after filtering and the other has 30,000+ rows after filtering) in PQ. However, for this simple join operation, PQ took at least 10 minutes to load the preview (I killed the Query Editor after 10 minutes of processing).
    Here's how I did the join: I first loaded the tables into the workbook, then did the filtering for each table, and at last used the merge function to do the join on a shared field.
    Did I do anything wrong here? Or is there any way to improve the load efficiency?
    P.S. No custom SQL was used in the process. I was hoping the so-called "Query Folding" could help speed up the process, but it seems it didn't work here.
    Thanks.
    Regards,
    Qilong

    Hi!
    You should import the source tables into Access. This will speed up Power Query's work several times over.

  • Regarding the SAP big tables in ECC 6.0

    Hi,
    We are running SAP ECC 6.0 on an Oracle 10.2 database. Can anyone give some detail on the big tables below? What are they? Where are they used? Do they need to be so big? Can we clean them up?
    Table          Size
    TST03        220 GB
    COEP         125 GB
    SOFFCONT1     92 GB
    GLPCA         31 GB
    EDI40         18 GB
    Thanks,
    Narendra

    Hello Narendra,
    TST03 merits special attention, certainly if it is the largest table in your database. TST03 contains the contents of spool requests and it often happens that at some time in the past there was a huge number of spool data in the system causing TST03 to inflate enormously. Even if this spool data was cleaned up later Oracle will not shrink the table automatically. It is perfectly possible that you have a 220 GB table containing virtually nothing.
    There are a lot of fancy scripts and procedures around to find out how much data is actually in the table, but personally I often use a quick-and-dirty check based on the current statistics.
    sqlplus /
    select (num_rows * avg_row_len)/(1024*1024) "MB IN USE" from dba_tables where table_name = 'TST03';
    This will produce a (rough) estimate of the amount of space actually taken up by rows in the table. If this is very far below 220 GB then the table is overinflated and you do best to reorganize it online with BRSPACE.
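    To compare that estimate against the space the segment actually occupies, a companion query against the standard DBA views (assuming you can query them):
    select sum(bytes)/(1024*1024) "MB ALLOCATED"
      from dba_segments
     where segment_name = 'TST03';
    If "MB IN USE" is a small fraction of "MB ALLOCATED", the table is a good candidate for the online reorganization described above.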
    As to the other tables: there are procedures for prevention, archiving and/or deletion for all of them. The best advice was given in an earlier response to your post, namely to use the SAP Database Management Guide.
    Regards,
    Mark

  • GTT table getting Row Exclusive lock

    I have a procedure which loads a table which is used for reporting.
    Currently it is a single process which picks up most of the data by joining 2-3 big tables (around 70-80 GB) and then loads the main table.
    The joins are on the PI, and partitions are also being used. Then a lot of updates are done on the main table to populate other columns, which are picked up from different tables.
    As the processing happens on a huge amount of data (around 1M rows), it takes a lot of time.
    The process takes around 40-45 minutes.
    I am planning to use parallelism and run it in 75 sessions, so the initial big insert will be faster, and later all the updates will be done on smaller chunks, which will benefit performance.
    I planned to use a GTT (global temporary table) so that I don't have to create 75 temp tables, one per session.
    But while using the GTT (ON COMMIT DELETE ROWS) in parallel sessions, I am seeing that the GTT gets a Row Exclusive lock and the time taken by the overall process increases by a lot.
    So I tested with 75 temp tables, and performance increased a lot, as assumed initially.
    Please let me know if there is any other way, or how to remove the locks on the GTT.

    First you should question why you think you need 75 GTTs.
    Also, your question contains no useful information (no four-digit Oracle version, no OS, no number of processors), and this paragraph:
    "Currently it is a single process which picks up most of the data after joining 2-3 big tables (around 70-80 GB) and then loading the main table. The joins are on PI and partitions are also being used. Then a lot of updates are done on the main table to update other columns which are being picked from different tables. As the processing is happening on a huge amount of data (around 1M), processing is taking a lot of time. This process is taking around 40-45 minutes."
    tells nothing, so basically your question boils down to:
    - Hey, I come from a SQL Server background (this is just an educated guess); how can I find a workaround to wreck Oracle and make it work like SQL Server?
    75 parallel sessions on, say, 4 to 8 processors is a very bad idea, as these sessions will simply be competing for resources.
    Also, a Row Exclusive lock is just that: a row-level lock. This isn't a problem, usually.
    Sybrand Bakker
    Senior Oracle DBA

  • How to use partitioning for a big table

    Hi,
    Oracle 10gR2/Redhat4
    RAC database
    ASM
    I have a big table, TRACES, that will also grow very fast; it currently has 15,000,000 rows.
    TRACES (ID NUMBER,
    COUNTRY_NUM NUMBER,
    Timestampe NUMBER,
    MESSAGE VARCHAR2(300),
    type_of_action VARCHAR(20),
    CREATED_TIME DATE,
    UPDATE_DATE DATE)
    The queries that hit this table are below, and they do a lot of disk I/O!
    select count(*) as y0_
    from TRACES this_
    where this_.COUNTRY_NUM = :1
    and this_.TIMESTAMP between :2 and :3
    and lower(this_.MESSAGE) like :4;
    SELECT *
    FROM (SELECT this_.id ,
    this_.TIMESTAMP
    FROM traces this_
    WHERE this_.COUNTRY_NUM = :1
    AND this_.TIMESTAMP BETWEEN :2 AND :3
    AND this_.type_of_action = :4
    AND LOWER (this_.MESSAGE) LIKE :5
    ORDER BY this_.TIMESTAMP DESC)
    WHERE ROWNUM <= :6;
    I have 16 distinct COUNTRY_NUM values in the table, and TIMESTAMPE is a number that the application inserts into the table.
    My question: is the best way to tune this table to partition it into small parts?
    I intend to partition it using a list on COUNTRY_NUM and a date (year/month); is that the best way to do it?
    NB: here is an example from TRACES in my test database:
    1 select COUNTR_NUM,count(*) from traces
    2 group by COUNTR_NUM
    3* order by COUNTR_NUM
    SQL> /
    COUNTR_NUM COUNT(*)
    -1 194716
    3 1796581
    4 1429393
    5 1536092
    6 151820
    7 148431
    8 76452
    9 91456
    10 91044
    11 186370
    13 76
    15 29317
    16 33470

    Hello,
    You can automate adding the monthly partitions using dbms_scheduler. Here is an example of your table partitioned by month:
    CREATE TABLE traces (
       id NUMBER,
       country_num NUMBER,
       timestampe NUMBER,
       message VARCHAR2 (300),
       type_of_action VARCHAR2 (20),
       created_time DATE,
       update_date DATE)
    TABLESPACE test_data  -- your tablespace name
    PARTITION BY RANGE (created_time)
       (PARTITION traces_200901
           VALUES LESS THAN
              (TO_DATE ('2009-02-01 00:00:00',
                        'SYYYY-MM-DD HH24:MI:SS',
                        'NLS_CALENDAR=GREGORIAN'))
           TABLESPACE test_data,  -- each partition can go in a different tablespace, i.e. different data files on different disks (reducing I/O contention)
        PARTITION traces_200902
           VALUES LESS THAN
              (TO_DATE ('2009-03-01 00:00:00',
                        'SYYYY-MM-DD HH24:MI:SS',
                        'NLS_CALENDAR=GREGORIAN'))
           TABLESPACE test_data);
    Regards
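    A minimal sketch of the dbms_scheduler automation mentioned above (ADD_TRACES_PARTITION is a hypothetical procedure you would write to issue the ALTER TABLE ... ADD PARTITION DDL for the coming month):
    BEGIN
       DBMS_SCHEDULER.create_job (
          job_name        => 'ADD_TRACES_PARTITION_JOB',
          job_type        => 'STORED_PROCEDURE',
          job_action      => 'ADD_TRACES_PARTITION',
          start_date      => SYSTIMESTAMP,
          repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
          enabled         => TRUE);
    END;
    /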

  • Performance question - Caching data of a big table

    Hi All,
    I have a general question about caching, I am using an Oracle 11g R2 database.
    I have a big table, about 50 million rows, that is accessed very often by my application. Some queries run slowly and some are OK. But (obviously) when the data from this table is already in the cache (so basically when a user requests the same thing twice or more) it runs very quickly.
    Does anybody have any recommendations about caching the data of a table this size?
    Many thanks.

    Chiwatel wrote:
    With better formatting (I hope), sorry I am not used to the new forum !
    Plan hash value: 2501344126
    | Id  | Operation                             | Name           | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                      |                |      1 |        |       |  7232 (100)|       |       |  68539 |00:14:20.06 |    212K |  87545 |       |       |          |
    |   1 |  SORT ORDER BY                        |                |      1 |   7107 |  624K |  7232  (1) |       |       |  68539 |00:14:20.06 |    212K |  87545 | 3242K |  792K | 2881K (0)|
    |   2 |   NESTED LOOPS                        |                |      1 |        |       |            |       |       |  68539 |00:14:19.26 |    212K |  87545 |       |       |          |
    |   3 |    NESTED LOOPS                       |                |      1 |   7107 |  624K |  7230  (1) |       |       |  70492 |00:07:09.08 |    141K |  43779 |       |       |          |
    |*  4 |     INDEX RANGE SCAN                  | CM_MAINT_PK_ID |      1 |   7107 |  284K |    59  (0) |       |       |  70492 |00:00:04.90 |     496 |    453 |       |       |          |
    |   5 |     PARTITION RANGE ITERATOR          |                |  70492 |      1 |       |     1  (0) |  KEY  |  KEY  |  70492 |00:07:03.32 |    141K |  43326 |       |       |          |
    |*  6 |      INDEX UNIQUE SCAN                | D1T400P0       |  70492 |      1 |       |     1  (0) |  KEY  |  KEY  |  70492 |00:07:01.71 |    141K |  43326 |       |       |          |
    |*  7 |    TABLE ACCESS BY GLOBAL INDEX ROWID | D1_DVC_EVT     |  70492 |      1 |    49 |     2  (0) | ROWID | ROWID |  68539 |00:07:09.17 |   70656 |  43766 |       |       |          |
    Predicate Information (identified by operation id):
      4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')
      6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
      7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
    Your user has executed a query to return 68,000 rows - what type of user is it? A human being cannot possibly cope with that much data, and it's not entirely surprising that it might take quite some time to return it.
    One thing I'd check is whether you're always getting the same execution plan - Oracle's estimates here are out by a factor of nearly 10 (7,100 rows predicted vs. 68,500 returned), so perhaps some of your variation in timing relates to plan changes.
    If you check the figures you'll see about half your time came from probing the unique index, and half came from visiting the table. In general it's hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it's possible that your best strategy is to protect this index at the cost of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the target is to fix things so that table blocks you won't revisit don't push index blocks you will revisit out of memory.
    Another detail to consider is that if you are visiting the index and table completely randomly (for 68,500 locations) it's possible that you end up re-reading blocks several times in the course of the visit. If you order the intermediate result set from the driving table first, you may find that you're walking the index and table in order and don't have to re-read any blocks. This is something only you can know, though. The code would have to change to include an inline view with a no_merge and no_eliminate_oby hint.
    Regards
    Jonathan Lewis
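    A sketch of the RECYCLE-cache idea described above (the pool size is purely illustrative and must be tested against your SGA; D1_DVC_EVT and D1T400P0 are the table and index from the plan):
    ALTER SYSTEM SET db_recycle_cache_size = 256M;         -- carve out a small pool for blocks you won't revisit
    ALTER TABLE d1_dvc_evt STORAGE (BUFFER_POOL RECYCLE);  -- table blocks now age out of this small pool quickly
    -- the index stays in the (larger) DEFAULT pool, so its blocks survive longer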

  • Max(serial_no) on a big table

    Hello Gurus,
    I am using max(serial_no) on a big table.
    How can I get good performance from such a query?
    SELECT MAX(SERIAL_NO) FROM DOCUMENTS agr WHERE STATUS NOT IN ('A', 'NA');
    | Id | Operation           | Name      | Rows | Bytes | Cost |
    |  0 | SELECT STATEMENT    |           |    1 |     8 | 5250 |
    |  1 |  SORT AGGREGATE     |           |    1 |     8 |      |
    |  2 |   TABLE ACCESS FULL | DOCUMENTS | 846K | 6613K | 5250 |
    There is no index on the STATUS column.
    Thanks in advance

    "NOT IN does not consider index."
    I wouldn't say so, generally:
    SQL> explain plan
       for
          select max (empno)
            from emp
           where empno not in
                    (7369,
                     7499,
                     7521,
                     7566,
                     7654,
                     7698,
                     7782,
                     7788,
                     7839,
                     7844,
                     7876,
                     7900)
    Explain complete.
    SQL> select * from table (dbms_xplan.display ())
    PLAN_TABLE_OUTPUT                                                                                                                                    
    Plan hash value: 1767367665                                                                                                                          
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                               
    |   0 | SELECT STATEMENT            |        |     1 |     4 |     2   (0)| 00:00:01 |                                                               
    |   1 |  SORT AGGREGATE             |        |     1 |     4 |            |          |                                                               
    |   2 |   FIRST ROW                 |        |     1 |     4 |     2   (0)| 00:00:01 |                                                               
    |*  3 |    INDEX FULL SCAN (MIN/MAX)| PK_EMP |     1 |     4 |     2   (0)| 00:00:01 |                                                               
    Predicate Information (identified by operation id):                                                                                                  
       3 - filter("EMPNO"<>7369 AND "EMPNO"<>7499 AND "EMPNO"<>7521 AND                                                                                  
                  "EMPNO"<>7566 AND "EMPNO"<>7654 AND "EMPNO"<>7698 AND "EMPNO"<>7782 AND                                                                
                  "EMPNO"<>7788 AND "EMPNO"<>7839 AND "EMPNO"<>7844 AND "EMPNO"<>7876 AND                                                                
                  "EMPNO"<>7900)                                                                                                                         
    18 rows selected.
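    For the original DOCUMENTS query, one option worth testing (hypothetical index name; verify the resulting plan on your own data) is a composite index covering both columns, so the query can be answered from the index alone without visiting the table:
    CREATE INDEX documents_sn_status_ix ON documents (serial_no, status);
    SELECT MAX(serial_no) FROM documents WHERE status NOT IN ('A', 'NA');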

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at any given time, while all the others are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (insert, update, delete) as well as query (select) operations, all of which happen daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for the master table's ID value, and TABLE_NAME for the master table's name) and create only one index on these 2 columns.
    2- Partition the table using its YEAR column, keep all FK columns but drop all indexes on them.
    3- Partition the table using its YEAR column, drop all FK columns, create (ID, TABLE_NAME) columns, and create an index on (TABLE_NAME, YEAR).
    Which approach is most efficient?
    Do I have to keep "master-detail" relations in mind when building Forms on this table?
    Are there any other suggestions?
    I am using Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation, and I will try to answer your questions. Please note that I am a developer in the first place and I am new to Oracle database administration, so please forgive any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No, I did not. And if I must do it, should I do it for all the database tables or only for this big table?
    Q: Actually, tracing the session with 10046 level 8 will give a clear idea of where your query is waiting.
    A: Actually, I do not know what you mean by "10046 level 8".
    Q: what OS and what kind of server (hardware) are you using
    A: I am using the Windows 2000 Server operating system; my server has 2 Intel XEON 500 MHz CPUs + 2.5 GB RAM + 4 x 36 GB hard disks (on a RAID 5 controller).
    Q: how many concurrent user do you have an how many transactions per hour
    A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1,000 transactions per hour.
    Q: How fast should your queries be executed
    A: I want the queries to be executed in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is heavily used, there is a very good chance of 2 or more transactions existing at the same time, one of them performing a query and the other a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
    Q:please show use the explain plan of these queries
    A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big ask because I cannot collect every kind of query that has been written against this table (some of them live in server packages, and the others are performed by Forms or Reports).
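    Two of the questions in this exchange went unanswered in-thread; a minimal sketch of both, with SCOTT as a placeholder schema (on 8.1.7, display plans with @?/rdbms/admin/utlxpls.sql after an EXPLAIN PLAN rather than with DBMS_XPLAN):
    -- gather optimizer statistics for the whole schema
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
    -- "10046 level 8" = SQL trace including wait events, for the current session
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the slow query, then:
    ALTER SESSION SET EVENTS '10046 trace name context off';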

  • Gather table stats takes time for big table

    The table has millions of records and is partitioned. When I analyze it using the following syntax it takes more than 5 hours, and it has only one index.
    I tried with auto sample size and also by changing the estimate percentage to values like 20, 50, 70, etc., but the time is almost the same.
    exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'TEST',granularity =>'ALL',ESTIMATE_PERCENT=>100,cascade=>TRUE);
    What should I do to reduce the analyze time for big tables? Can anyone help me?

    Hello,
    The behaviour of ESTIMATE_PERCENT may change from one release to another.
    In some releases, when you specify a "too high" (>25%, ...) ESTIMATE_PERCENT, you in fact collect the statistics over 100% of the rows, as in COMPUTE mode:
    Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]
    For later releases, 10g or 11g, you have the possibility to use the following value:
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
    In fact, you may use it even in 9.2, but in that release it is recommended to use a specific estimate value.
    Moreover, starting with 10.1 it is possible to schedule the statistics collection by using DBMS_SCHEDULER and to specify a window so that the job doesn't run during production hours.
    So, the answer may depend on the Oracle release and also on the application (SAP, PeopleSoft, ...).
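    Applied to the original command, that would look like this on 10g/11g (a sketch; granularity kept as in the question):
    exec dbms_stats.gather_table_stats(ownname=>'SYSADM', tabname=>'TEST', granularity=>'ALL', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, cascade=>TRUE);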
    Best regards,
    Jean-Valentin

  • Extract big table to a delimited file

    Hi Gurus,
    A big table, more than 4 GB in size, needs to be extracted/exported from a 10g DB into a text file;
    the column delimiter is "&|" and the row delimiter is "$#".
    I cannot do it from TOAD, as it hangs while extracting the big table.
    Any suggestion will be highly appreciated.
    Thanks in advance.

    >
    A big table, more than 4 GB in size, needs to be extracted/exported from a 10g DB into a text file;
    the column delimiter is "&|" and the row delimiter is "$#".
    I cannot do it from TOAD, as it hangs while extracting the big table.
    Any suggestion will be highly appreciated.
    >
    You will need to write your own code to do the unload.
    One possibility is to write a simple Java program and use JDBC to unload the data. This will let you unload the data to any client you run the app on.
    The other advantage of using Java for this is that you can easily ZIP the data as you unload it and use substantially less storage for the resulting file.
    See The Java Tutorials for simple examples of querying an Oracle DB and processing the result set.
    http://docs.oracle.com/javase/tutorial/jdbc/overview/index.html
    Another possibility is to use UTL_FILE. There are plenty of examples in the SQL and PL/SQL forum if you search for them.
    There is also a FAQ for 'How do I read or write an Excel file' (note: this also covers delimited files).
    SQL and PL/SQL FAQ
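    A minimal UTL_FILE sketch of such an unload (assumptions: a directory object DATA_DIR already exists, and col1..col3 stand in for the real column list; note that PUT_LINE appends a newline after each "$#" row delimiter, which the consumer must tolerate or strip):
    DECLARE
       f UTL_FILE.FILE_TYPE;
    BEGIN
       f := UTL_FILE.FOPEN('DATA_DIR', 'bigtable.txt', 'w', 32767);
       FOR r IN (SELECT col1, col2, col3 FROM bigtable) LOOP
          UTL_FILE.PUT_LINE(f, r.col1 || '&|' || r.col2 || '&|' || r.col3 || '$#');
       END LOOP;
       UTL_FILE.FCLOSE(f);
    END;
    /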

  • How to Slice big table in chunks

    I am trying to derive a piece of generic logic that would cut any big table into chunks of a definite size. The goal is to perform an update in chunks and avoid "rollback segment too small" issues. The full table scan for the update is unavoidable, since the update targets every row of the table.
    The BIGTABLE has 63 million rows. The purpose of the SQL below is to return a ROWID every two million rows. So I used the ROWNUM pseudocolumn and ran a test to check it. I expected the first chunk to have 2 million rows, but in fact that is not the case.
    Here is the code (NOTE: I had many problems with quotes when posting, so some ROWIDs lost their enclosing quotes; they are restored below where possible):
    select rn, mod, frow, rownum from (
        select rowid rn ,  rownum frow, mod(rownum, 2000000) mod 
      from bigtable order by rn) where mod = 0
    SQL> /
    RN                        MOD       FROW     ROWNUM
    AAATCjAA0AAAKAVAAd          0    4000000          1
    AAATCjAA0AAAPUEAAv          0   10000000          2
    AAATCjAA0AAAbULAAx          0    6000000          3
    AAATCjAA0AAAsIeAAC          0   14000000          4
    AAATCjAA0AAAzhSAAp          0    8000000          5
    AAATCjAA0AABOtGAAa          0   26000000          6
    AAATCjAA0AABe24AAE          0   16000000          7
    AAATCjAA0AABjVgAAQ          0   30000000          8
    AAATCjAA0AABn4LAA3          0   32000000          9
    AAATCjAA0AAB3pdAAh          0   20000000         10
    AAATCjAA0AAB5dmAAT          0   22000000         11
    AAATCjAA0AACrFuAAW          0   36000000         12
    AAATCjAA6AAAXpOAAq          0    2000000         13
    AAATCjAA6AAA8CZAAO          0   18000000         14
    AAATCjAA6AABLAYAAj          0   12000000         15
    AAATCjAA6AABlwbAAg          0   52000000         16
    AAATCjAA6AACBEoAAM          0   38000000         17
    AAATCjAA6AACCYGAA1          0   24000000         18
    AAATCjAA6AACKfBABI          0   28000000         19
    AAATCjAA6AACe0cAAS          0   34000000         20
    AAATCjAA6AAFmytAAf          0   62000000         21
    AAATCjAA6AAFp+bAA6          0   60000000         22
    AAATCjAA6AAF6RAAAQ          0   44000000         23
    AAATCjAA6AAHJjDAAV          0   40000000         24
    AAATCjAA6AAIR+jAAL          0   42000000         25
    AAATCjAA6AAKomNAAE          0   48000000         26
    AAATCjAA6AALdcMAA3          0   46000000         27
    AAATCjAA9AAACuuAAl          0   50000000         28
    AAATCjAA9AABgD6AAD          0   54000000         29
    AAATCjAA9AADiA2AAC          0   56000000         30
    AAATCjAA9AAEQMPAAT          0   58000000         31
    31 rows selected.
    SQL> select count(*) from BIGTABLE where rowid < 'AAATCjAA0AAAKAVAAd' ;
      COUNT(*)
        518712             <-- expected around 2 000 000
    SQL> select count(*) from BIGTABLE where rowid < 'AAATCjAA0AAAPUEAAv' ;
      COUNT(*)
       1218270     <-- expected around 4 000 000
    SQL> select count(*) from BIGTABLE where rowid < 'AAATCjAA0AAAbULAAx' ;
      COUNT(*)
       2685289    <-- expected around 6 000 000
    Amazingly, this code works perfectly for small tables but fails for big tables. Does anybody have an explanation, and possibly a solution?
    Here is the full code of the SQL that is supposed to generate all the predicates I need to add to the UPDATE statements in order to cut them into pieces:
    select line  from (
       with v as (select rn, mod, rownum frank from (
           select rowid rn ,  mod(rownum, 2000000) mod
               from BIGTABLE order by rn ) where mod = 0),
          v1 as (
                  select rn , frank, lag(rn) over (order by frank) lag_rn  from v ),
          v0 as (
                  select count(*) cpt from v)
        select 1, case
                    when frank = 1 then ' and rowid  <  ''' ||  rn  || ''''
                    when frank = cpt then ' and rowid >= ''' || lag_rn ||''' and rowid < ''' ||rn || ''''
                    else ' and rowid >= ''' || lag_rn ||''' and rowid <'''||rn||''''
                 end line
    from v1, v0
    union
    select 2, case
               when frank =  cpt then   ' and rowid >= ''' || rn  || ''''
              end line
        from v1, v0 order by 1)
    and rowid  <  'AAATCjAA0AAAKAVAAd'
    and rowid >= 'AAATCjAA0AAAKAVAAd' and rowid < 'AAATCjAA0AAAPUEAAv'
    and rowid >= 'AAATCjAA0AAAPUEAAv' and rowid < 'AAATCjAA0AAAbULAAx'
    and rowid >= 'AAATCjAA0AAAbULAAx' and rowid < 'AAATCjAA0AAAsIeAAC'
    and rowid >= 'AAATCjAA0AAAsIeAAC' and rowid < 'AAATCjAA0AAAzhSAAp'
    and rowid >= 'AAATCjAA0AAAzhSAAp' and rowid < 'AAATCjAA0AABOtGAAa'
    and rowid >= 'AAATCjAA0AAB3pdAAh' and rowid < 'AAATCjAA0AAB5dmAAT'
    and rowid >= 'AAATCjAA0AAB5dmAAT' and rowid < 'AAATCjAA0AACrFuAAW'
    and rowid >= 'AAATCjAA0AABOtGAAa' and rowid < 'AAATCjAA0AABe24AAE'
    and rowid >= 'AAATCjAA0AABe24AAE' and rowid < 'AAATCjAA0AABjVgAAQ'
    and rowid >= 'AAATCjAA0AABjVgAAQ' and rowid < 'AAATCjAA0AABn4LAA3'
    and rowid >= 'AAATCjAA0AABn4LAA3' and rowid < 'AAATCjAA0AAB3pdAAh'
    and rowid >= 'AAATCjAA0AACrFuAAW' and rowid < 'AAATCjAA6AAAXpOAAq'
    and rowid >= 'AAATCjAA6AAA8CZAAO' and rowid < 'AAATCjAA6AABLAYAAj'
    and rowid >= 'AAATCjAA6AAAXpOAAq' and rowid < 'AAATCjAA6AAA8CZAAO'
    and rowid >= 'AAATCjAA6AABLAYAAj' and rowid < 'AAATCjAA6AABlwbAAg'
    and rowid >= 'AAATCjAA6AABlwbAAg' and rowid < 'AAATCjAA6AACBEoAAM'
    and rowid >= 'AAATCjAA6AACBEoAAM' and rowid < 'AAATCjAA6AACCYGAA1'
    and rowid >= 'AAATCjAA6AACCYGAA1' and rowid < 'AAATCjAA6AACKfBABI'
    and rowid >= 'AAATCjAA6AACKfBABI' and rowid < 'AAATCjAA6AACe0cAAS'
    and rowid >= 'AAATCjAA6AACe0cAAS' and rowid < 'AAATCjAA6AAFmytAAf'
    and rowid >= 'AAATCjAA6AAF6RAAAQ' and rowid < 'AAATCjAA6AAHJjDAAV'
    and rowid >= 'AAATCjAA6AAFmytAAf' and rowid < 'AAATCjAA6AAFp+bAA6'
    and rowid >= 'AAATCjAA6AAFp+bAA6' and rowid < 'AAATCjAA6AAF6RAAAQ'
    and rowid >= 'AAATCjAA6AAHJjDAAV' and rowid < 'AAATCjAA6AAIR+jAAL'
    and rowid >= 'AAATCjAA6AAIR+jAAL' and rowid < 'AAATCjAA6AAKomNAAE'
    and rowid >= 'AAATCjAA6AAKomNAAE' and rowid < 'AAATCjAA6AALdcMAA3'
    and rowid >= 'AAATCjAA6AALdcMAA3' and rowid < 'AAATCjAA9AAACuuAAl'
    and rowid >= 'AAATCjAA9AAACuuAAl' and rowid < 'AAATCjAA9AABgD6AAD'
    and rowid >= 'AAATCjAA9AABgD6AAD' and rowid < 'AAATCjAA9AADiA2AAC'
    and rowid >= 'AAATCjAA9AADiA2AAC' and rowid < 'AAATCjAA9AAEQMPAAT'
    and rowid >= 'AAATCjAA9AAEQMPAAT'
    33 rows selected.
    SQL> select count(*) from BIGTABLE where  1=1 and rowid  <  'AAATCjAA0AAAKAVAAd' ;
      COUNT(*)
        518712
    SQL> select count(*) from BIGTABLE where  1=1 and rowid  >= 'AAATCjAA9AAEQMPAAT' ;
      COUNT(*)
       1846369
    Nice, but not accurate...

    Yes, it works as intended now (the difference: ROWNUM is assigned after the ORDER BY rn here, so frow counts rows in rowid order; quotes restored where the forum ate them):
    select rn, mod, frow, rownum
      from (select rn, rownum frow, mod(rownum, 2000000) mod
              from (select rowid rn from BIGTABLE order by rn)
            order by rn)
     where mod = 0
    SQL> /
    RN                        MOD       FROW     ROWNUM
    AAATCjAA0AAAVNlAAQ          0    2000000          1
    AAATCjAA0AAAlxyAAS          0    4000000          2
    AAATCjAA0AAA2CRAAQ          0    6000000          3
    AAATCjAA0AABFcoAAn          0    8000000          4
    AAATCjAA0AABVIDAAi          0   10000000          5
    AAATCjAA0AABoSEAAU          0   12000000          6
    AAATCjAA0AAB3YrAAf          0   14000000          7
    AAATCjAA0AACE+oAAS          0   16000000          8
    AAATCjAA0AACR6dAAR          0   18000000          9
    AAATCjAA0AACe8AAAa          0   20000000         10
    AAATCjAA0AACt3CAAS          0   22000000         11
    AAATCjAA6AAAPXrAAT          0   24000000         12
    AAATCjAA6AAAgO4AA5          0   26000000         13
    AAATCjAA6AAAwKfAAu          0   28000000         14
    AAATCjAA6AABAQBAAH          0   30000000         15
    AAATCjAA6AABREdAA9          0   32000000         16
    AAATCjAA6AABhFIAAT          0   34000000         17
    AAATCjAA6AABxyZAAj          0   36000000         18
    AAATCjAA6AACA5CAAm          0   38000000         19
    AAATCjAA6AACNJBAAN          0   40000000         20
    AAATCjAA6AACbLgAAV          0   42000000         21
    AAATCjAA6AACoukAAD          0   44000000         22
    AAATCjAA6AAFsS8AAO          0   46000000         23
    AAATCjAA6AAF36JAAa          0   48000000         24
    AAATCjAA6AAHJzoAAv          0   50000000         25
    AAATCjAA6AAKMCHAAv          0   52000000         26
    AAATCjAA6AAL2RbAAT          0   54000000         27
    AAATCjAA9AABLLbAAH          0   56000000         28
    AAATCjAA9AACmSyAAA          0   58000000         29
    AAATCjAA9AAEO/nAAe          0   60000000         30
    AAATCjAA9AAEbC7AAI          0   62000000         31
    31 rows selected.
    SQL> select count(*) cpt from BIGTABLE where rowid < 'AAATCjAA0AAAVNlAAQ' ;
           CPT
       1999999
    SQL> select count(*) cpt from BIGTABLE where rowid >= 'AAATCjAA6AAAgO4AA5' and rowid < 'AAATCjAA6AAAwKfAAu' ;
           CPT
       2000000
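    For what it's worth, on 11gR2 and later the database can do this slicing itself: DBMS_PARALLEL_EXECUTE builds the rowid ranges and runs the update chunk by chunk, avoiding the hand-built predicates entirely. A sketch (the table name, SET clause and sizing are placeholders):
    DECLARE
       l_task VARCHAR2(30) := 'update_bigtable';
    BEGIN
       DBMS_PARALLEL_EXECUTE.create_task(l_task);
       DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
          task_name   => l_task,
          table_owner => USER,
          table_name  => 'BIGTABLE',
          by_row      => TRUE,
          chunk_size  => 2000000);
       DBMS_PARALLEL_EXECUTE.run_task(
          task_name      => l_task,
          sql_stmt       => 'UPDATE bigtable SET some_col = 1 WHERE rowid BETWEEN :start_id AND :end_id',
          language_flag  => DBMS_SQL.NATIVE,
          parallel_level => 4);
       DBMS_PARALLEL_EXECUTE.drop_task(l_task);
    END;
    /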

  • Query to get rows between x and y + table total row count

    Hi,
    I'm trying to get rows between e.g. 100 and 150 when sorting by some column.
    My query is now like this:
    SELECT  *
    FROM
      (SELECT a.*,
        row_number() OVER (ORDER BY OWNER ASC) rn,
        COUNT(1) over() mrn
      FROM (SELECT * FROM all_objects) a
      )
    WHERE rn BETWEEN least(100,mrn) AND 150
    It is not very fast if I run that kind of select on a big table.
    Any tips to optimize this query?
    Regards,
    Jari

    Hi,
    I'm so bad with SQL :(
    Here is a sample where I need that kind of query:
    http://actionet.homelinux.net/htmldb/f?p=100:504
    It looks like a standard Apex report, but it is not.
    It is just a test of using a query to output HTML via the htp package.
    I send as parameters the start row, max rows, and the column to order by.
    Now I build the query for a cursor:
        /* Report query */
        l_sql := '
          SELECT * FROM(
            SELECT a.*, row_number() OVER (ORDER BY '
            || l_sort_col
            || ' '
            || l_sort_ord
            || ') rn, COUNT(1) over() mrn
            FROM(' || l_sql || ') a
          ) WHERE rn BETWEEN LEAST(' || l_start_row || ',mrn) AND ' || l_last_row
        ;
    Then I loop over the cursor and output the HTML.
    You can order the report by clicking a heading, and there is the pagination.
    I did not know that I could find examples by searching for "pagination query" =), thanks to both of you for that tip.
    I know Apex has reports,
    but I need features that Apex templates cannot provide, at least yet.
    So I have concluded that generating HTML tables using a custom package instead of an Apex report is the best approach to get the layouts I need.
    Regards,
    Jari
    Edited by: jarola on Jan 21, 2011 10:38 PM
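    For reference, the classic "pagination query" both repliers pointed at pushes a ROWNUM stop-key into the inner query, so Oracle can stop fetching after the last requested row instead of numbering the whole table; the COUNT(1) OVER () total is what forces a full scan, so it is often cheaper to fetch the total count separately. A sketch with the binds written literally:
    SELECT *
      FROM (SELECT a.*, ROWNUM rn
              FROM (SELECT * FROM all_objects ORDER BY owner ASC) a
             WHERE ROWNUM <= 150)
     WHERE rn >= 100;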

  • Big tables from MySQL to MSSQL: Connection timeout error

    HI,
    I have a big database, around 46 GB, in MySQL format, and I managed to convert the whole database to MSSQL except two tables, the biggest ones. When I try to migrate those 2 tables, one by one, after a while I get the error message "Connection timeout and was disabled".
    I increased the timeout in the SSMA options from 15 to 1440 and decreased the batch size from 1000 to 500, and the same thing happens. The tables have 52 million and 110 million rows, at 1.5 GB and 6.5 GB.
    What can I do to migrate them?
    Thank You

    Hi,
    According to your description, we need to verify that you have installed the latest version of the MySQL ODBC driver. If you have, then in order to keep each transaction short and avoid the timeout, you can try reducing the batch size to a lower value such as 200 or 100 in the SSMA options. Also, as Raju's post suggests, you can try using incremental data migration in SSMA to migrate larger tables, then check whether it succeeds.
    In addition, you can use other methods to migrate big tables from MySQL to SQL Server. For example, you can copy the data directly into SQL Server using OPENQUERY, and you could include a WHERE clause to limit the rows. For more details, please review this blog: Migrate MySQL to Microsoft SQL Server. Or you can write queries for MySQL to export your data as CSV, and then use the BULK INSERT feature of SQL Server to import the CSV data.
    Thanks
    Lydia Zhang
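    A sketch of the OPENQUERY route (MYSQL_LINKED is a hypothetical linked server pointing at the MySQL database; the WHERE clause limits each batch, as suggested above):
    INSERT INTO dbo.BigTable (id, col1)
    SELECT id, col1
    FROM OPENQUERY(MYSQL_LINKED,
         'SELECT id, col1 FROM bigdb.bigtable WHERE id BETWEEN 1 AND 10000000');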

  • How to process big table piece by piece

    I have two big tables, each with 10,000,000 rows.
    I want to compare the contents of these 2 tables.
    Since the tables are big, I don't want to save them to disk first and then do the comparison,
    so my processing model is like this:
    while (haveMoreRows) {
        get 500 rows from table 1;
        get 500 rows from table 2;
        compare these 500 rows;
    }
    I implemented this with a Statement with the TYPE_SCROLL_INSENSITIVE attribute, using the absolute() method to locate rows.
    But when comparing, right at an absolute() call with a large row number, such as absolute(100000), an OutOfMemoryError occurred.
    It seems that JDBC does not release the resources.
    So how can I solve this problem?
    I just want a 500-row buffer: when the rows in the buffer are finished processing, discard them and start the next run. How do I implement this?
    Thanks in advance.

    Hi
    Firstly, do not use TYPE_SCROLL_INSENSITIVE; it is extremely heavy for the DB server. If possible use TYPE_FORWARD_ONLY, which is the default.
    I advise using multithreading: create two threads, one for the first table and a second for the second table. Create locks....
    I hope this example will help
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Vector;

    public class MyThread extends Thread {
         // buffer
         private Vector dbPart = new Vector();
         // my class for data from a DB table (kind of structure); not shown here
         private PojDRej p;
         private int counter = 1;
         // for safety... ends the thread
         private boolean dead = false;
         private ResultSet rs;
         private Connection conn;
         // true when the buffer is full
         private boolean full = false;
         // number of rows
         private int total = 0;
         // buffer limit, in your case 500
         private int limit;

         public void run() {
              while (!dead) {
                   try {
                        while (rs.next()) {
                             p = new PojDRej(rs);
                             try {
                                  put(p);
                             } catch (InterruptedException e2) {
                                  e2.printStackTrace();
                             }
                        }
                        dead = true;
                        full = true;
                   } catch (SQLException e) {
                        e.printStackTrace();
                        dead = true;
                        full = true;
                   }
              }
              try {
                   rs.close();
                   if (conn != null)
                        conn.close();
              } catch (SQLException e) {
                   e.printStackTrace();
              }
         }

         // blocks the consumer until a full part (500 rows) of data is ready,
         // then hands it over and lets this thread refill the buffer
         public synchronized Vector getDBPart() throws InterruptedException {
              while (!full && !dead)
                   wait();
              Vector dbAll = new Vector(dbPart);
              dbPart.clear();
              full = false;
              notifyAll();
              return dbAll;
         }

         // the most important part: stops this thread when the buffer is full
         // and waits until some class takes the data by calling getDBPart()
         private synchronized void put(PojDRej p) throws InterruptedException {
              while (dbPart.size() >= limit) {
                   full = true;
                   notifyAll();
                   wait();
              }
              dbPart.add(p);
         }
    }
    I think that everything is clear. Good luck!
