Large tables truncated or withheld from webhelp

I'm running into a major issue trying to include a large table in my WebHelp build. I'm using RoboHelp 8 in Word. When I include a large table (6 columns x 180 rows), the table is either truncated or withheld from the compiled WebHelp.
I've tried several things to resolve it, but they all end with the same result. I've tried importing the table from its original Word file. I've tried breaking it up into many smaller tables. I've tried building a new table in Word, then copying the data in. Oddly enough, if I build the table blank and compile, the table appears. But once I copy data into the table, it disappears.
RoboHelp seems unable to process the table: even when I break the single table into several smaller tables, it chokes. It doesn't include the table, or even put the topic in the TOC, even though it is in the source file.
Any ideas? I've not been able to find anything in the forums or anywhere else online.
Many thanks!

Can you tell us what you mean by "using RoboHelp in Word"? Do you mean you are using it as your editor, or that you are using the RoboHelp for Word application? If the latter, is there a reason why you can't use the RoboHelp HTML application? It is much better suited to producing WebHelp. Personally I wouldn't touch the HTML that Word creates with a bargepole.

Similar Messages

  • LARGE TABLE IN ORACLE

    How can I migrate large, multi-million-row tables from Oracle 8i to 9i? Please explain the process.

    Vijay,
    this is the wrong forum for this question. Ask yourself these questions though:
    Is this an in-situ upgrade? If so, there will be data migration scripts as part of the upgrade; otherwise, there may be transportable tablespaces you can use.
    Are the machines disparate? Is the data temporal in nature, so you can subsection the data move, holding back the volatile data until you need to switch over?
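    If transportable tablespaces turn out to be an option, a rough sketch of that route (hedged: the tablespace name USERS and the file names are placeholders, and the exp/imp steps run from the OS shell, not SQL*Plus):
    -- Check that the tablespace set is self-contained before transporting it.
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
    SELECT * FROM transport_set_violations;
    -- Make the tablespace read-only, export its metadata, copy the datafiles, then import on the target.
    ALTER TABLESPACE users READ ONLY;
    -- OS shell (source): exp transport_tablespace=y tablespaces=users file=users_meta.dmp
    -- OS shell (target): imp transport_tablespace=y datafiles='...' file=users_meta.dmp
    ALTER TABLESPACE users READ WRITE;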

  • Deleting rows from very large table

    Hello,
    I need to delete rows from a large table, but not all of them, so I can't use truncate. The delete condition is based on one column, something like this:
    delete from very_large_table where col1=100;
    There's an index (valid, B-tree) on col1, but it still goes very slow. Is there any instruction which can help delete rows faster?
    Thanks in advance.
    A.

    Your manager doesn't agree to your running an EXPLAIN PLAN? What is his objection? Sounds like the prototypical 'pointy-haired boss'.
    Take a look at these:
    -- do_explain.sql
    spool explain.txt
    -- do EXPLAIN PLAN on target queries with current index definitions
    truncate table plan_table;
    set echo on
    explain plan for
    <insert query here>
    set echo off
    @get_explain.sql
    spool off

    -- get_explain.sql
    set linesize 120
    set pagesize 70
    column operation   format a25
    column query_plan  format a35
    column options     format a15
    column object_name format a20
    column order       format a12
    column opt         format a6
    select lpad(' ', level) || operation "OPERATION",
           options "OPTIONS",
           decode(to_char(id), '0', 'COST = ' || nvl(to_char(position), 'n/a'), object_name) "OBJECT NAME",
           cardinality "rows",
           substr(optimizer, 1, 6) "OPT"
    from   plan_table
    start  with id = 0
    connect by prior id = parent_id;
    There are probably newer, better ways, but this should work with all living versions of Oracle, and it's something I've had in my back pocket for several years now. It's not actually executing the query or DML in question, just running an EXPLAIN PLAN on it.
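    One of those newer ways, if you are on 9i or later where DBMS_XPLAN is available (a minimal sketch, using the poster's DELETE as the target statement):
    explain plan for
    delete from very_large_table where col1 = 100;
    select * from table(dbms_xplan.display);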

  • How to truncate the values from the table

    Hi All,
    I am working on an issue where we first delete all the records from a table and then, based on a few conditions, put records back into that table. When we ran this program along with a few others that do almost the same thing, we had issues: we scheduled a few jobs for these programs, but after a certain amount of time a couple of the jobs got cancelled. I talked to my Basis guy and he said the problem is that, rather than truncating the table, we are deleting the records, and that takes a lot of time and space to execute. So we need to truncate the table instead of deleting from it. We are using the following statement right now:
        DELETE FROM ZTUS_PG.
        COMMIT WORK.
    So can you please tell me how we can truncate this table instead of just deleting from it, and what the effect of that would be?
    Thanks,
    Rajeev Gupta

    I don't think Basis is saying you should delete all the records from the table. They are saying you should remove the table's contents in a single operation (a much faster thing to do). I'm not sure this is the right thing to do, but you can have a look at:
    http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.apdv.sample.doc/doc/admin_scripts/s-truncate-db2.htm
    Something like:
    EXEC SQL.
      TRUNCATE TABLE ZTUS_PG REUSE STORAGE
    ENDEXEC.
    COMMIT WORK.                      "Empty table is committed here
    Rob
    Edited by: Rob Burbank on Dec 1, 2008 4:06 PM

  • Retrieve data from a large table from ORACLE 10g

    I am working on a Microsoft Visual Studio project that requires retrieving data from a large table in an Oracle 10g database and exporting the data to the hard drive.
    The problem here is that I am not able to connect to the database directly because of a license issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privilege/license permission on the database to perform retrieval of data. So I am not able to use DTS/SSIS or another tool that imports data by connecting to the database directly.
    My approach is to first retrieve the data using the API into a .NET DataTable and then dump the records from it to the hard drive in a specific format (it might be an Excel file or another SQL Server database).
    When I try to retrieve the data from a large table having over 1.3 million (13 lakh) records (3-4 GB) into a DataTable using the Visual Studio project, I get an Out of Memory exception.
    Is there a better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
    Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

    Girish... Thanks for your reply... But I am sorry for the confusion. Let me explain:
    1. "export the data into another media into the hard drive."
    What do you mean by this line, i.e. "another media into the hard drive"?
    ANS: Sorry... I just want to write the data to a file or to a table in a SQL Server database.
    2. "I am not able to connect to the database directly because of license issue"
    Huh? I have never heard of a user not being able to connect to the DB because of a license. What error/message are you getting?
    ANS: My company uses a 3rd-party application that uses Oracle 10g. My company is licensed to use the 3rd-party application (app + database is a package) and did not purchase an Oracle license for direct use. So I cannot connect to the database directly.
    3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or a similar kind of control, in which I can select (with a SELECT query) as many rows as I need; no issue.
    ANS: This API is provided by the 3rd-party application vendor. I can pass a query to it and it returns a DataTable.
    4. "better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?"
    ANS: As I get a system error (out of memory) when I select all rows into a DataTable at once, I want to retrieve the data in multiple phases.
    E.g.: 1 to 20,000 records in the 1st phase,
    20,001 to 40,000 records in the 2nd phase,
    40,001 to ... records in the 3rd phase,
    and so on...
    Please let me know if this does not clear up your confusion... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM
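    If the vendor API will accept arbitrary SQL, one way to implement those phases is a standard ROWNUM window per call (a sketch only; the table and key names are placeholders, and a stable ORDER BY key is needed so the phases do not overlap):
    SELECT *
    FROM ( SELECT t.*, ROWNUM rnum
           FROM ( SELECT * FROM source_table ORDER BY primary_key_col ) t
           WHERE ROWNUM <= 40000 )   -- upper bound of this phase
    WHERE rnum > 20000;              -- lower bound of this phase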

  • How do I open and use a large table from Word in Pages?

    I upgraded my MBP from Snow Leopard to Mountain Lion a couple days ago.  I knew that my old Word application wouldn't work, but several Apple people assured me that Pages could handle my old docs, including tables.
    So, I purchased and installed Pages '09 this morning. I opened the table I use all the time, and most of it is missing! Apparently, Pages doesn't handle large tables.
    I need help - desperately!  This table contains all my medical expenses for the year, so I have to have it.
    Thanks

    J,
    Besides needing to have your Table Object Floating, you should know that the maximum number of rows in Pages Tables is 999.
    If you continue to have problems opening your Word document and its table in Pages, try one of the free Office Clones, LibreOffice, OpenOffice, IBM Lotus Symphony, etc.
    I'd bet that at least one of those free apps will work if Pages doesn't. By the way, have you tried viewing your Word document in Quick Look? To do that, click on the filename in Finder and hit the Spacebar key. You won't be able to do anything but view the document in Quick Look, but it would give you confidence that your file is OK.
    Jerry

  • Lookup from a large table?

    Hi, All
    Very new to APEX. If I have a very large table of securities and want the user to be able to look up a security a number of different ways (name, CUSIP, or ticker), is there a way to do this in APEX? When they are faced with entering a "security id", they need a way to retrieve the correct one.
    Thanks

    Hi
    Yes, this is very simple.
    Essentially, you create a number of items where they can enter the details.
    In your case, let's call them P1_SEC_ID, P1_NAME, P1_CUSIP and P1_TICKER.
    Next, create a button called GO (or whatever you want) that branches back to the same page.
    Now create a report region with a source something like this...
    SELECT *
    FROM my_big_table
    WHERE INSTR(UPPER(sec_id),UPPER(:P1_SEC_ID)) > 0
    OR    INSTR(UPPER(name),UPPER(:P1_NAME)) > 0
    OR    INSTR(UPPER(cusip),UPPER(:P1_CUSIP)) > 0
    OR    INSTR(UPPER(ticker),UPPER(:P1_TICKER)) > 0
    Next, make the report region conditional on the request value being 'GO' and this should work for you.
    Cheers
    Ben

  • SELECTing from a large table vs small table

    I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (fewer rows), in terms of time taken.
    The general consensus seemed to be that it would be the same, except for the time taken to update the index (which would be negligible).
    1. But now, following the same logic, I am confused about why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table.
    (SELECTing using an index)
    My understanding of how Oracle works internally is this:
    It will first locate the ROWID from the B-tree that stores the index.
    (This operation is O(log N) for a B-tree.)
    The ROWID essentially contains the file pointer offset of the location of the data on disk.
    And Oracle simply reads the data from the location it deduced from the ROWID.
    But then the only variable I see is searching the B-tree, which should take O(log N) time (N = number of rows).
    Am I correct in the above?
    2. Also, I read that tables are partitioned for performance reasons, and I read about various partitioning mechanisms, but I cannot figure out how partitioning can result in a performance improvement.
    Can somebody please help?

    user597961 wrote:
    I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (fewer rows), in terms of time taken.
    The general consensus seemed to be that it would be the same, except for the time taken to update the index (which would be negligible).
    1. But now, following the same logic, I am confused about why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table.
    (SELECTing using an index)
    My understanding of how Oracle works internally is this:
    It will first locate the ROWID from the B-tree that stores the index.
    (This operation is O(log N) for a B-tree.)
    The ROWID essentially contains the file pointer offset of the location of the data on disk.
    And Oracle simply reads the data from the location it deduced from the ROWID.
    But then the only variable I see is searching the B-tree, which should take O(log N) time (N = number of rows).
    Am I correct in the above?
    2. Also, I read that tables are partitioned for performance reasons, and I read about various partitioning mechanisms, but I cannot figure out how partitioning can result in a performance improvement.
    Can somebody please help?

    It's not going to be that simple. Before your first step (locate the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation will be based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal". Then, once the plan is selected and the rubber meets the road, we have to contend with the fact that all else is hardly ever equal.
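    On the second question, a minimal illustration of how partitioning can help (hypothetical table and partition names): when the predicate references the partition key, the optimizer can prune the partitions that cannot match and scan only the remainder instead of the whole table.
    CREATE TABLE sales_part (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
      PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01')
    );
    -- Only partition p2010 is scanned for this query:
    SELECT SUM(amount) FROM sales_part WHERE sale_date >= DATE '2010-06-01';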

  • How to efficiently select random rows from a large table ?

    Hello,
    The following code will select 5 rows out of a random set of rows from the emp (employee) table
    select *
      from ( select ename, job
               from emp
              order by dbms_random.value() )
     where rownum <= 5
    My concern is that the inner select will cause a table scan in order to assign a random value to each row. This code, when used against a large table, can be a performance problem.
    Is there an efficient way of selecting random rows from a table without having to do a table scan? (I am new to Oracle, therefore it is possible that I am missing a very simple way to perform this task.)
    thank you for your help,
    John.
    Edited by: 440bx on Jul 10, 2010 6:18 PM

    Have a look at the SAMPLE clause of the SELECT statement. The number in parentheses is a percentage of the table.
    SQL> create table t as select * from dba_objects;
    Table created.
    SQL> explain plan for select * from t sample (1);
    Explained.
    SQL> @xp
    PLAN_TABLE_OUTPUT
    Plan hash value: 2767392432
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |   725 | 70325 |   289   (1)| 00:00:04 |
    |   1 |  TABLE ACCESS SAMPLE| T    |   725 | 70325 |   289   (1)| 00:00:04 |
    8 rows selected.
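    A possible follow-on combining SAMPLE with the original goal of five random rows (sketch only; note that SAMPLE is probabilistic, so it can return fewer rows than expected on small tables, and the rows are not fully shuffled):
    select owner, object_name
    from t sample (1)
    where rownum <= 5;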

  • Large table, primary key constraint

    I have migrated a table from 8i to 9i that is over 300 million rows. I migrated the table to a 9i database without constraints or indexes.
    I have successfully created a composite index of two columns, t1 varchar2(512), t2 varchar2(32). This index took nearly 16 hours to create.
    I am now trying to create a primary key based on that index with the following sql:
    alter table table1
    add constraint table1_t1_t2_pk primary key(t1,t2)
    using index table1_t1_t2_idx
    nologging
    This process has taken over 24 hours and is well into the second day. Studio reports it will take an additional 15 hours to create.
    My questions are these:
    1. Is my syntax okay?
    2. I thought that by creating a primary key on an existing index, another index would not be created, and that it would be faster this way. Why is it taking a lot longer to create than the index it is based upon?
    3. Is there a more efficient method (other than parallel query) to create this index/constraint on such a large table? What happens when I go to production and need to recreate this index after a failure? I have never had to do this before. I can't be down for 48 hours to create an index. What other alternatives do I have?
    The table is partit [the rest of the post was cut off: long postings are being truncated to ~1 kB at this time].

    Is INDEX table1_t1_t2_idx UNIQUE? If it's not, that might explain why building the primary key constraint takes longer.
    I think the USING INDEX clause with an existing index is intended mainly for different UNIQUE constraints to share the same index. In your situation I think you would be better off just building the primary key constraint.
    Cheers, APC
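    If the existing index can be dropped and rebuilt, one hedged variant of APC's point about uniqueness (object names follow the original post; NOLOGGING remains optional) is to create the index as UNIQUE first and then let the constraint adopt it, so uniqueness does not have to be revalidated through a second structure:
    create unique index table1_t1_t2_idx on table1 (t1, t2) nologging;
    alter table table1
      add constraint table1_t1_t2_pk primary key (t1, t2)
      using index table1_t1_t2_idx;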

  • Table truncated, but segments were not cleaned

    Hi,
    I am running Oracle 11g (11.2.0.1.0) on ASM.
    I have truncated a 3.5 GB partitioned table, but the extents are not being dropped and the space is not being released.
    The table does not have any rows now, but dba_extents and dba_segments still show a size of 3.5 GB.
    Please suggest what the possible reason could be.
    Please let me know if additional information is required to answer my question.
    regards

    Here's a demo which shows that if a table is created with a large INITIAL, a TRUNCATE doesn't shrink the initial size.  A MOVE with a STORAGE clause can.
    SQL> create table hkc_test_99 (id_column number, data_column varchar2(10)) storage (initial 64M);
    Table created.
    SQL> insert into hkc_test_99 select 1,'One' from dual;
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select extent_id, bytes/1024 from user_Extents where segment_name = 'HKC_TEST_99';
    EXTENT_ID BYTES/1024
             0      65536
    1 row selected.
    SQL>
    SQL> truncate table hkc_test_99;
    Table truncated.
    SQL> select extent_id, bytes/1024 from user_Extents where segment_name = 'HKC_TEST_99';
    EXTENT_ID BYTES/1024
             0      65536
    1 row selected.
    SQL>
    SQL> alter table hkc_test_99 move storage (initial 64K);
    Table altered.
    SQL> select extent_id, bytes/1024 from user_Extents where segment_name = 'HKC_TEST_99';
    EXTENT_ID BYTES/1024
             0         64
    1 row selected.
    SQL>
    Hemant K Chitale

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
         ( SELECT RID, rownum rnum
           FROM
               ( SELECT rowid as RID
                 FROM members
                 WHERE last_name = 'Smith'
                 ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
      and RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*                                                       -- Select all data from members table
    FROM members,                                                          -- members table added to FROM clause
         ( SELECT RID, rownum rnum
           FROM
               ( SELECT /*+ index(members, joindate_idx) */ rowid as RID  -- Hint is ignored now that I am joining in the outer query
                 FROM members
                 WHERE last_name = 'Smith'
                 ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
      and RID = members.rowid                                              -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about pagination query here and order of records is important, your 2 queries will not always generate output in same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but it does not affect the other query.

  • Select All in large table

    My product has some very large tables (using RANGE_PAGING). I want to allow the end user to do a Select All operation, and then press a command button to act on all selected rows, but want my backing code to detect a Select All has been done, rather than attempt to retrieve all rows from the table. Is this doable with a RichTable?
    I'm porting an existing UI over to ADF -- the old UI handled this by having a separate select all button (column header), which would get unset if any row got unselected, and the backing code could interrogate that. I was wondering if there was a more ADF-ish way of handling this.
    Using Oracle JDeveloper 11g Release 1 (11.1.1.6.0)
    Edited by: user12614476 on Dec 5, 2011 1:58 PM
    (added JDeveloper version info)

    Are you really using 11.1.1.6? If so, I guess you should be asking in one of the internal Oracle forums.
    But, no, there's no "more ADF-ish" way to my knowledge :)
    John

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help in optimising the code. It takes 1 hour for 3,000-4,000 records, which is very slow.
    My SELECT reads from a table which contains 10 million records.
    I am writing the SELECT on the large table and retrieving the values from it by comparing against my table, which has 3-4 k records.
    I am pasting the code; please help.
    Data: wa_i_tab1 type tys_tg_1 .
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    Data : wa_result_pkg type tys_tg_1,
    wa_result_pkg1 type tys_tg_1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT                     "this table contains 10 million records
      INTO CORRESPONDING FIELDS OF TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE        "contains 3000-4000 records
      WHERE /bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1 = RESULT_PACKAGE-/BIC/ZLITEM1.
    sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    loop at RESULT_PACKAGE into wa_result_pkg.
    read TABLE i_tab INTO wa_i_tab1 with key
    /BIC/ZREB_SDAT =
    wa_result_pkg-/BIC/ZREB_SDAT
    AGREEMENT = wa_result_pkg-AGREEMENT
    /BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
    IF SY-SUBRC = 0.
    move wa_i_tab1-/BIC/ZSETLRUN to
    wa_result_pkg-/BIC/ZSETLRUN.
    wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
    modify RESULT_PACKAGE from wa_result_pkg1
    TRANSPORTING /BIC/ZSETLRUN.
    ENDIF.
    CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
    endloop.

    Hi,
    1) Does the RESULT_PACKAGE internal table contain any duplicate records based on the WHERE condition below?
    2) Remove the INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    SORT RESULT_PACKAGE1 BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1 COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
    from /BIC/PZREB_SDAT
    into table i_tab
    FOR ALL ENTRIES IN RESULT_PACKAGE1
    where
    /bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
    AND
    AGREEMENT = RESULT_PACKAGE1-AGREEMENT
    AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
    And one more thing: since you are reading from 10 million records, use PACKAGE SIZE in your SELECT query.
    Refer to the following link also: For All Entry for 1 Million Records
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM

  • How to subdivide 1 large TABLE based on the output of a VIEW

    I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set from a view.
    I am groping toward the following strategy in PL/SQL:
    1 -- set up cursor, execute the view (so I have the list of identifiers)
    2 -- create a second cursor (or loop?) which:
    accepts each of the identifiers in turn
    executes a query (EXECUTE IMMEDIATE?) on the larger table
    INSERTs (or appends?) each resultset into the GTT
    3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
    Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
    The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
    Thanks,
    Rob

    Welcome to the forum!
    >
    I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set from a view.
    Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
    The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
    >
    No - there is no code to point you to.
    As many of the previous responses indicate, part of the concern is that you seem to have already chosen and partially implemented a solution, but the information you provided makes us question whether you have adequately analyzed and defined the actual problem and the processing that needs to happen. Here's why I have questions about your approach:
    1. GTT - a red flag issue - these tables are generally not needed in Oracle. So when you, or anyone says they plan to use one it raises a red flag. People want to be sure you really need one rather than not using a table at all, or just using a regular table instead of a GTT.
    2. Double nested CURSOR loops - a DOUBLE red flag issue - this is almost always SLOW-BY-SLOW (row-by-row) processing at its worst. It is seldom needed, doesn't perform well and won't scale. People are going to question this choice and rightfully so.
    3. EXECUTE IMMEDIATE - a red flag issue, or at least a yellow/warning flag. This is definitely a legitimate methodology when it is needed, but many times developers resort to it when it isn't needed because it seems easier than doing the hard work of actually defining ALL of the requirements. It seems easier because it appears that it will allow for, and work with, those 'unexpected' things that seem to come up in new development.
    Unfortunately most of those unexpected things come up because the developer did not adequately define all of the requirements. The code may execute when those things arise but it likely won't do the right thing.
    Seeing all three of those red flag issues in the same question is like waving a red flag at a charging bull. The responses you get are all likely to be of the 'DO NOT DO THAT' variety.
    You are correct that a work table is appropriate when there is business logic to be applied to a set of data that cannot be applied using SQL alone. Use a regular table unless
    1. you plan to have multiple sessions working with the table simultaneously,
    2. each session needs to work with ONLY their own data in that table and not data from other sessions
    3. the data does NOT need to be available after the session ends
    4. you actually need a GTT to take advantage of the automatic data preservation (ON COMMIT PRESERVE/DELETE) functionality
    Remember - when a session ends, the data in the GTT is gone. That can make it very difficult to troubleshoot data-related problems, since a different session can't see what data is in the table. Even if a GTT is needed for the final product, it is very useful to use a regular table so that the data can be examined after test runs to help find and fix problems. Then, after development is complete and initial testing is done, a GTT would be substituted and final testing performed.
    So the main remaining question is why you need to perform multiple dynamic queries to get the data populated into the work table? Especially why is a nested cursor loop needed? My suspicion is that you have the queries stored in a query table and one of your loops extracts the query and executes it dynamically.
    How many queries are we talking about? Do these queries change from run to run? Please provide more detail of the process and an example query for the selection filtering as well as a typical dynamic query you plan to use.
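    For the set-based alternative being suggested instead of nested cursor loops, a minimal sketch (all table, view and column names here are hypothetical): a single INSERT ... SELECT driven by the view replaces both cursors and the EXECUTE IMMEDIATE.
    INSERT INTO work_table (id, col_a, col_b)
    SELECT b.id, b.col_a, b.col_b
    FROM   big_table b
    WHERE  b.id IN (SELECT v.id FROM identifier_view v);
    COMMIT;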

    I'm looking for recommendations for a LMS mainly for scoring exams.  We're currently producing course material and exams with Captivate 4 and using the LMS component of Connect Pro.  We've run into numerous issues with applying time limits and preven