Help needed in Partitioning Table

Hi,
We can partition a table as follows:
create table sales (
     product_id     number,
     trans_amt      number,
     sales_dt       date,
     state_code     varchar2(2)
)
partition by list (state_code) (
     partition ct  values ('CT'),
     partition ca  values ('CA'),
     partition def values (default)
);
But if we don't know the values like 'CT', 'CA', ... how can we do the partitioning?

The concept behind a list is that you "have" a list.
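If a new value does turn up later, the usual maintenance step is to carve it out of the default partition. A minimal sketch of that, assuming the table above and 'NJ' as a hypothetical new value (a split is used rather than ADD PARTITION because the example already has a DEFAULT partition):
alter table sales
  split partition def values ('NJ')
  into (partition nj, partition def);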

Similar Messages

  • Help needed for SAP Tables Relationships

    Hi All,
    I am new to ERP and need help regarding standard SAP tables.
    Please share a document that contains the details of the SAP tables and standard SAP function modules that are provided by SAP.
    All helpful answers will be rewarded.
    Regards,
    Udaya.

    Hi,
    Please go to the following link.
    http://www.erpgenie.com/abap/tables.htm
    http://www.erpgenie.com/abap/tables_sd.htm
    http://www.erpgenie.com/abap/tables_mm.htm
    http://www.erpgenie.com/abap/tables_fi.htm
    Regards
    Jean

  • Help needed in partitioning a Table

    Hi,
    We can partition a table as follows:
    create table sales (
         product_id     number,
         trans_amt      number,
         sales_dt       date,
         state_code     varchar2(2)
    )
    partition by list (state_code) (
         partition ct  values ('CT'),
         partition ca  values ('CA'),
         partition def values (default)
    );
    But if we don't know the values like 'CT', 'CA', ... how can we do the partitioning?

    If you really don't know the values at all, or the values can change in future, use hash partitioning.
    For values that you mostly know and that are unlikely to change, use list partitioning, and accept that you will have to do some maintenance, shuffling values around the partitions when they change.
    But hopefully you do know your states and they don't change too often.
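    For reference, a minimal hash-partitioning sketch of the same table, for the case where the values really are unknown (the partition count of 4 is just an illustrative choice):
    create table sales (
         product_id     number,
         trans_amt      number,
         sales_dt       date,
         state_code     varchar2(2)
    )
    partition by hash (state_code)
    partitions 4;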

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        (
         SELECT RID, rownum rnum
         FROM
            (
             SELECT rowid as RID
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate
            )
         WHERE rownum <= 100
        )
    WHERE rnum >= 1
      AND RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*      -- Select all data from members table
    FROM members,         -- members table added to FROM clause
        (
         SELECT RID, rownum rnum
         FROM
            (
             SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate
            )
         WHERE rownum <= 100
        )
    WHERE rnum >= 1
      AND RID = members.rowid           -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • !! Urgent Help Needed in Database Table Recovery !!

    Hi,
    What happened is that I added a new field to the table. It is a field which needs a reference field, but I didn't include it and proceeded with the table adjustment under SE11 -> Utilities -> Database Adjustment. I encountered an error during adjustment saying that the reference field is not set, and I couldn't proceed further since the table was now locked. There were two options: continue the adjustment or unlock the table. When I chose to continue the adjustment it still led to the same problem, so I chose to unlock it, thinking that I would still be able to change it, even though there was a prompt saying that I might lose some data. Once I clicked on unlock, it told me that the adjustment was done. However, I then found that my table was no longer editable; it says it is not in the database.
    Can this table be recovered without the loss of data? Please, anyone, I need this help urgently. Thank you in advance.

    Your DBAs might be able to help you.
    Rob

  • Help needed in Exporting tables data through SQL query

    Hi All,
    I need to write a shell script (ksh) to take a backup of some of the tables' data.
    The table list is not static; the tables are selected through a dynamic SQL query.
    Can anybody tell me how to write the export command so that it exports tables which are selected dynamically through a SQL query?
    I tried like this:
    exp ------ tables = query \" select empno from emp where ename\= \'SSS\' \"
    but it throws the following error:
    EXP-00035: QUERY parameter valid only for table mode exports
    Thanks in advance,

    Hi,
    You can dynamically generate a parameter file for the export utility using a shell script. This export parameter file can contain any table list you want each time. Then simply run the command
    $ exp parfile=myfile.txt
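    One way to build that table list is to let the database write it for you. A minimal sketch from SQL*Plus, assuming a hypothetical naming pattern ('EMP%') for the tables to be exported and spooling straight into myfile.txt:
    set serveroutput on
    set feedback off
    spool myfile.txt
    declare
       v_list varchar2(4000);
    begin
       -- collect the matching table names into one comma-separated list
       for r in (select table_name from user_tables where table_name like 'EMP%') loop
          v_list := v_list || ',' || r.table_name;
       end loop;
       dbms_output.put_line('tables=(' || ltrim(v_list, ',') || ')');
    end;
    /
    spool off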

  • Query Needed for Partitioning table

    Hi,
    I have created a table called TEST. There is a column named business_name.
    There are several businesses like ABC, BCD, ADE, ...
    There will be lakhs of rows corresponding to each business; I mean there will be lakhs of entries corresponding to ABC, BCD, ...
    So I would like to partition the table according to business_name so that searches will be faster. As we have partitioned according to business_name, I hope we need to search only the partition corresponding to the particular business.
    Can anyone provide the query to partition the table TEST according to the column business_name?
    Also, can anyone provide the query to modify the already existing table TEST to incorporate partitioning on the column business_name?

    We can partition a table as follows:
    create table Generalledger (
         record_id          number,
         business_name      varchar2(3),
         sales_dt           date,
         amount             number(10)
    )
    partition by list (business_name) (
         partition ct  values ('ABC'),
         partition ca  values ('BCD'),
         partition def values (default)
    );
    But if we don't know the values like 'ABC', 'BCD', ... how can we do the partitioning?
    Use SQL to generate part (or all) of your DDL statement. The following will output one partition clause for each business_name:
    SELECT DISTINCT 'partition p_' || BUSINESS_NAME || ' values (''' ||
                     BUSINESS_NAME || '''),'
    FROM GENERALLEDGER;
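    Pasted into the full statement, the generated clauses would slot in roughly like this (a sketch only; the partition names simply reuse the business_name values, and the column list is the one from the example above):
    create table test (
         record_id          number,
         business_name      varchar2(3),
         sales_dt           date,
         amount             number(10)
    )
    partition by list (business_name) (
         partition p_ABC values ('ABC'),
         partition p_BCD values ('BCD'),
         partition p_ADE values ('ADE'),
         partition p_def values (default)
    );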

  • Help Needed Tagging Complex Tables

    I am often asked to create accessible versions of PDFs for clients that contain complex tables -- e.g., merged and/or blank cells, multiple levels of subcategory headings. Are there any online resources that might help me understand (a) IF it is possible to create accessible versions of these tables and, if so, (b) HOW to tag them. In the ideal world, we would go back to the original document and revise the table, but that's not always possible.
    For example, a document I am working on currently includes a table that looks something like this:
    Category
    Period 1
    Period 2
    FTE
    $
    FTE
    $
    A. First Heading
        1. First Subhead
            a. Topic A
    34
    45
            b. Topic B
    54
    63
    Subtotal
    108
       2. Second Subhead
           a. Topic A
    etc.
    I understand how to tag the "Period 1" and "Period 2" headings as column headings that span 2 columns, but I'm not clear on how to handle the blank cells and/or the subheadings. Sorry for such an open-ended question, but I'm not sure where to look - all the documentation I've seen seems to address fairly simple tables. I am using Acrobat XI for Mac.
    Any guidance would be greatly appreciated. Thank you!

    Yes, it is possible to create accessible versions of these tables. There are several possible solutions.  In the example above, you are correct that Period 1 and Period 2 need to be tagged as <TH> with Span = 2 Column. Scope also needs to be set to Column.  FTE and $ also need to be tagged as <TH> with Scope = Column. All the cells in Column 1 (A. First Heading, 1. First Subhead, etc.) are <TH> with Scope = Row.
    Because this is a complex table you need to Associate the table content with these headings using <ID> tags. 
    This is unfortunately where things can get a little time consuming.  The process can be done within Acrobat using the Touch Up Reading Order (TOR) tool (Accessibility Tool Panel). To briefly summarize -- Select the TOR, select the Table, and click the Table Button on the TOR Menu to open up the Table Editor.  The TOR Menu will disappear, and the table will be visible. You can click on individual cells, right click and select Cell Properties. 
    Select one of the header cells, right click and then select Table Cell Properties. There you will see the ID, which is usually something memorable like TD_08934_121. Change this to something meaningful like PERIOD_1 or P1_FTE. The name must be unique within the file.
    Click OK to save and exit the Table Cell Properties window.
    Using the TOR select all of the content cells beneath that header and right click to open the Cell Properties.  Acrobat will let you select more than one cell by using the Shift Key, if there are no conflicting values in Cell Properties. Note: At times you may need to work one cell at a time.
    Once the Table Cell Properties is open, click the plus next to Associate Header Cell IDs and select the appropriate Heading ID that you defined above.
    Repeat until all the content cells are associated with their appropriate column headings, subheadings and row headings.
    That's method 1. 
    Method 2 would be to use the CommonLook plug-in.  Adding Table Cell IDs is CommonLook's strong point, but the price is high and there is a learning curve. And I've experienced some troublesome issues involving borders and shading in some files.  Still if you have Commonlook available it can be a REAL timesaver for tagging complex tables.
    Lastly, there are some tricks and workarounds that would make the structure of this table simple, and therefore accessible, without all of this, if you are experienced working in the Tags Panel in Acrobat. In the example above you could convert the first row of content headings, Period 1 and Period 2, to Artifacts and then entirely delete that row from the table within the Tags Panel. Then add alternative text to the second row of headings so FTE will always read as "Period 1 FTE" or "Period 2 FTE" and so forth. Some purists may object to this method, but it would work.
    NOTE: Any of these methods requires further checking with a screen reader to ensure the tables are recognized as tables and that no mistakes were made in assigning the headings to the cells. I would also check to make sure any blank cells read as "Blank", as they may be mistagged in the Tags Panel by the authoring application (e.g., MS Word).

  • Help needed with locking tables+Mysql

    Hello!
    I have a table "A" which is of kind "auto_increament" (field "A1")
    When I do an insert in A I need to know the next "auto_increament" index because I have to encrypt that index into another field in A lets say this field is called A2.
    I understand that I have to do like like I do below (the semantic syntax) but I dont know the java syntax for locking tables and getting the next "auto_increament" index. The "lock type" of the ttable should be so now one else can write to the table.
    --------semantic syntax---------------
    1) LOCK A
    2) Get "next" index from A
    3) Make an insert
    4) UNLOCK A
    Very greatful for help!
    Regards/D_S

    http://www.mysql.com/search/?q=jdbc+autoincrement&base=http%3A%2F%2Fdev.mysql.com&lang=en&doc=0&m=a
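    At the SQL level, the four steps above might look like the sketch below (MySQL syntax; encrypt_fn is a hypothetical placeholder for whatever encryption is used, and the JDBC code would simply execute these statements). Note that the generated id is usually read back after the insert rather than predicted beforehand:
    LOCK TABLES A WRITE;                         -- 1) lock so nobody else can write
    INSERT INTO A (A2) VALUES ('');              -- 3) insert; A1 is assigned automatically
    UPDATE A SET A2 = encrypt_fn(A1)             -- encrypt the assigned index into A2 (encrypt_fn is hypothetical)
     WHERE A1 = LAST_INSERT_ID();                -- 2) LAST_INSERT_ID() is the index just generated
    UNLOCK TABLES;                               -- 4) release the lock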

  • Help needed in making table name and column name dynamic

    Please check the query in the message below.

    Thanks Dmytro,
    Below is the script I was looking for; I got it, anyway, thanks.
    I just need to replace dbms_output.put_line with utl_file.put_line
    to put the code in a server directory,
    and execute it as and when required.
    Reg.
    AAK
    CREATE OR REPLACE PROCEDURE p_ad_log
    IS
         CURSOR tbl_cursor IS
              SELECT table_name FROM user_tables WHERE table_name IN('EMP','EMP1') ;
         CURSOR col_cursor( cp_table_name varchar2) IS
              SELECT column_name FROM user_tab_columns WHERE table_name=cp_table_name;
         --v_file_handle      utl_file.file_type;
     --v_file_dir          varchar2(30)      :=     'DIRECTORY PATH';
         --v_file_name     varchar2(30)       :=       'AD_TRIGGER_TEXT.TXT';
         tbl_cursor_value     tbl_cursor%ROWTYPE;
         col_cursor_value     col_cursor%ROWTYPE;
         v_string          varchar2(4000);
         v_string_val     varchar2(4000);
    BEGIN
         DELETE audit_triggers_status;
         COMMIT;
         --v_file_handle := utl_file.fopen(v_file_dir,v_file_name,'W',32000);
         OPEN tbl_cursor;
         LOOP
              FETCH tbl_cursor into tbl_cursor_value;
              EXIT WHEN tbl_cursor%NOTFOUND;
              OPEN col_cursor(tbl_cursor_value.table_name);
              DBMS_OUTPUT.PUT_LINE( 'CREATE OR REPLACE TRIGGER' ||' ad_'||tbl_cursor_value.table_name); -- short name for audit trigger coz table name will be appended to it and result should not exceed 30 char
              DBMS_OUTPUT.PUT_LINE( 'BEFORE INSERT OR UPDATE OR DELETE ON '||tbl_cursor_value.table_name);
              DBMS_OUTPUT.PUT_LINE( 'FOR EACH ROW');
              DBMS_OUTPUT.PUT_LINE( 'BEGIN');
              v_string:='INSERT INTO'||' ad_'||tbl_cursor_value.table_name||'(';
              v_string_val:='values(';
              INSERT INTO audit_triggers_status( table_name,trigger_name,audit_flag) VALUES (tbl_cursor_value.table_name,' ad_'||tbl_cursor_value.table_name,'Y');
                   LOOP
                        FETCH col_cursor into col_cursor_value;
                        EXIT WHEN col_cursor%NOTFOUND;
                        v_string:=v_string||col_cursor_value.column_name||',';
                        v_string_val:=v_string_val||':new.'||col_cursor_value.column_name||',';
                   END LOOP;
                   CLOSE COL_CURSOR;
              v_string:=substr(v_string,1,length(v_string)-1);
              v_string_val:=substr(v_string_val,1,length(v_string_val)-1);
              v_string:=v_string||') ';
              v_string_val:=v_string_val||');';
              --DBMS_OUTPUT.PUT_LINE(v_string||v_string_val);
              DBMS_OUTPUT.PUT_LINE('IF INSERTING THEN');
              DBMS_OUTPUT.PUT_LINE('     '||v_string||v_string_val);
              DBMS_OUTPUT.PUT_LINE('END IF;');
              DBMS_OUTPUT.PUT_LINE('IF UPDATING THEN');
              DBMS_OUTPUT.PUT_LINE('     '||v_string||v_string_val);
              DBMS_OUTPUT.PUT_LINE('END IF;');
              DBMS_OUTPUT.PUT_LINE('IF DELETING THEN');
          --DBMS_OUTPUT.PUT_LINE('     '||v_string||REPLACE(v_string_val,':new.',':old.'));
              V_STRING_VAL:=REPLACE(v_string_val,':new.',':old.');
              DBMS_OUTPUT.PUT_LINE('     '||v_string||v_string_val);
              DBMS_OUTPUT.PUT_LINE('END IF;');
              DBMS_OUTPUT.PUT_LINE('END '||' ad_'||tbl_cursor_value.table_name||';');
              DBMS_OUTPUT.PUT_LINE(' ');
         END LOOP;
         CLOSE TBL_CURSOR;
         COMMIT;
    END;
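    Assuming the procedure compiles as posted, it can be run from SQL*Plus like this (still writing the generated trigger source to the screen; spooling the output to a script file would be the natural next step):
    set serveroutput on size 1000000
    exec p_ad_log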

  • Help needed in Derived Table

    Dear Experts,
    Can anyone tell me in which scenario we would go for a derived table? We are creating a report on top of a cube.
    Thanks,
    Kind Regards,
    Sathish Kumar.N

    Hi Sathish,
    Hope this helps:
    http://scn.sap.com/thread/2019524
    http://www.forumtopics.com/busobj/viewtopic.php?p=684054&sid=13e89b7f7b9e97d6921e9899023e3929
    http://www.forumtopics.com/busobj/viewtopic.php?p=770433&sid=1f947ce435f0c0b96ebed248a1c908a0
    regards,
    M

  • Help needed with nested tables

    I have two identical relations R and S defined as:
    CREATE TYPE table_type AS TABLE OF VARCHAR2(8);
    CREATE TABLE R(
    a INTEGER,
    b table_type)
    NESTED TABLE b STORE as b_1;
    CREATE TABLE S(
    a INTEGER,
    b table_type)
    NESTED TABLE b STORE as b_2;
    Both have two tuples each:
    INSERT INTO r VALUES (1, table_type('a','b'));
    INSERT INTO r VALUES (2, table_type('d'));
    INSERT INTO s VALUES (1, table_type('b','c'));
    INSERT INTO s VALUES (3, table_type('e'));
    Would it be possible to write a query that "unions" R and S so that nested tables in tuples (in R and S) that agree on the A attribute are merged together, while tuples that do not agree on A still appear in the result? That is, the result I am looking for is:
    A B
    1 TABLE_TYPE('a','b','c')
    2 TABLE_TYPE('d')
    3 TABLE_TYPE('e')
    I've tried a simple union, but it does not work. Any help on this would be greatly appreciated.
    Thanks,
    Laura

    I don't have 10g, but the following is a 9i solution. I know there has to be a better way. About all I can say for the code below is that it works. I am only posting it because it is better than nothing, until somebody with 10g posts something better. I have included the stragg function by Tom Kyte.
    scott@ORA92> -- test data:
    scott@ORA92> CREATE OR REPLACE TYPE table_type AS TABLE OF VARCHAR2 (8);
      2  /
    Type created.
    scott@ORA92> CREATE TABLE r(
      2    a INTEGER,
      3    b table_type)
      4    NESTED TABLE b STORE as b_1
      5  /
    Table created.
    scott@ORA92> CREATE TABLE s(
      2    a INTEGER,
      3    b table_type)
      4    NESTED TABLE b STORE as b_2
      5  /
    Table created.
    scott@ORA92> INSERT INTO r VALUES (1, table_type('a','b'));
    1 row created.
    scott@ORA92> INSERT INTO r VALUES (2, table_type('d'));
    1 row created.
    scott@ORA92> INSERT INTO s VALUES (1, table_type('b','c'));
    1 row created.
    scott@ORA92> INSERT INTO s VALUES (3, table_type('e'));
    1 row created.
    scott@ORA92> commit
      2  /
    Commit complete.
    scott@ORA92> -- start of code from Tom Kyte:
    scott@ORA92> create or replace type string_agg_type as object
      2  (
      3       total varchar2(4000),
      4 
      5       static function
      6            ODCIAggregateInitialize(sctx IN OUT string_agg_type )
      7            return number,
      8 
      9       member function
    10            ODCIAggregateIterate(self IN OUT string_agg_type ,
    11                        value IN varchar2 )
    12            return number,
    13 
    14       member function
    15            ODCIAggregateTerminate(self IN string_agg_type,
    16                          returnValue OUT  varchar2,
    17                          flags IN number)
    18            return number,
    19 
    20       member function
    21            ODCIAggregateMerge(self IN OUT string_agg_type,
    22                      ctx2 IN string_agg_type)
    23            return number
    24  );
    25  /
    Type created.
    scott@ORA92> create or replace type body string_agg_type
      2  is
      3 
      4  static function ODCIAggregateInitialize(sctx IN OUT string_agg_type)
      5  return number
      6  is
      7  begin
      8        sctx := string_agg_type( null );
      9        return ODCIConst.Success;
    10  end;
    11 
    12  member function ODCIAggregateIterate(self IN OUT string_agg_type,
    13                             value IN varchar2 )
    14  return number
    15  is
    16  begin
    17        self.total := self.total
    18        || ','
    19        || value;
    20        return ODCIConst.Success;
    21  end;
    22 
    23  member function ODCIAggregateTerminate(self IN string_agg_type,
    24                               returnValue OUT varchar2,
    25                               flags IN number)
    26  return number
    27  is
    28  begin
    29        returnValue := ltrim(self.total,',');
    30        return ODCIConst.Success;
    31  end;
    32 
    33  member function ODCIAggregateMerge(self IN OUT string_agg_type,
    34                           ctx2 IN string_agg_type)
    35  return number
    36  is
    37  begin
    38        self.total := self.total || ctx2.total;
    39        return ODCIConst.Success;
    40  end;
    41 
    42 
    43  end;
    44  /
    Type body created.
    scott@ORA92> CREATE or replace
      2  FUNCTION stragg(input varchar2 )
      3  RETURN varchar2
      4  PARALLEL_ENABLE AGGREGATE USING string_agg_type;
      5  /
    Function created.
    scott@ORA92> -- end of code from Tom Kyte
    scott@ORA92> -- usage of above function:
    scott@ORA92> COLUMN c FORMAT A15
    scott@ORA92> SELECT a, stragg (column_value) AS c
      2  FROM   (SELECT t1.a, t1.column_value
      3            FROM   (SELECT r.a, c.column_value
      4                 FROM   r, TABLE (r.b) c) t1,
      5                (SELECT s.a, d.column_value
      6                 FROM   s, TABLE (s.b) d) t2
      7            WHERE  t1.a = t2.a
      8            UNION
      9            SELECT t1.a, t2.column_value
    10            FROM   (SELECT r.a, c.column_value
    11                 FROM   r, TABLE (r.b) c) t1,
    12                (SELECT s.a, d.column_value
    13                 FROM   s, TABLE (s.b) d) t2
    14            WHERE  t1.a = t2.a)
    15  GROUP  BY a
    16  UNION ALL
    17  SELECT a, stragg (column_value) AS c
    18  FROM   (SELECT t1.a, t1.column_value
    19            FROM   (SELECT r.a, c.column_value
    20                 FROM   r, TABLE (r.b) c) t1,
    21                (SELECT s.a, d.column_value
    22                 FROM   s, TABLE (s.b) d) t2
    23            WHERE  t1.a = t2.a (+)
    24            AND    t2.a IS NULL
    25            UNION
    26            SELECT t2.a, t2.column_value
    27            FROM   (SELECT r.a, c.column_value
    28                 FROM   r, TABLE (r.b) c) t1,
    29                (SELECT s.a, d.column_value
    30                 FROM   s, TABLE (s.b) d) t2
    31            WHERE  t1.a (+) = t2.a
    32            AND    t1.a IS NULL)
    33  GROUP  BY a
    34  /
             A C
             1 a,b,c
             2 d
             3 e
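    Since the answer above is a 9i workaround, here is a hedged sketch of what a 10g version might look like, using COLLECT instead of the custom aggregate: flatten both nested tables, take the set union, and collect back per key (untested against the poster's data):
    SELECT a,
           CAST(COLLECT(column_value) AS table_type) AS b
    FROM  (SELECT r.a, c.column_value FROM r, TABLE(r.b) c
           UNION
           SELECT s.a, d.column_value FROM s, TABLE(s.b) d)
    GROUP BY a;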

  • Help Need for External Table

    Gurus,
    While reading data from a CSV file using an external table, in some columns the records contain a special character like a new line (line feed).
    How can I trim that out?
    Please help with this issue.

    Hi,
    Use Substr or Replace functions on them.
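    For example, both carriage returns and line feeds could be stripped like this (the column and table names here are hypothetical):
    SELECT REPLACE(REPLACE(some_col, CHR(13)), CHR(10)) AS some_col_clean
    FROM   my_external_table;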

  • Help needed on SAP tables

    I have a report in BEx. It has information related to a vendor and the total open items for him.
    I need to verify this data in ECC.
    In which table can I find this information?
    Thanks
    Nilesh

    Hi
    It is related to MM.
    So please see the link below; it provides the source tables for all MM DataSources.
    http://wiki.sdn.sap.com/wiki/display/BI/BWSDMMFIDATASOURCES
    Regards
    Ram.

  • Help needed in Joining tables for  Help view

    Hi All ,
    My requirement is to create a search help, using a view which joins four tables.
    I was able to do this using a database view, joining the tables below:
    KNA1     MANDT     =     KNVV     MANDT
    KNA1     KUNNR     =     KNVV     KUNNR
    TVV5     MANDT     =     KNVV     MANDT
    TVV5     KVGR5     =     KNVV     KVGR5
    TVV5     MANDT     =     TVV5T     MANDT
    TVV5     KVGR5     =     TVV5T     KVGR5
    But this is doing an inner join, and the help does not provide values where there are no entries for KNVV-KVGR5.
    For an outer join I came to know that we use help views instead of database views,
    but I have a problem joining these tables while creating the help view.
    Any help will be appreciated.
    Thanks
    Vinay Kolla

    Hi Vinay,
    Use the tables in the order given below to get the right view.
    KNVV
    KNA1
    TVV5
    TVV5T
    KNA1-MANDT  = KNVV-MANDT
    KNA1-KUNNR = KNVV-KUNNR
    TVV5-MANDT = KNVV-MANDT
    TVV5-KVGR5 = KNVV-KVGR5
    TVV5-MANDT = TVV5T-MANDT
    TVV5-KVGR5 = TVV5T-KVGR5
    Regard
    Anees
