Change lock type on large table

I have a table which is about 2 GB in size. The lock scheme on this table is set to allpages. I think this causes performance issues when select queries are run, because many shared (SH) locks are applied. I tried to change the lock scheme to datarows, which, as the documentation suggests, should avoid this locking on selects.
But when I tried it with DBArtisan, it ran for a long time and never finished. I stopped the application and connected again. I found that the lock scheme had indeed changed to datarows, but a new table with a name like mytab_3309ac22 had been created.
Then I tried to change the lock scheme on another table and got the following message:
You can not run Alter Table Lock in this database because the 'select into/bulkcopy' option is off.
It looks like a database option has been changed. So I want to know: is the data safe when changing the lock scheme, and how do I make sure it is done properly?

Hi Kent,
The problem is that allpages uses a different physical storage format than datarows.
Changing from one format to the other requires rewriting the whole table.
There are a few ways to achieve this goal.
a) running the alter table directly:
alter table x lock datarows
This command makes the change internally. It is the fastest way to do it, but it requires the 'select into' option to be set on the database, and that can be problematic because it breaks the transaction log dump sequence (a full database dump is needed afterwards before log dumps can resume).
If you don't have the option set on the database, you cannot run the command.
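A minimal sketch of option (a), assuming the database is called mydb, the table is mytab, and the option name reported by your server is 'select into/bulkcopy' (newer ASE versions may show it as 'select into/bulkcopy/pllsort'); the dump path is only a placeholder:
use master
go
sp_dboption mydb, 'select into/bulkcopy', true
go
use mydb
go
checkpoint
go
alter table mytab lock datarows
go
-- re-establish the dump sequence once the change is done, e.g.:
-- dump database mydb to '/backups/mydb_after_lock_change.dmp'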
b) doing it manually:
1. create a second table with the datarows lock scheme
2. copy the rows into the new table
3. drop the old table and rename the new one
If you have a very busy system with little room for downtime, this may be the only workable solution. You can create triggers to log all changes made to the old table if the copy takes too long.
At the end you would have to add all the foreign keys to the new table and, of course, recreate the indexes. A sketch of this approach follows below.
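A rough sketch of option (b); the two columns are purely illustrative, since the real schema of mytab is not shown here, and mytab_new / mytab_old are just placeholder names:
create table mytab_new (
    id   int         not null,   -- illustrative columns only
    name varchar(50)     null
) lock datarows
go
insert into mytab_new select id, name from mytab
go
select count(*) from mytab       -- verify both counts match before switching
select count(*) from mytab_new
go
sp_rename mytab, mytab_old
go
sp_rename mytab_new, mytab
go
-- recreate indexes, foreign keys, triggers and permissions on mytab,
-- then drop mytab_old once everything has been verified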
I don't know exactly what DBArtisan is doing, but since you don't have the 'select into' option set, it is probably using the second approach. The new table named mytab_3309ac22 is likely left over from an operation that was interrupted in the middle. How many rows does this table have, and what is its schema?
If you have plenty of time, I would recommend running the alter table command together with the 'select into' option. Either way you won't get any progress bar during this operation, so it is very hard to predict how long it will take.
HTH,
Adam

Similar Messages

  • Change lock type on live table

    I have 2 ASE servers: one for live and one for dev.
    There is a table mytab whose lock scheme is datapages. I want to change it to datarows.
    On the dev server, the script the tool generates to change the scheme is simple, like:
    USE mydb
    go
    setuser 'dbo'
    go
    alter table mytab lock DATAROWS
    go
    But on live production, the script I get is different. It tries to drop all the PKs, FKs, stored procedures and triggers related to mytab and create another table mytab23434, ...
    Why does it create such a messy script just to change the lock scheme?
    So I want to know whether it is safe to just run alter table mytab lock DATAROWS on live.
    If I use Sybase Central to get the script to change the lock scheme, on both dev and live it is simple and identical:
    USE mydb
    go
    setuser 'dbo'
    go
    alter table mytab lock DATAROWS
    go
    Because I am going to do it on live, I want to confirm it is safe to just run:
    alter table mytab lock DATAROWS

    Thanks, Mark. I worry about data loss. If there is no data loss, that is what I mean by 'safe'. But when I tried it on the dev server, I found something weird:
    The table has more rows than before after changing the lock scheme from datapages to datarows.
    In my test, mytab had 637,255 rows. After changing the lock scheme, the row count is 637,282. So I am afraid of doing it on production!
    With the command ALTER TABLE dbo.mytab LOCK DATAROWS, I got these messages:
    Warning: Row size (3839 bytes) could exceed row size limit, which is 1964 bytes.
    Non-clustered index (index id = 2) is being rebuilt.
    Non-clustered index (index id = 26) is being rebuilt.
    Warning: The schema for table 'mytab' has changed. Drop and re-create each trigger on this table that uses the 'if update(column-name)' clause.
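    One way to check whether that row-count difference is real data or just a stale estimate (the row totals shown by tools and by sp_spaceused come from systabstats and are only approximations, while count(*) scans the table and is exact); mytab is the table name from the post:
    select count(*) from mytab      -- exact row count, run before and after the lock change
    go
    sp_spaceused mytab              -- estimated rowtotal maintained in systabstats
    go
    If the count(*) values before and after match, no rows were lost; only the estimate changed.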

  • JDBC & changed object type in nested table

    Once I added a new column to an object type used in a nested table. I did it step by step as written in the Oracle documentation (convert data, etc.).
    All programs work fine except the JDBC drivers. I decompiled them and they crash in oracle.sql.ArrayDescriptor.java in the method
    private int toLengthFromLocator(byte abyte0[])
    after trying to execute this strange query in this method:
    oraclepreparedstatement = (OraclePreparedStatement)m_conn.prepareStatement("SELECT count(*) FROM TABLE( CAST(? AS " + getName() + ") )");
    This method exists and crashes in the 9.0 and 9.2 drivers. Maybe oracleTypeCOLLECTION.java
    public Datum unlinearize(byte abyte0[], long l, Datum datum, long l1, int i, ... works incorrectly and causes the call of this method.
    To the driver developers: please trace your code with a changed nested table.

    Damn... nobody replied to me, but this bug is fixed on Metalink.

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        ( SELECT RID, rownum rnum
          FROM
              ( SELECT rowid as RID
                FROM members
                WHERE last_name = 'Smith'
                ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
      and RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*      -- Select all data from members table
    FROM members,         -- members table added to FROM clause
        ( SELECT RID, rownum rnum
          FROM
              ( SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
                FROM members
                WHERE last_name = 'Smith'
                ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
      and RID = members.rowid          -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • Change data type from number(11,6) to number (11,8)

    I have a table, which has a million records...I have to change data type of some of the columns from number (11,6) to number (11,8).
    I thought of taking the backup of the table , truncating the original table, change the data type and then insert data back.
    Is it a safe option considering so many records ?

    the first number is precision and the second is scale:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14196/schema002.htm#sthref472
    so yes, 11,6 can hold larger numbers than 11,8
    create table foobar (two number(5,2),
                         three number(5,3));
    insert into foobar values (123.45,12.345); 
    --works fine
    insert into foobar values (123.45,123.45);
    --gives error ORA-01438: value larger than specified precision allowed for this column
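    A minimal sketch of the backup / truncate / modify / re-insert approach described in the question, with mytable and amount as placeholder names; note that NUMBER(11,8) allows only 3 digits before the decimal point versus 5 for NUMBER(11,6), so any existing value with more than 3 integer digits will fail with ORA-01438 on re-insert:
    create table mytable_bak as select * from mytable;
    truncate table mytable;
    alter table mytable modify (amount number(11,8));
    insert into mytable select * from mytable_bak;
    commit;
    -- drop table mytable_bak;   -- only after the data has been verified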

  • Error in sync group with large tables

    Since a few days ago, the automated sync process configured in the Windows Azure portal has been failing. The following message appears:
    SqlException Error Code: -2146232060 - SqlError Number:40550, 
    Message: The session has been terminated because it has acquired too many locks. 
    Searching for the error on the internet, I found the following post:
    http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx
    Basically it says that in order to increase the application transaction size, it is necessary to include some parameters in the remote and local providers. There is an example script for that.
    But how can I apply this change if my data sync process was created through the Azure web portal? Is there a way to access the sync scripts? How can I increase the transaction size from the Azure portal?
    Please, any help is welcome
    Alvaro

    Hi Alvaro,
    I'm afraid there is no way to access the sync scripts or increase the transaction size from the Azure portal when using a SQL Azure data sync group.
    Error 40550 occurs when a session acquires more than one million locks. You can use the following DMVs to monitor your transactions in SQL Azure (see the example query after the list). Usually, the solution for this error is to read or modify fewer rows in a single transaction.
    sys.dm_tran_active_transactions
    sys.dm_tran_database_transactions
    sys.dm_tran_locks
    sys.dm_tran_session_transactions
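    For example, a quick way to see how many locks each session currently holds, using one of the DMVs listed above (this assumes you can run queries directly against the SQL Azure database):
    SELECT request_session_id, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    GROUP BY request_session_id
    ORDER BY lock_count DESC;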
    In your scenario, to overcome error 40550, I recommend you use the bcp utility or SQL Server Integration Services (SSIS) to move data from the large table to SQL Azure.
    With the bcp utility, you can divide your data into multiple sections and upload each section by executing multiple bcp commands simultaneously. With SSIS, you can divide your data into multiple files on the file system and upload each file by executing multiple streams simultaneously.
    Reference:
    Optimizing Data Access and Messaging - SQL Azure Connection Management
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • I keep getting a message after start up saying "Completer wants to make changes. Type your password to allow this (which I have not done). Have screen shots. What do I do? Read this is malware. Running Yosemite OS X 10.10.1

    Have a MacBook Pro, OS X Yosemite 10.10.1. I get this after start up: "Completer wants to make changes. Type your password to allow this" (which I have not entered). When I cancel, there is some unknown "installer" icon on my desktop. If I move it to the trash, this all just starts over the next time the machine is restarted. I have screenshots, but for some reason I cannot attach them to this message right now. I heard this is malware... HELP!

    There is no need to download anything to solve this problem.
    If Safari crashes on launch and you don't have another web browser, you should be able to launch Safari by starting up in safe mode.
    You may have installed the "Genieo" or "InstallMac" ad-injection malware. Follow the instructions on this Apple Support page to remove it.
    Back up all data before making any changes.
    Besides the files listed in the linked support article, you may also need to remove this file in the same way:
    ~/Library/LaunchAgents/com.genieo.completer.ltvbit.plist
    If there are other items with a name that includes "Genieo" or "genieo" alongside any of those you find, remove them as well.
    One of the steps in the article is to remove malicious Safari extensions. Do the equivalent in the Chrome and Firefox browsers, if you use either of those.
    After removing the malware, remember to reset your home page in all the web browsers affected, if it was changed.
    If you don't find any of the files or extensions listed, or if removing them doesn't stop the ad injection, then you may have one of the other kinds of adware covered by the support article. Follow the rest of the instructions in the article.
    Make sure you don't repeat the mistake that led you to install the malware. Chances are you got it from an Internet cesspit such as "Softonic" or "CNET Download." Never visit either of those sites again. You might also have downloaded it from an ad in a page on some other site. The ad would probably have included a large green button labeled "Download" or "Download Now" in white letters. The button is designed to confuse people who intend to download something else on the same page. If you ever download a file that isn't obviously what you expected, delete it immediately.
    In the Security & Privacy pane of System Preferences, select the General tab. The radio button marked Anywhere  should not be selected. If it is, click the lock icon to unlock the settings, then select one of the other buttons. After that, don't ignore a warning that you are about to run or install an application from an unknown developer.
    Still in System Preferences, open the App Store or Software Update pane and check the box marked
              Install system data files and security updates (OS X 10.10 or later)
    or
              Download updates automatically (OS X 10.9 or earlier)
    if it's not already checked.

  • Address database perf with large tables for attachments (.doc,.jpg,.pdf..etc)

    Hello Folks,
    I have a database with large tables that contain attachments of various kinds of files such as *.doc, *.docx, *.xlsx, *.jpg, *.pdf, *.jpeg. Over time this database has grown very quickly and become quite a handful to maintain.
    I've been thinking of a proper approach to redesign the database's table structure with a small change on the application side.
    How do you implement an approach in which the physical file attachment is not stored in these tables, but instead just a path to the file, which lives on another storage drive? Then the column's data type could be just a string rather than an image.

    Hi,
    I'm sorry you are having this problem, here is another post about the same problem, where the cause of the problem is described:
    https://support.mozilla.com/en-US/questions/894442
    A bug has been filed to track resolution of the issue here, because a true fix isn't yet available:
    https://bugzilla.mozilla.org/show_bug.cgi?id=703015
    I apologize for the inconvenience.
    Regards,
    Michelle

  • To get Lock object for Standard Table

    Hi all,
    I want to update table FKK_GPSHAD, so I need the corresponding lock object for this table. Can anyone suggest a solution for this?
    Thanks in advance,...

    Hi,
    Lock objects are used in SAP to avoid inconsistency while data is being inserted into or changed in the database.
    SAP provides three types of lock objects.
    Read lock (shared lock)
    protects read access to an object. The read lock allows other transactions read access but not write access to
    the locked area of the table.
    Write lock (exclusive lock)
    protects write access to an object. The write lock allows other transactions neither read nor write access to
    the locked area of the table.
    Enhanced write lock (exclusive lock without cumulation)
    works like a write lock, except that the enhanced write lock also protects against further accesses from the
    same transaction.
    You can create a lock object in SAP through transaction SE11; give it any meaningful name starting with EZ, for example EZTEST_LOCK.
    Use: you can see this in almost all transactions; when you open an object in change mode, SAP does not allow any other user to open the same object in change mode.
    Example: in HR, when we enter a personnel number in the master data maintenance screen, SAP does not allow any other user to use the same personnel number for changes.
    Technically:
    When you create a lock object, the system automatically creates two function modules:
    1. ENQUEUE_<lock object name>, to insert the object into the lock queue.
    2. DEQUEUE_<lock object name>, to remove the object that was queued through the above FM.
    You have to use these function modules in your program.
    Check this link for an example:
    http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eea5446011d189700000e8322d00/content.htm
    tables: vbak.
    call function 'ENQUEUE_EZLOCK3'
      exporting
        mode_vbak      = 'E'
        mandt          = sy-mandt
        vbeln          = vbak-vbeln
        x_vbeln        = ' '
        _scope         = '2'
        _wait          = ' '
        _collect       = ' '
      exceptions
        foreign_lock   = 1
        system_failure = 2
        others         = 3.
    if sy-subrc <> 0.
      message id sy-msgid type sy-msgty number sy-msgno
              with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    endif.
    Normally ABAPers create the lock objects, because we know when, how and where to lock the object; after completing our updates we unlock the objects in the tables.
    http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eea5446011d189700000e8322d00/content.htm
    Purpose: if multiple users try to access a database object, inconsistency may occur. To avoid that inconsistency, and to still give multiple users access to the database objects, the locking mechanism is used.
    Steps: first we create a lock object in SE11, for example for table MARA. It will create two function modules:
    1. ENQUEUE_<lock object>
    2. DEQUEUE_<lock object>
    Before updating any table we first lock it by calling the ENQUEUE function module, and after updating we release the lock with the DEQUEUE function module.
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eea5446011d189700000e8322d00/content.htm
    Go to SE11.
    Select the radio button "Lock object".
    Give a name starting with EZ or EY, for example EYTEST.
    Press the Create button.
    Give a short description, for example "Lock object for table ZTABLE".
    In the Tables tab, give the table name, for example ZTABLE.
    Save and generate.
    Your lock object is now created. You can see the lock modules in the menu GOTO -> LOCK MODULES; there you can see the ENQUEUE and DEQUEUE functions.
    Lock objects:
    http://www.sap-img.com/abap/type-and-uses-of-lock-objects-in-sap.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eea5446011d189700000e8322d00/content.htm
    Match Code Objects:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/41/f6b237fec48c67e10000009b38f8cf/content.htm
    http://searchsap.techtarget.com/tip/0,289483,sid21_gci553386,00.html
    See this link:
    http://www.sap-img.com/abap/type-and-uses-of-lock-objects-in-sap.htm
    Please reward points if useful.
    Regards
    Rose

  • How to change recon.account directly in Table with out changing customizing

    Hi Gurus,
    I want to change a customer's reconciliation account, but this is no longer possible because the reconciliation account is set to display-only in customizing.
    I know I can change the customizing and then change the reconciliation account, but there should be a better way of doing this.
    We can change this directly in the table without changing customizing, but I don't know how to do this. Does anyone know how to change it directly in the table?
    Many thanks in advance

    Hi there,
    The steps that you should take include:
    1. go to SE16
    2. put knb1
    3. on the field selection input the customer you want to edit
    4. Execute
    5. type to the transaction field /h and press Enter
    6. double click to the customer line item you want to change the recon
    7. this will navigate you to the debug environment
    8. on the debug environment press find/search button -or- CTRL+F to find a keyword code
    9. you should see  if code = 'SHOW'.
    10. now please double click on the word "code" to set the Breakpoint
    11. Press F7
    12. you will see the program routine will stop at if code = 'SHOW'.
    13. Double click on the word 'code'
    14. 'code' will appear on the righ hand side with value = 'SHOW'
    15. change the value of 'SHOW' to 'EDIT' by double clicking the Edit icon (pencil) and replacing the value of SHOW to EDIT
    16. press Enter
    17. press F8
    18. you can now change the recon field
    19. Press SAVE button
    20. Double click 2x (or more) on the STOP sign, until your breakpoint is gone
    21. press F8 and you should see the message of "Database Record successfully created"
    Good luck!
    Regards,
    Fausto

  • Short Dump While Changing Non-KeyField of Sorted Table

    Hello,
    A short dump occurs while trying to change a field of a sorted table line passed as a CHANGING parameter. Are non-key fields of a sorted table protected? The error message does not explicitly say so. Can anyone link to documentation explaining this behavior?
    best regards,
    JNN

    You are passing by reference with CHANGING.
    It looks like read-only values can't be passed to a method's CHANGING parameter.
    When I pass a literal or a constant, there is a syntax error.
    The system could have given you a syntax error instead of a dump.
    Have a look at this snippet; both method calls throw a syntax error.
    CLASS mainclass DEFINITION.
      PUBLIC SECTION.
        CLASS-METHODS main.
      PRIVATE SECTION.
        CLASS-METHODS passref CHANGING cv_test TYPE i.
    ENDCLASS.                    "mainclass DEFINITION
    CLASS mainclass IMPLEMENTATION.
      METHOD main.
        "pass literal by ref
        passref(
          CHANGING
            cv_test = '2' ).
        "pass constant by ref
        CONSTANTS lc_test TYPE i VALUE 1.
        passref(
          CHANGING
            cv_test = lc_test ).
      ENDMETHOD.                    "main
      METHOD passref.
        "nothing
      ENDMETHOD.                    "passref
    ENDCLASS.                    "mainclass IMPLEMENTATION
    START-OF-SELECTION.
      mainclass=>main( ).

  • Itunes 10.6.1.7 problem: when I change the file "media type" from 'Music' to 'Podcast' the file disapears from ITUNES. I do this via (1) right click, (2) select 'Get Info', (3) select 'options' tab, and (4) change media type. What is the problem?

    Itunes 10.6.1.7 problem: when I change the file "media type" from 'Music' to 'Podcast' the file disapears from ITUNES. I do this via (1) right click, (2) select 'Get Info', (3) select 'options' tab, and (4) change media type. What is the problem?

    Hi Memalyn
    Essentially, the bare issue is that you have a 500GB hard drive with only 10GB free. That is not sufficient to run the system properly. The two options you have are to move/remove files to another location, or to install a larger hard drive (eg 2TB). Drive space has nothing to do with SMC firmware, and usually large media files are to blame.
    My first recommendation is this: download and run the free OmniDiskSweeper. This will identify the exact size of all your folders - you can drill down into the subfolders and figure out where your largest culprits are. For example, you might find that your Pictures folder contains both an iPhoto Library and copies that you've brought in from a camera but are outside the iPhoto Library structure. Or perhaps you have a lot of purchased video content in iTunes.
    If you find files that you KNOW you do not need, you can delete them. Don't delete them just because you have a backup, since if the backup fails, you will lose all your copies.
    Don't worry about "cleaners" for now - they don't save much space and can actually cause problems. Deal with the large file situation first and see how you get on.
    Let us know what you find out, and if you manage to get your space back.
    Matt

  • Changing data types of  fields in structures

    Hello,
       Currently we are working for upgrade to ECC 6.0 .
       In some transactions we are using BAPI_MATERIAL_SAVEDATA to create material.
      We have some additional fields in MARA and MARC table , which we fill using BAPI_TE_MARA and BAPI_TE_MARC structures.
    In these structures we had different data types for the fields earlier, in 4.6C.
    Now this function module only works if we change the data types of the fields in these structures to character type. It dumps if we use any other data type.
    Could anybody tell me why we have to change to the char data type?
    Regards,
      SATYA

    Hi,
    Check the enhancement category (Extras --> Enhancement Category) of the structures before enhancing them. For example, for the BAPI_TE_MARA structure the enhancement category is given as character type or numeric type. That means you can only add fields of data types C and N; no other data types are allowed. If you select 'not classified', then you can add any data type. This option plays an important role in enhancements.
    Regards,
    Prasanth

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help me optimise this code. It takes 1 hour for 3,000-4,000 records; it is very slow.
    My SELECT reads from a table that contains 10 million records.
    I am writing the select on the large table and retrieving values from it by comparing against my table, which has 3,000-4,000 records.
    I am pasting the code below; please help.
    Data: wa_i_tab1 type tys_tg_1 .
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    Data : wa_result_pkg type tys_tg_1,
    wa_result_pkg1 type tys_tg_1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT                    "this table contains 10 million records
      INTO CORRESPONDING FIELDS OF TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE       "contains 3000-4000 records
      WHERE /bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE-/BIC/ZLITEM1.
    sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    loop at RESULT_PACKAGE into wa_result_pkg.
    read TABLE i_tab INTO wa_i_tab1 with key
    /BIC/ZREB_SDAT =
    wa_result_pkg-/BIC/ZREB_SDAT
    AGREEMENT = wa_result_pkg-AGREEMENT
    /BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
    IF SY-SUBRC = 0.
    move wa_i_tab1-/BIC/ZSETLRUN to
    wa_result_pkg-/BIC/ZSETLRUN.
    wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
    modify RESULT_PACKAGE from wa_result_pkg1
    TRANSPORTING /BIC/ZSETLRUN.
    ENDIF.
    CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
    endloop.

    Hi,
    1) Check whether the RESULT_PACKAGE internal table contains duplicate records with respect to the fields used in the WHERE condition, as below.
    2) Remove INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    sort RESULT_PACKAGE1 by /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    delete adjacent duplicates from RESULT_PACKAGE1 comparing /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
    from /BIC/PZREB_SDAT
    into table i_tab
    FOR ALL ENTRIES IN RESULT_PACKAGE1
    where
    /bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
    AND
    AGREEMENT = RESULT_PACKAGE1-AGREEMENT
    AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
    One more thing: since you are reading from a table with 10 million records, use PACKAGE SIZE in your select query.
    Also refer to the following link: For All Entries for 1 Million Records
    Regards,
    Dhina..

  • How to change set type attribute of a product

    Hi;
    I created additional fields for products by using "COMM_ATTRSET". It created a table and function modules, but I don't know how to change these additional attributes. I can't use the attribute set functions because I don't know the fragment_id yet. I think there must be a function that encapsulates them.
    Is there any BAPI to change set type attributes?
    PS: In the forums I found the FM "COM_PRODUCT_MAINTAIN_MULT_API", but I think it is not used anymore in CRM 7.0.
    Thanks

    Thanks for your reply;
    but FM 'COM_PRODUCT_UI_MAINTAIN' doesn't have any generic import parameter for passing my data to it in the form of the newly created attribute set table "ZPRODUCT".
    I want to set some fields of ZPRODUCT and I need to send them in this structure. This table has a column FRG_ID; if you set an additional parameter for the first time, it generates a new GUID. I think 'COM_PRODUCT_UI_MAINTAIN' generates this GUID, but I don't know how to use it.
    Can you give an example of using this FM?
