TimesTen synchronize with OLTP-compressed table

Hi,
We are looking at potentially using TimesTen as a frontend data store for our Oracle database. The database is at version 11.2.0.3, and many tables in it are OLTP-compressed (Advanced Compression). Is synchronizing with compressed tables supported? Are there any known issues with doing that?
Thanks,
George
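
For context, "synchronizing" here would normally mean defining a TimesTen cache group over the Oracle table. Below is a minimal sketch of such a definition with a hypothetical table and made-up columns; whether an OLTP-compressed base table is supported is exactly the question being asked:

CREATE READONLY CACHE GROUP cg_orders
  AUTOREFRESH MODE INCREMENTAL INTERVAL 5 MINUTES
  FROM scott.orders
  ( order_id NUMBER NOT NULL,
    status   VARCHAR2(10),
    PRIMARY KEY (order_id) );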

Thanks, Chris, for your quick reply.
George

Similar Messages

  • Why should we avoid OLTP compression on tables with massive updates/inserts?

    Dear expert,
    We are planning Oracle OLTP compression on an IS-U system. Could you tell me:
    Why should we avoid OLTP compression on tables with massive updates/inserts?
    What kind of performance impact would there be in the worst case?
    Best regards,
    Kate

    Hi
    When updating compressed data, Oracle has to read it, uncompress it and update it.
    The compression is then performed again later, asynchronously. This requires a lot more CPU than a simple update.
    Another drawback is that compression on heavily modified tables generates a major increase in redo/undo generation. I've experienced this on a DB where RFC tables had been compressed by mistake; the redo increase was over 15%.
    Check the remark at the end of Jonathan Lewis's post.
    Regards
    http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/
    Possibly this is all part of the trade-off that helps to explain why Oracle doesn't end up compressing the last few rows that get inserted into the block.
    The effect can be investigated fairly easily by inserting about 250 rows into the empty table - we see Oracle inserting 90 rows, then generating a lot of undo and redo as it compresses those 90 rows; then we insert another 40 rows, then generate a lot of undo and redo compressing the 130 rows. Ultimately, by the time the block is full we have processed the first 90 rows into the undo and redo four or five times.
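    A rough way to see this extra undo/redo for yourself is to load the same data into an OLTP-compressed table and a plain table and compare the session's "redo size" statistic. This is only a sketch with made-up table names; COMPRESS FOR OLTP assumes 11g with the Advanced Compression option:
    -- empty copies of the same structure (names are illustrative)
    create table t_oltp  compress for oltp as select * from dba_objects where 1 = 0;
    create table t_plain nocompress        as select * from dba_objects where 1 = 0;
    -- note the session's redo before and after each load
    select n.name, s.value
    from   v$mystat s join v$statname n on n.statistic# = s.statistic#
    where  n.name = 'redo size';
    insert into t_oltp select * from dba_objects;
    commit;
    -- re-run the redo size query, note the delta, then repeat the load into t_plain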

  • How to add a column with a default value to a compressed table

    Hi ,
    While trying to add a column with a default value to a compressed table, I am getting an error.
    Even after I tried the NOCOMPRESS command on the table, it still gives an error that add/drop is not allowed on compressed tables.
    Can anyone help me with this?
    Thanks.

    Aman wrote:
    while trying to add column to compressed table with default value i am getting error
    This is clearly explained in the Oracle doc:
    "You cannot add a column with a default value to a compressed table or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition."
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5163
    Nicolas.
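    So the usual workaround on a basic-compressed table is to disable compression, add the column, and recompress. A hedged sketch with made-up names (the moves invalidate indexes, which then need rebuilding):
    alter table orders_hist move nocompress;
    alter table orders_hist add (status varchar2(10) default 'NEW');
    alter table orders_hist move compress;
    -- alter index <affected_index> rebuild;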

  • Compressed tables with more than 255 columns

    hi,
    Would anyone have a SQL query to find compressed tables with more than 255 columns?
    Thank you
    Jonu

    SELECT utc.table_name,
           Count(utc.column_name) AS column_count
    FROM   user_tab_columns utc
    WHERE  utc.table_name IN (SELECT table_name
                              FROM   user_tables
                              WHERE  compression = 'ENABLED')
    GROUP  BY utc.table_name
    HAVING Count(utc.column_name) > 255

  • How to Synchronize cube measures with relational fact tables?

    Dear all,
    I built a simple analysis cube on Oracle 10g R2 using AWM.
    The problem is that when I change the column associated with a base measure of my cube and then do cube or measure maintenance using the AWM maintenance wizard, the measure value does not change and does not reflect the changes I made in the fact table.
    Please tell me how to keep the cube measures in sync with the fact table columns.
    Thanks for helping

    Hi there,
    if you delete data in your fact table, I would assume that you have to delete the data in the cube, too. This is not done by "maintain cube".
    Depending on your load strategy, you have to delete the old data, e.g. for the whole cube or just for a given time slice (day/month).
    You can use the OLAP DML command "clear" (a rough sketch follows below) or just search this forum for "clear cube".
    Maybe this helps you to "synchronize" the cube.
    If not, the questions would be:
    Is new data loaded in the cube correctly?
    Is updated data loaded in the cube correctly?
    Is data in the cube, which is not present in the fact tables?
    Good luck!
    Ed
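    If it helps, here is a heavily hedged sketch of clearing a cube's stored data with OLAP DML from SQL*Plus; DBMS_AW.EXECUTE exists in 10gR2, but the analytic workspace and variable names below are placeholders you would replace with your own:
    exec dbms_aw.execute('aw attach my_aw rw');
    exec dbms_aw.execute('clear all from my_cube_stored');
    exec dbms_aw.execute('update');
    exec dbms_aw.execute('commit');
    exec dbms_aw.execute('aw detach my_aw');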

  • Golden Gate: Can it be used to replicate tables with HCC compression?

    Can we use GoldenGate to replicate HCC (Exadata) compressed tables to a non-Exadata platform?

    I don't see much benefit with the current ODI adaptors for planning if it is an EPMA type planning application.
    They have been specifically designed to work with classic planning applications.
    It certainly is possible to load into EPMA interface tables using ODI, though there are no direct adaptors and it takes quite a bit of effort to get them into the correct format.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • 11.2 new features for OLTP compression, datapump compression

    Hi All,
    I work on a data warehouse and I am looking forward to implementing the new 11.2 OLTP compression features. Some articles I have read say that I need a separate license for that. What about Data Pump compression, do I need a license for that as well?
    I would appreciate it if someone could share any experience/links about this feature.
    I did some testing and it reduced the space by nearly 50%. This looks like a great feature.
    Karunika

    If you are working with a data warehouse, why do you want to use the new OLTP compression features? Normally, the older (and free) table compression functionality worked perfectly well for data warehouses.
    I believe that DataPump compression is part of the Advanced Compression Option which does require additional licensing. Straight table compression has been available for a while, though, does not require an additional license (beyond the enterprise edition, not sure if it's available in standard) and is generally ideal for data warehouses.
    Justin
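    To make the distinction concrete, here is a hedged sketch (object names invented): basic table compression needs only Enterprise Edition, while OLTP compression and Data Pump dump-file compression belong to the Advanced Compression Option:
    -- basic compression, loaded via direct path (no extra license on EE)
    create table sales_hist compress
    as select * from sales;
    -- these two require the Advanced Compression Option:
    -- alter table sales_hist compress for oltp;
    -- expdp scott/tiger directory=dp_dir dumpfile=sales.dmp compression=all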

  • 11.2.0.3.3  impdp compress table

    Hi ML:
    Source database: 10.2.0.3, with compressed tables.
    Target: 11.2.0.3.3. If the source's compressed tables are imported with impdp, are they still compressed tables on the target?
    Previously, when importing directly into a 10g database via an impdp database link, I found that the loaded tables had to be compressed manually with ALTER TABLE ... MOVE COMPRESS.
    The test in the MOS document indicates that, starting with 10g, import automatically maintains compressed tables:
    Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.1 - Release: 9.2 to 11.2
    Information in this document applies to any platform.
    Symptoms
    The original import utility bypasses table compression, i.e. the data is not compressed even if the table is pre-created as compressed. Please follow the next example, which demonstrates this.
    connect / as sysdba
    create tablespace tbs_compress datafile '/tmp/tbs_compress01.dbf' size 100m;
    create user test identified by test default tablespace tbs_compress temporary tablespace temp;
    grant connect, resource to test;
    connect test/test
    -- create compressed table
    create table compressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 compress;
    -- create non-compressed table
    create table noncompressed (
    id number,
    text varchar2(100)
    ) pctfree 0 pctused 90 nocompress;
    -- populate compressed table with data
    begin
    for i in 1..100000 loop
    insert into compressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    /
    -- populate non-compressed table with identical data
    begin
    for i in 1..100000 loop
    insert into noncompressed values (1, lpad ('1', 100, '0'));
    end loop;
    commit;
    end;
    /
    -- compress the table COMPRESSED (previous insert doesn't use the compression)
    alter table compressed move compress;
    Let's now take a look at data dictionary to see the differences between the two tables:
    connect test/test
    select dbms_metadata.get_ddl ('TABLE', 'COMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','COMPRESSED')
    CREATE TABLE "TEST"."COMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 COMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    SQL> select dbms_metadata.get_ddl ('TABLE', 'NONCOMPRESSED') from dual;
    DBMS_METADATA.GET_DDL('TABLE','NONCOMPRESSED')
    CREATE TABLE "TEST"."NONCOMPRESSED"
    ( "ID" NUMBER,
    "TEXT" VARCHAR2(100)
    ) PCTFREE 0 PCTUSED 90 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBS_COMPRESS"
    1 row selected.
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 2097152 17 256
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    The table COMPRESSED needs less storage space than the table NONCOMPRESSED. Now, let's export the tables using the original export utility:
    #> exp test/test file=test_compress.dmp tables=compressed,noncompressed compress=n
    About to export specified tables via Conventional Path ...
    . . exporting table COMPRESSED 100000 rows exported
    . . exporting table NONCOMPRESSED 100000 rows exported
    Export terminated successfully without warnings.
    and then import them back:
    connect test/test
    drop table compressed;
    drop table noncompressed;
    #> imp test/test file=test_compress.dmp tables=compressed,noncompressed
    . importing TEST's objects into TEST
    . . importing table "COMPRESSED" 100000 rows imported
    . . importing table "NONCOMPRESSED" 100000 rows imported
    Import terminated successfully without warnings.
    Verify the extents after original import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 11534336 26 1408
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    => The table compression is gone.
    Cause
    This is expected behaviour. Import does not perform bulk load/direct path operations, so the data is not inserted as compressed.
    Only Direct path operations such as CTAS (Create Table As Select), SQL*Loader Direct Path will compress data. These operations include:
    •Direct path SQL*Loader
    •CREATE TABLE and AS SELECT statements
    •Parallel INSERT (or serial INSERT with an APPEND hint) statements
    Solution
    The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data:
    alter table compressed move compress;
    Beginning with Oracle version 10g, DataPump utilities (expdp/impdp) perform direct path operations and so the table compression is maintained, like in the following example:
    - after creating/populating the two tables, export them with:
    #> expdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . exported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    and re-import after deletion with:
    #> impdp test/test directory=dpu dumpfile=test_compress.dmp tables=compressed,noncompressed
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."NONCOMPRESSED" 10.30 MB 100000 rows
    . . imported "TEST"."COMPRESSED" 10.30 MB 100000 rows
    Job "TEST"."SYS_IMPORT_TABLE_01" successfully completed at 12:47:51
    Verify the extents after DataPump import:
    col segment_name format a30
    select segment_name, bytes, extents, blocks from user_segments;
    SEGMENT_NAME BYTES EXTENTS BLOCKS
    COMPRESSED 2097152 17 256
    NONCOMPRESSED 11534336 26 1408
    2 rows selected.
    => The table compression is kept.
    ===========================================================
    1. Does 11.2.0.3 actually support impdp automatically maintaining compressed tables when importing over a database link? (A sketch of such a network-mode import appears below.)
    2. Regarding the MOS statement quoted above ("Import does not perform bulk load/direct path operations, so the data is not inserted as compressed ... The way to compress data after it is inserted via a non-direct operation is to move the table and compress the data"):
    Does this mean that before 10g those direct-path methods were required to get the data stored compressed on the target, and that starting with 10g compression is maintained automatically? It looks as if 10g also needs a manual move.
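    For reference, a hedged sketch of the network-mode (database link) import being asked about; the database link, directory, and table names are placeholders:
    impdp test/test network_link=source_db tables=compressed directory=dpu logfile=net_imp.log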

    ODM TEST:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> create table nocompres tablespace users as select * from dba_objects;
    Table created.
    SQL> create table compres_tab tablespace users as select * from dba_objects;
    Table created.
    SQL> alter table compres_tab compress 3;
    Table altered.
    SQL> alter table compres_tab move ;
    Table altered.
    select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
                  9 NOCOMPRES
    C:\Users\ML>expdp  maclean/oracle dumpfile=temp:COMPRES_TAB2.dmp  tables=COMPRES_TAB
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:12 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** dumpfile=temp:COMPRES_TAB2.dmp tables=COMPRES_TAB
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 3 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "MACLEAN"."COMPRES_TAB"                     7.276 MB   75264 rows
    Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
      D:\COMPRES_TAB2.DMP
    Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 12:01:20
    C:\Users\ML>impdp maclean/oracle remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:01:47 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MACLEAN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MACLEAN"."SYS_IMPORT_FULL_01":  maclean/******** remap_schema=maclean:maclean1 dumpfile=temp:COMPRES_TAB2.dmp
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "MACLEAN1"."COMPRES_TAB"                    7.276 MB   75264 rows
    Job "MACLEAN"."SYS_IMPORT_FULL_01" successfully completed at 12:01:50
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  3 COMPRES_TAB
    SQL> drop table compres_tab;
    Table dropped.
    C:\Users\ML>exp maclean/oracle tables=COMPRES_TAB file=compres1.dmp
    Export: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:19 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    About to export specified tables via Conventional Path ...
    . . exporting table                    COMPRES_TAB      75264 rows exported
    Export terminated successfully without warnings.
    C:\Users\ML>
    C:\Users\ML>imp maclean/oracle  fromuser=maclean touser=maclean1  file=compres1.dmp
    Import: Release 11.2.0.3.0 - Production on Fri Sep 14 12:03:45 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V11.02.00 via conventional path
    import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
    . importing MACLEAN's objects into MACLEAN1
    . . importing table                  "COMPRES_TAB"      75264 rows imported
    Import terminated successfully without warnings.
    SQL> conn maclean1/oracle
    Connected.
      1* select bytes/1024/1024 ,segment_name from user_segments where segment_name like '%COMPRES%'
    SQL> /
    BYTES/1024/1024 SEGMENT_NAME
                  8 COMPRES_TAB
    My understanding: a direct load always preserves compression.
    However, imp defaults to the conventional path, i.e. it loads the data with ordinary INSERTs through the buffer cache, so it cannot preserve compression.
    impdp, on the other hand, preserves compression whether the access_method is external_table or direct_path. (A rough way to verify this is sketched below.)
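    Segment size alone can be misleading, so a hedged way to check how the imported rows are actually stored is DBMS_COMPRESSION.GET_COMPRESSION_TYPE (available in 11.2); compare the values it returns against the COMP_* constants documented for the package:
    select dbms_compression.get_compression_type(user, 'COMPRES_TAB', rowid) as comp_type,
           count(*)                                                          as rows_sampled
    from   compres_tab
    where  rownum <= 1000
    group  by dbms_compression.get_compression_type(user, 'COMPRES_TAB', rowid);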

  • Problem - Synchronize with Database

    Need some help if anybody is out there
    JDeveloper 10.1.3.4
    Java 1.5
    BC-JSF
    I checked out my project and it worked just fine.
    Since I have altered one of my tables, I need to synchronize my entity object.
    I start the synchronization action, Synchronize with Database, and as usual my new column is added.
    The wizard also offers to remove and create some key constraints, one of which I added myself, but the others are SysC00...
    After synchronization, my application doesn't work.
    The error I'm getting is -
    500 Internal Server Error
    javax.faces.el.PropertyNotFoundException: Error testing property 'inputValue' in bean of type null     
    at com.sun.faces.el.PropertyResolverImpl.isReadOnly(PropertyResolverImpl.java:274)     
    at oracle.adfinternal.view.faces.model.FacesPropertyResolver.isReadOnly(FacesPropertyResolver.java:124)     
    at com.sun.faces.el.impl.ArraySuffix.isReadOnly(ArraySuffix.java:236)     
    at com.sun.faces.el.impl.ComplexValue.isReadOnly(ComplexValue.java:209)     
    at com.sun.faces.el.ValueBindingImpl.isReadOnly(ValueBindingImpl.java:266).....
    The strange thing is that this happens when I try to run/open any page of my application, even if it does not use the entity I have synchronized.
    If someone has any ideas...
    Thanx,
    Lana

    This is a JDev bug related to pageDef naming.
    Try this:
    1. Delete the form components you have in the Structure window and then just re-drag them onto your jspx page.

  • Cannot 'Synchronize with database' my entity objects

    Hello,
    I have successfully created entities in my model using the "New Business Components from Tables" function. But now my database model has changed and I would like to synchronize my entities with the database to pick up the newest columns; however, when right-clicking an entity, the "Synchronize with Database" item is greyed out and cannot be selected.
    I have a database connection configured in my application resources and I can still import new tables into my project.
    Do you have any idea what is happening? This is not the case for all my applications; I can still synchronize in some other ones.
    Thank you for your help
    Stephane

    Synchronize with Database should be available for an entity object.
    I think there is a problem with the particular entity that got created when you generated the business components from tables.
    "This is not the case for all my applications. I can still synchronize in some other ones"
    If this is not happening with any other EO in any other application, then you can try recreating the EO from the table.
    Compare what is different between an EO that can synchronize and your current EO that cannot.
    Another option is to check the application, in case it has some hidden property controlling synchronization.
    The last option is to go with Sameer's approach and suspect that the table itself is not right: in the same application, create an EO on another table and try to synchronize; that will get you to the point where the problem actually is.
    good luck.

  • Synchronize with database problem

    Hi,
    I'm using JSF ADF BC. I have two tables in the database and I've added a new field to each. I go back to JDeveloper, right-click both entities, click Synchronize with Database, select the new field I've created, and everything works fine. Then I go to the view objects, and here is the problem: in the first view object I can add the newly created and synchronized field as expected (mapped to a column or SQL), but in the second view, no matter what I do, it is always created as transient. I've tried deleting and synchronizing a couple of times, changing properties, looking at the view row impl and the view.xml, but it's always the same problem. Can anyone please help? I don't want to recreate the whole view from scratch, with every association, view link etc.
    Thanks in advance,
    Tomislav

    Hi,
    on the view object editor screen, select "Attributes" (left side of the screen), then click the "New" button (right side of the screen), fill in the attribute name, check "Map to Column or SQL", and fill in the query column. You are done.

  • Drop column from compressed table

    NLSRTL 11.2.0.3.0 Production
    Oracle Database 11g Enterprise Edition 11.2.0.3.0 - 64bit Production
    PL/SQL 11.2.0.3.0 Production
    TNS for Linux: 11.2.0.3.0 Production
    Hello,
    I read about how to drop a column from a compressed table: first set it unused and then drop the unused columns. However, in the example below, on the database where I ran it, it does not work. Please can you tell me WHEN this approach does not work? What does it depend on, parameters or something else? Why can I not drop the unused columns?
    And the example along with the errors:
    create table tcompressed compress as select * from all_users;
    > table TCOMPRESSED created.
    alter table tcompressed add x number;
    > table TCOMPRESSED altered.
    alter table tcompressed drop column x;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attemped.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    alter table tcompressed set unused column x;
    > table TCOMPRESSED altered.
    alter table tcompressed drop unused columns;
    >
    Error report:
    SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
    39726. 00000 -  "unsupported add/drop column operation on compressed tables"
    *Cause:    An unsupported add/drop column operation for compressed table
               was attemped.
    *Action:   When adding a column, do not specify a default value.
               DROP column is only supported in the form of SET UNUSED column
               (meta-data drop column).
    As you can see even after altering the table by setting the column X as unused I still cannot drop it by using DROP UNUSED COLUMNS.
    Thank you.

    Check this link, it might help. At the end it also mentions a bug; check that as well.
    http://sharpcomments.com/2008/10/ora-39726-unsupported-adddrop-column-operation-on-compressed-tables.html
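    As a heavily hedged sketch of the usual workaround on a basic-compressed table (it rewrites the whole table and leaves indexes unusable until rebuilt):
    alter table tcompressed move nocompress;
    alter table tcompressed drop unused columns;
    alter table tcompressed move compress;
    -- alter index <affected_index> rebuild;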

  • Importing into a compressed table

    Is importing into a compressed table effective?
    I mean, does the data get compressed when the imp utility is used to load into a compressed table?
    from the oracle doc :
    Compression occurs while data is being bulk inserted or bulk loaded. These operations include:
    ·Direct Path SQL*Loader
    ·CREATE TABLE … AS SELECT statement
    ·Parallel INSERT (or serial INSERT with an APPEND hint) statement

    A quick journey over to the SQL Reference manual, where things like table-create options are defined, gives us the following comment:
    table_compression
    The table_compression clause is valid only for heap-organized tables. Use this clause to instruct the database whether to compress data segments to reduce disk use. This clause is especially useful in environments such as data warehouses, where the amount of insert and update operations is small. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
    When you enable table compression, Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format. You can specify table compression for the following portions of a heap-organized table:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331
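    In practice that means loading the compressed table with a direct-path operation, for example (a sketch with made-up names):
    insert /*+ append */ into sales_compressed
    select * from sales_stage;
    commit;
    -- or recompress after a conventional-path import:
    -- alter table sales_compressed move compress;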

  • BUG: JDev 10.1.3 ADF BC Synchronize With Database not available

    I am using the "Business Components from Tables" wizard to create my ADF model.
    If I specify a package that is 3 nodes deep (a.b.model) then JDev creates a structure that looks like this...
    Applications
    --MyApplication
    ----Model
    ------Application Sources
    --------a.b.model
    If I specify a package that is 4 nodes deep (a.b.c.model) then JDev creates an extra level in the Navigator like this....
    Applications
    --MyApplication
    ----Model
    ------Application Sources
    --------a.b.c
    ----------model
    The side effect of this extra level is that the "Synchronize With Database" option is no longer available when bringing up the context menu for either the "a.b.c" node or the "model" node.
    Bob

    The problem appears not to be with the wizard.
    Instead, the problem is tied to the Navigator's flat-level control.
    If I increase the flat level to the point where the entire package appears in a single Navigator node, then the "Synchronize with Database" context menu item is available.
    If the flat level is decreased to the point where the package splits into multiple Navigator nodes, then the context menu item is no longer available.
    Bob

  • Why doesn't the "Synchronize with Item" property save?

    I created this table:
    create table syn (col1 number, col2 number);
    In Forms 6i I set COL1 as the value of the "Synchronize with Item" property for COL2.
    http://img169.imageshack.us/img169/6258/synar6.png
    When I test it at runtime it works correctly, but after I saved the data, exited Forms and went to the database to check that the data exists, I was surprised to find that COL2 contains no data at all; it is completely empty.
    SQL> select*from syn;
    COL1 COL2
    1
    2
    3
    4
    5
    6
    7
    8
    8 rows selected.
    Why doesn't the item (COL2) save the data that is inserted into it automatically by the "Synchronize with Item" property?

    i should use copy property not Synchronize ?
    That's meant for copying values from master to detail, so it would probably not work as you want within the same record. I think you should set the value of the secondary item in PRE-INSERT and PRE-UPDATE triggers, depending on what you're actually trying to accomplish (a small sketch follows below).
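    For example, a minimal PRE-INSERT / PRE-UPDATE trigger body on the block could just copy the value (assuming the block is named SYN like the table):
    -- PRE-INSERT and PRE-UPDATE trigger body (Forms PL/SQL)
    :SYN.COL2 := :SYN.COL1;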
