Problem with version-enabling tables with RIC
Hello,
I have a problem when I want to version-enable tables that have referential integrity constraints with a delete rule of CASCADE or SET NULL.
Is it possible that I can't version-enable the tables because of these constraints? How could I solve this problem if I want to keep the delete rule?
thanks,
Orsi Gyulai
Hi,
We are internally creating a <table_name>_g procedure that transfers privileges to the necessary users. If you have a table with that name, it would explain the error.
When using ignore_last_error, it will skip the statement that is selected from the all_wm_vt_errors view. Sometimes the statement can be safely skipped, while in other cases it cannot be. This procedure will always eventually complete when it is repeatedly called with ignore_last_error set to true. However, in doing so, some required objects or privileges may not exist or may be in an invalid state.
In your case, you most likely had to skip the 3 or so statements that dealt with the <table_name>_g procedure. Typically, these statements should not be skipped, but you may or may not see a problem with it due to a number of factors.
The best course of action may be to drop the trigger in a beginDDL/commitDDL session, and then recreate it in a separate session. Of course, only do this after renaming the <table_name>_g table that you have. Unfortunately, there is currently no way of getting around this naming convention.
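A rough sketch of that sequence, where every name is a placeholder (T1 for the versioned table, T1_G for the conflicting object, TRG_T1 for the trigger):

```sql
-- 1. Rename the conflicting <table_name>_g object first.
ALTER TABLE t1_g RENAME TO t1_g_old;

-- 2. Drop the trigger inside one DDL session.
EXECUTE DBMS_WM.BeginDDL('T1');
DROP TRIGGER trg_t1;
EXECUTE DBMS_WM.CommitDDL('T1');

-- 3. Recreate it in a separate DDL session; triggers on a
--    versioned table are defined against the _LTS skeleton.
EXECUTE DBMS_WM.BeginDDL('T1');
CREATE OR REPLACE TRIGGER trg_t1
BEFORE INSERT OR UPDATE ON t1_lts
FOR EACH ROW
BEGIN
  NULL; -- placeholder body
END;
/
EXECUTE DBMS_WM.CommitDDL('T1');
```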
Thanks,
Ben
Similar Messages
-
Row locking issue with version enabled tables
I've been testing the effect of locking in version-enabled tables in order to assess Workspace Manager restrictions when updating records in different workspaces. I have encountered a locking problem: I can't seem to update different records of the same table in different sessions if those same records have previously been updated and committed in another workspace.
I'm running the tests on 11.2.0.3. I have ROW_LEVEL_LOCKING set to ON.
Here's a simple test case (I have many other test cases which fail as well but understanding why this one causes a locking problem will help me understand the results from my other test cases):
--Change tablespace names as required
create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
exec dbms_wm.gotoworkspace('LIVE');
insert into t1 values ('1', 'name1');
insert into t1 values ('2', 'name2');
insert into t1 values ('3', 'name3');
commit;
exec dbms_wm.enableversioning('t1');
exec dbms_wm.gotoworkspace('LIVE');
exec dbms_wm.createworkspace('TESTWSM1');
exec dbms_wm.gotoworkspace('TESTWSM1');
--update 2 records in a non-LIVE workspace in preparation for updating in different workspaces later
update t1 set name = name||'changed' where id in ('1', '2');
commit;
quit;
--Now in a separate session (called session 1 for this example) run the following without committing the changes:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--Now in another session (session 2) update a different record from the same table. The below update will hang waiting on the transaction in session 1 to complete (via commit/rollback):
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
I'm surprised that records with different ids can't be updated in different sessions, i.e. why does session 1 block the update of record 2, which is not being updated anywhere else? I've tried this using different non-LIVE workspaces with similar results. I've also tried changing table properties, e.g. initrans, and still get a lock. The changes to table properties are successfully propagated to the _LT tables but not to all of the related Workspace Manager tables created for table T1 above; I'm not sure if this is the issue.
Note an example of the background workspace manager query that may create the lock is something like:
UPDATE TESTWSM.T1_LT SET LTLOCK = WMSYS.LT_CTX_PKG.CHECKNGETLOCK(:B6 , LTLOCK, NEXTVER, :B3 , 0,'UPDATE', VERSION, DELSTATUS, :B5 ), NEXTVER = WMSYS.LT_CTX_PKG.GETNEXTVER(NEXTVER,:B4 ,VERSION,:B3 ,:B2 ,683) WHERE ROWID = :B1
Any help with this will be appreciated. Thanks in advance.
Hi Ben,
Thanks for your quick response.
I've tested your suggestion and it does work with 2 workspaces, but the same problem is encountered when additional workspaces are created.
It seems that if multiple workspaces are used in a multi-user environment, locks will be inevitable, which will degrade performance, especially if a long transaction is used.
Deadlocks can also be encountered where eventually one of the sessions is rolled back by the database.
Is there a way of avoiding this e.g. by controlling the creation of workspaces and table updates?
I've updated my test case below to demonstrate the extra workspace locking issue.
--change tablespace name as required
create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
exec dbms_wm.gotoworkspace('LIVE');
insert into t1 values ('1', 'name1');
insert into t1 values ('2', 'name2');
insert into t1 values ('3', 'name3');
commit;
exec dbms_wm.enableversioning('t1');
exec dbms_wm.gotoworkspace('LIVE');
exec dbms_wm.createworkspace('TESTWSM1');
exec dbms_wm.gotoworkspace('TESTWSM1');
update t1 set name = name||'changed' where id in ('1', '2');
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
--end of original test case, start of additional workspace locking issue:
Session 1:
rollback;
Session 2:
rollback;
--update record in both workspaces
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '3';
commit;
exec dbms_wm.gotoworkspace('TESTWSM1');
update t1 set name = 'changed' where id = '3';
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
exec dbms_wm.gotoworkspace('LIVE');
exec dbms_wm.createworkspace('TESTWSM2');
exec dbms_wm.gotoworkspace('TESTWSM2');
update t1 set name = name||'changed2' where id in ('1', '2');
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--this now gets locked out by session 1
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
--update record 3 in TESTWSM2
exec dbms_wm.gotoworkspace('TESTWSM2');
update t1 set name = 'changed' where id = '3';
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--this is still locked out by session 1
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
--try updating LIVE
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '3';
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--this is still locked out by session 1
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
--try updating TESTWSM1 workspace too - so all have been updated since TESTWSM2 was created
exec dbms_wm.gotoworkspace('TESTWSM1');
update t1 set name = 'changed' where id = '3';
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--this is still locked out by session 1
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
--try updating every workspace afresh
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changedA' where id = '3';
commit;
exec dbms_wm.gotoworkspace('TESTWSM1');
update t1 set name = 'changedB' where id = '3';
commit;
exec dbms_wm.gotoworkspace('TESTWSM2');
update t1 set name = 'changedC' where id = '3';
commit;
Session 1:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '1';
--this is still locked out by session 1
session 2:
exec dbms_wm.gotoworkspace('LIVE');
update t1 set name = 'changed' where id = '2';
Session 1:
rollback;
Session 2:
rollback;
-
Applying the 10.2.0.4 database patchset with version-enabled tables
We recently applied the 10.2.0.4 database patchset to a database where we had a few version-enabled tables. Applying the patchset took a bit longer than expected and looking through the log, it took about 30 minutes to upgrade Workspace Manager.
1) I know in the past that Workspace Manager patches were separate from database patches. Metalink NOTE:341353.1 "Why does the Workspace Manager version differ from the current RDBMS patchset version" seems to indicate that this is no longer the case,
>
From patchset release V10.2.0.4 onwards, the Oracle Workspace Manager updates are integrated with the generic RDBMS patchsets.
>
But that statement seems to be an afterthought in a document clarifying a separate behavior. Is there another document (Metalink or otherwise) that discusses this change directly?
2) Does anyone have a good feel for how long applying this and future patchsets should take when there are potentially large version-enabled tables in the system? I am hoping/assuming that it is not a linear relationship, because the version-enabled tables we have at the moment are rather small. But I don't know whether the upgrade time is basically constant or whether there is a dependency on the size of the version-enabled tables, the number of version-enabled tables, etc.
I am concerned that we may end up with very large version-enabled tables which would require substantial downtime when we apply future patchsets, whether or not we needed the Workspace Manager fixes. I hope/expect that this is not the case, but thought I would double-check.
Justin
Hi Justin,
#1. I could not find another document that discusses this, but I may just be missing it. I will see if there is something else, and let you know if I find something.
#2. There is a fixed and a variable part of the upgrade for Workspace Manager. The fixed portion recompiles all of the packages/views, modifies metadata tables, etc. used directly by Workspace Manager. The other portion is done for each version-enabled table. First, the views and triggers maintained by Workspace Manager are rebuilt. This is generally a quick operation, as it is not dependent on the size of the table. There is also the potential for data migration for each of the rows of the versioned table. If needed, this would take much longer to complete, but this type of migration has not been needed since about version 9.2. If you are upgrading from a version newer than that, then this part of the migration would not be necessary.
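To gauge the per-table (variable) portion ahead of an upgrade, one could list the version-enabled tables alongside the size of their backing _LT segments. This is only a sketch; the join condition assumes the default _LT naming and requires access to DBA views:

```sql
SELECT v.owner, v.table_name,
       ROUND(s.bytes / 1024 / 1024) AS mb
FROM   all_wm_versioned_tables v
JOIN   dba_segments s
  ON   s.owner = v.owner
 AND   s.segment_name = v.table_name || '_LT'
ORDER  BY s.bytes DESC;
```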
Regards,
Ben
-
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the CBO (cost-based optimizer) is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
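For example, a sketch of gathering fresh statistics with DBMS_STATS; the schema and table names are taken from the plan above, and gathering on the _LT table (the table that backs the versioned table) is an assumption about what matters here:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SIG',
    tabname => 'SIG_QUA_IMG_LT',  -- the _LT table backing SIG_QUA_IMG
    cascade => TRUE);             -- also gather index statistics
END;
/
```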
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben
-
Unable to run BeginDDL on a version enabled table on OWM 9.2.0.8
Hello,
We running Oracle 9i with OWM 9.2.0.8
We are trying to run BeginDDL on a version-enabled table with approximately 2.8 million records in it. When we try running the statement:
BEGIN
DBMS_WM.BeginDDL('PFS_SPOT_SHOTS');
END;
We get the following error:
ORA-20203: enable/disable versioning or begin/commitDDL is being executed on PFSDB.PFS_SPOT_SHOTS
ORA-06512: at "WMSYS.OWM_DDL_PKG" line 3378
ORA-06512: at "WMSYS.LT", line 11827
ORA-06512: at line 2
We also tried running BeginDDL on another version-enabled table with no data in it:
BEGIN
DBMS_WM.BeginDDL('PFS_DUMMY_POINT');
END;
then we get the following error:
ORA-25150: ALTERING of extent parameters not permitted
ORA-06512: at "WMSYS.OWM_DDL_PKG", line 3378
ORA-06512: at "WMSYS.LT", line 11827
ORA-06512: at line 2
Any suggestions in regards to this problem?
Thanks
Gary
1) For the first table, PFS_SPOT_SHOTS, which we are having trouble with, the query returns the following:
select * from wmsys.all_wm_versioned_tables t
where table_name = 'PFS_SPOT_SHOTS';
TABLE_NAME OWNER STATE HISTORY NOTIFICATION NOTIFYWORKSPACES CONFLICT DIFF VALIDTIME
PFS_SPOT_SHOTS PFSDB RB_IND VIEW_WO_OVERWRITE NO NO NO YES
2) For the second table, PFS_DUMMY_POINT, the query returns the following:
select * from wmsys.all_wm_versioned_tables t
where table_name = 'PFS_DUMMY_POINT';
TABLE_NAME OWNER STATE HISTORY NOTIFICATION NOTIFYWORKSPACES CONFLICT DIFF VALIDTIME
PFS_DUMMY_POINT PFSDB VERSIONED VIEW_WO_OVERWRITE NO NO NO YES
Like PFS_DUMMY_POINT, the other version-enabled tables also return STATE: VERSIONED; only the PFS_SPOT_SHOTS table differs.
We have filed an SR; the number is 7599412994.
Thank you for your response.
-
Problem adding a spatial column to a version-enabled table
Hello, I'm trying to modify a version-enabled table T1 defined like this:
--table definition
create table T1
(ID NUMBER NOT NULL PRIMARY KEY,
NAME VARCHAR2 (256),
FUNCTION VARCHAR2 (256));
--enable versioning
EXECUTE DBMS_WM.EnableVersioning('T1','VIEW_WO_OVERWRITE');
I'd like to add one spatial column to this table by:
-- modify metadata view for spatial indexing
INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
VALUES ('T1_LT', 'ANCHOR',
MDSYS.SDO_DIM_ARRAY
(MDSYS.SDO_DIM_ELEMENT('X', 0.000, 100000.000, 0.0005),
MDSYS.SDO_DIM_ELEMENT('Y', 0.000, 100000.000, 0.0005),
MDSYS.SDO_DIM_ELEMENT('Z', -100, 1000, 0.0005)) , 81989002);
-- table modification - add a column and create a spatial index
EXECUTE DBMS_WM.BeginDDL('T1');
ALTER TABLE T1_LTS ADD ("ANCHOR" MDSYS.SDO_GEOMETRY);
CREATE INDEX T1_SPX on T1_LTS(ANCHOR) INDEXTYPE is MDSYS.SPATIAL_INDEX;
EXECUTE DBMS_WM.CommitDDL('T1');
When finishing the DDL operation with EXECUTE DBMS_WM.CommitDDL('T1'), I get an error message:
"SQL> EXECUTE DBMS_WM.CommitDDL('T1');
BEGIN DBMS_WM.CommitDDL('T1'); END;
ERROR at line 1:
ORA-20171: WM error: 'CREATE TABLE' and 'CREATE SEQUENCE' privileges needed.
ORA-06512: at "SYS.WM_ERROR", line 342
ORA-06512: at "SYS.WM_ERROR", line 359
ORA-06512: at "SYS.LTUTIL", line 8016
ORA-06512: at "SYS.LT", line 11925
ORA-06512: at line 1
What is wrong here? The Oracle 10g DB is installed on Windows 2003 Server, OWM_VERSION - 10.2.0.1.0.
Regards,
Viktor
Hi,
You need to explicitly grant the create table and create sequence privileges to the user owning the table. It is not enough for these to be granted by using a role. This restriction is documented in the user guide.
Also, you should add the entry in the user_sdo_geom_metadata view for the t1_lts table (the table to which you are actually adding the geometry column) after calling beginDDL. CommitDDL will then make the necessary changes, which in this case would be to add an entry for both t1 and t1_lt.
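A minimal sketch of the grants described above, assuming the table owner is a user named VIKTOR (a placeholder); these must be direct grants to the user, since privileges received through a role are not sufficient here:

```sql
-- Run as a suitably privileged user (e.g. SYSTEM).
GRANT CREATE TABLE TO viktor;
GRANT CREATE SEQUENCE TO viktor;
```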
Regards,
Ben
-
ORA-00600 problem when creating an XMLType table with a registered schema
Hi,
I am using Oracle9i Enterprise Edition Release 9.2.0.4.0 on RedHat Linux 7.2
I found a problem when I create a table using a registered schema with the following content:
<xs:element name="body">
<xs:complexType>
<xs:sequence>
</xs:sequence>
<xs:attribute name="id" type="xs:ID"/>
<xs:attribute name="class" type="xs:NMTOKENS"/>
<xs:attribute name="style" type="xs:string"/>
</xs:complexType>
</xs:element>
<xs:element name="body.content">
<xs:complexType>
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:element ref="p"/>
<xs:element ref="hl2"/>
<xs:element ref="nitf-table"/>
<xs:element ref="ol"/>
</xs:choice>
<xs:attribute name="id" type="xs:ID"/>
</xs:complexType>
</xs:element>
Does Oracle not support references from one element to another element whose name contains a dot?
For instance, body -> body.content
Thanks for your attention.
Sorry, an amendment to the schema:
<xs:element name="body">
<xs:complexType>
<xs:sequence>
<xs:element ref="body.head" minOccurs="0"/>
<xs:element ref="body.content" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="body.end" minOccurs="0"/>
</xs:sequence>
<xs:attribute name="id" type="xs:ID"/>
<xs:attribute name="class" type="xs:NMTOKENS"/>
<xs:attribute name="style" type="xs:string"/>
</xs:complexType>
</xs:element>
-
Unique constraint violation on version enabled table
hi!
We're facing a strange problem with a version-enabled table that has a unique constraint on one column. If we rename an object stored in the table (the name attribute of the object is the one with the unique constraint on the respective column) and then rename it back to its old name again, we get an ORA-00001 unique constraint violation on the execution of an update trigger.
If the constraint is simply applied to the now version-enabled table exactly as before, I understand why this happens, but shouldn't Workspace Manager take care of something like that when a table with unique constraints is version-enabled? (The documentation also says that.) Taking versioning into account, it's not that we are trying to insert another object with the same name; it's the same object, at another point in time, getting back its old name.
we somewhat assume that to be a pretty standard scenario when using versioned data.
is this some kind of bug or do we just miss something important here?
more information:
- versioning is enabled on all tables with VIEW_WO_OVERWRITE and no valid time support
- database version is 10.2.0.1.0
- wm installation output:
ALLOW_CAPTURE_EVENTS OFF
ALLOW_MULTI_PARENT_WORKSPACES OFF
ALLOW_NESTED_TABLE_COLUMNS OFF
CR_WORKSPACE_MODE OPTIMISTIC_LOCKING
FIRE_TRIGGERS_FOR_NONDML_EVENTS ON
NONCR_WORKSPACE_MODE OPTIMISTIC_LOCKING
NUMBER_OF_COMPRESS_BATCHES 50
OWM_VERSION 10.2.0.1.0
UNDO_SPACE UNLIMITED
USE_TIMESTAMP_TYPE_FOR_HISTORY ON
- all operations are done on LIVE workspace
any help is appreciated.
EDIT: we found out the following: the table we are talking about is the only table where the unique constraint was left in place, so there must have been a problem during version enabling. On another Oracle installation we did everything the same way, the unique constraint was not left there, and everything works fine.
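One way to check which constraints were left behind by version enabling is to query the standard dictionary views against the _LT table; a hedged sketch, assuming the versioned table is named T1 (a placeholder):

```sql
-- Unique constraints that survived on the versioned (_LT) table.
SELECT constraint_name, constraint_type, status
FROM   user_constraints
WHERE  table_name = 'T1_LT'
AND    constraint_type = 'U';
```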
regards,
Andreas Schilling
Message was edited by:
aschilling
-
Serializable transactions and initrans parameter for version enabled tables
Hi,
we want to use serializable transactions with version-enabled tables, so we need to set the initrans parameter >= 3 for such tables.
A change made during the BEGINDDL - COMMITDDL process is not applied to the LT table. I think the initrans parameter is not checked at all during the BEGINDDL-COMMITDDL process, because the skeleton table has initrans=1 even when the LT table has a different value for this parameter.
-- table GRST_K3_LT has initrans = 1
exec dbms_wm.beginddl('GRST_K3');
alter table grst_k3_lts initrans 3;
exec dbms_wm.commitddl('GRST_K3');
-- table GRST_K3_LT has initrans = 1
During enableversioning this parameter is not changed, so this script successfully sets initrans for versioned tables.
-- table GRST_K3 has initrans = 1
alter table grst_k3 initrans 3;
exec dbms_wm.enableversioning('GRST_K3','VIEW_WO_OVERWRITE');
-- table GRST_K3_LT has initrans = 3
We use OWM 10.1.0.3 version.
We cannot version-disable the tables. I understand that the change can be made after manually disabling the NO_WM_ALTER trigger.
Are there any problems with using serializable transactions when reading data in version-enabled tables? We will not use serializable transactions for changing data in version-enabled tables.
thanks for Your help
Jan Velešík
Hi,
You are correct. We do not currently support the initrans parameter during beginDDL/commitDDL. However, as you indicated, we will maintain any value that is set before enableversioning. If this is a critical issue for you, then please file a TAR and we can look into adding support for it in a future release.
Also, there are no known issues involving serializable transactions on versioned tables.
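As a sketch, a read-only serializable transaction against the versioned table from the example above (GRST_K3), using standard Oracle syntax:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Queries against the versioned view see one consistent snapshot.
SELECT COUNT(*) FROM grst_k3;

COMMIT;
```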
Thanks,
Ben
-
Unable to drop foreign key on a version-enabled table
Hi,
We're using Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit and I'm trying to delete a foreign key from a version-enabled table.
The constraint shows in the ALL_WM_RIC_INFO view. I run exec dbms_wm.beginddl('tablename'), but when I inspect the generated <tablename>_LTS I don't see any referential integrity constraints generated on that table. The constraint I'm trying to delete is from a version-enabled table to a non-version-enabled table, if that makes a difference.
From what I understand the referential integrity constraint would be generated and I would be able to run something like:
ALTER TABLE <tablename>_LTS DROP CONSTRAINT <constraintname>.
I tried running the above statement using the RIC_NAME from the ALL_WM_RIC_INFO view, but it predictably fails with:
ORA-02443: Cannot drop constraint - nonexistent constraint
Cause: alter table drop constraint <constraint_name>
Action: make sure you supply the correct constraint name.
As I ran into this today as well, I feel like answering this question, as I suppose that the thread opener made the same mistake as I did, and maybe some others do it as well :)
Of course you need to open a DDL session on the parent table as well in order to drop foreign key constraints, just as you do when you add them.
so the correct order to make it work would be:
EXECUTE DBMS_WM.BeginDDL('PARENT_TABLE');
EXECUTE DBMS_WM.BeginDDL('CHILD_TABLE');
ALTER TABLE CHILD_TABLE_LTS
DROP CONSTRAINT FOREIGN_KEY_NAME;
EXECUTE DBMS_WM.CommitDDL('CHILD_TABLE');
EXECUTE DBMS_WM.CommitDDL('PARENT_TABLE');
I felt kind of stupid that it took me 1 hour to figure this out ;)
regards,
Andreas
-
Altering Version enabled tables
Hallo,
I was wondering: what happens when you alter a version-enabled table?
I can't find anything about it in the Oracle Workspace Manager papers.
For instance: I have a table with columns A, B, C.
I add row1 at time1.
Then I delete column C and add a column D.
I add row2 at time2.
What do I get when I query the table at time1 and at time2?
gotodate(time1) -> row1 with ABC ? AB ? ABD? ABCD ? ...
gotodate(time2) -> row2 with ABD ? ABCD ? ...
How does Oracle store this information?
Thanks,
Bart
Hi,
You can't alter versioned tables directly. You need to use the beginDDL/commitDDL procedures. DDL changes are not themselves versioned, so once a change is done, it will apply for all time periods and historical records. So, in your example the table will show columns ABD for all time periods, with either null or default values used for the time periods when the columns did not yet exist. The data from the dropped B column is lost after committing the DDL change.
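A sketch of that DDL session for the example above; T1 and the column names are placeholders, and as described, the change applies across all time periods once CommitDDL completes:

```sql
EXECUTE DBMS_WM.BeginDDL('T1');

-- DDL is issued against the _LTS skeleton table.
ALTER TABLE t1_lts DROP COLUMN c;
ALTER TABLE t1_lts ADD (d VARCHAR2(100));

EXECUTE DBMS_WM.CommitDDL('T1');
```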
Regards,
Ben
-
[Solved]Problem in showing ADF tables with 50,000 Records and more in TP4
Hi,
When I tried to view a read-only ADF table in TP4, it loaded perfectly, but when I dragged the vertical scrollbar to the bottom record, it showed "Fetching Data ..." for a few minutes and finally reported java.lang.OutOfMemoryError: Java heap space.
When I tried to view a normal ADF table in TP4, it loaded perfectly, but when I selected one row, a popup error message came up and reported java.lang.NullPointerException for all of the columns in the table.
Similar functionality works beautifully in the JDeveloper 10g version. This seems to be a serious problem in handling cached rows.
Regards,
Santhosh
I am also getting a warning in OC4J console as given below:
WARNING: Application loader current-workspace-app.web.SRApplication-ViewController-webapp:0.0.0 may not use /D:/Appl/jdevstudio1111/j2ee/home/lib/oc4j-internal.jar (from <code-source> in META-INF/boot.xml in D:\Appl\jdevstudio1111\j2ee\home\oc4j.jar)
Message was edited by:
Santhosh1234
Message was edited by:
Santhosh1234
Hi!
You have two ways:
1. Build your own table CollectionModel that will use TopLink paged data source access (see Pedja's thread "Paging of large data sets - SOLUTION").
2. Use the new paged EJB finders that started to appear in TP4 when you use the wizard to generate SessionBeans (named something like "query<Entity>FindAllByRange"). I already posted a question about these new paged finders ("How to use new 'query<Entity>FindAllByRange' EJB finders?") but no answers from the devs yet :)
Personally, I opt for no. 2 (paged finders), as it looks more platform-supported than writing custom CollectionModels for each finder... But until the devs enlighten us with details of how the paged finders are to be used (af:table is not [yet?] supportive of drag-and-drop of paged finders), we are stuck with option 1.
Regards,
Pavle
-
JSF: Problems adding rows to table with custom method
Since this is my first post, I find it only appropriate to thank the development team for such a huge addition to the application development world. You guys rock. You have cured my Java identity crisis. That being said...
I have been stumped for days. I'm making a simple web cart. I am trying to get two string values (name and price) passed through a custom exposed application module method run from a backing bean. What I want the exposed VO module to do is, upon receiving the two strings, create a new row in the empty data-control table I have built to store them. Basically, I just need to add rows to a table with my custom managed bean. Most recent version.
I have written a test method to make sure my backing bean and exposed app module impl were working. This test works:
//WORKS
//CartInfo.java
public String cb1_action() {
    AppModuleImpl am = new AppModuleImpl();
    am.testMethod();
    return null;
}
//AppModuleImpl.java
public void testMethod() {
    System.out.println("WORKS");
}
I found some code on how to add rows and unfortunately it isn't working out for some reason. Here is what is producing the error "Caused By: java.lang.NullPointerException":
//DOESN'T WORK IN A CUSTOM METHOD BUT WORKS WHEN DRAGGED ONTO THE PAGE AS A COMMAND BUTTON
//CartInfo.java
public String cb1_action() {
    AppModuleImpl am = new AppModuleImpl();
    am.testMethod("test", "test");
    return null;
}
//AppModuleImpl.java
public void testMethod(String pName, String pPrice) {
    ViewObjectImpl vo = this.getCartVO1();
    Row r = vo.createRow();
    r.setAttribute("NAME", pName);
    r.setAttribute("PRICE", pPrice);
    vo.insertRow(r);
}
//The VO attributes for the DC are:
// Name : Type : Alias Name : Entity Usage : Info
NAME : String : NAME : (blank) : Transient
PRICE : String : PRICE : (blank) : Transient
So what I have so far is a table linked to my database. A user can select a product and click the row's individual Add button, and using a modified selection listener, it returns the values to the backing bean.
Then what I want it to do is have those variables passed to the other table. I just can't put it together.
Please if there is anyone that could send me in the right direction I would greatly appreciate it!
Thank you,
Jack
If I understand you, you want to call an application module method when the user clicks the Add button in the UI, and your table is from a VO.
So what I can come up with for now is:
1) Create attribute bindings from the iterator for both name and price.
2) On your UI action button:

<af:commandButton text="Add" actionListener="#{yourBean.addToCart}">
    <f:attribute name="pName" value="#{bindings.<created name bind variable>.inputValue}"/>
    <f:attribute name="pPrice" value="#{bindings.<created price bind variable>.inputValue}"/>
</af:commandButton>
3) Custom managed bean:

//yourBean.java
public void addToCart(ActionEvent e) {
    // get the pName/pPrice attribute values and the binding container here
    // then use an operation binding (executeWithParams-style) to pass the
    // parameters to testMethod and execute it
}
//AppModuleImpl.java
public void testMethod(String pName, String pPrice) {
    CartVOImpl vo = (CartVOImpl) this.getCartVO1();
    CartVORowImpl r = (CartVORowImpl) vo.createRow();
    r.setAttribute("NAME", pName);
    r.setAttribute("PRICE", pPrice);
    vo.insertRow(r);
}
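To make step 3 concrete, here is a hedged sketch of the managed bean method. It assumes the attribute names match the <f:attribute> tags above and that testMethod has been added to the page definition as a methodAction binding; all names are assumptions, so adjust them to your project:

```java
// yourBean.java -- sketch only; depends on the ADF runtime libraries
import java.util.Map;
import javax.faces.event.ActionEvent;
import oracle.adf.model.BindingContext;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

public class CartBean {
    public void addToCart(ActionEvent e) {
        // Read the values attached to the button via <f:attribute>
        Map<String, Object> attrs = e.getComponent().getAttributes();
        String name = (String) attrs.get("pName");
        String price = (String) attrs.get("pPrice");

        // Look up the methodAction binding for testMethod and execute it
        BindingContainer bindings =
            BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding op = bindings.getOperationBinding("testMethod");
        op.getParamsMap().put("pName", name);
        op.getParamsMap().put("pPrice", price);
        op.execute();
        if (!op.getErrors().isEmpty()) {
            // handle or log the errors raised by the application module call
        }
    }
}
```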
Hope this helps...
Let me know if not..
Thanks
Edited by: MavenDev on Oct 30, 2011 8:08 PM -
OID users (EUS): problem with grant create table with admin option
Hi,
We activated enterprise users in the OID.
There is a role APP_ADMIN that has the following grants:
create user
drop user
create table with admin option
This is for an application that creates BI schemas, so it needs to be able to create other users.
I have granted these to a local role, and the user has access to the local role, thanks to the OID setup.
The create and drop user work.
however, the grant create table to another user does not work.
Is there an issue with 'with admin option' grants in Enterprise user security?
Regards,
Peter

If I grant
grant create table to test_role with admin option;
it does not work
if I grant
GRANT GRANT ANY PRIVILEGE to test_role WITH ADMIN OPTION;
it does work.
The test command as user with test_role is:
grant create table to test_usr;
very strange!
If the user is a standard user and I create role test_role
and grant create table to test_role with admin option, it works.
But if I convert the user to an EUS user and the same privilege is given to the role (the local role is granted to a global role, which is mapped to an enterprise role),
it doesn't work.
Edited by: Peter on Dec 7, 2012 2:36 PM -
Exporting Data from one Server to Another server w/ Version Enabled Tables
Hi,
I'm currently having a problem with regard to exporting data to another server. This is the scenario:
The source server is a production server, with all of the tables in its schema version-enabled.
The destination server is a test server.
I exported data from the production server using the EXP command. Then, on my test server, I imported the data using the IMP command (I had already created the tablespace and user for the schema).
The import was successful on my test server, but when I execute my queries, no rows are returned.
I checked my _LT tables and they contain my data, but when I query the view that was created when versioning was enabled, no results are returned.
Am I missing something when exporting and importing my schema? Should I have included the WMSYS schema when I created the dump file?
Thanks in advance.

Hi Stefan,
We tried using Export and Import with Data Pump:
expdp system/password@orcl full=y directory=dmpdir2 dumpfile=FULL_DB.dmp
impdp system/password@orcl full=y table_exists_action=truncate directory=dmpdir2 dumpfile=FULL_DB.dmp
Still the same result as with exp and imp: the _LT tables have data, but when querying through the view, no results are found.