Unique Constraint Not Allowed in OA Table design?
Is it really true, or is my Oracle Apps admin/DBA just plain stupid for not allowing a unique constraint at the table level?
I have to implement an EO attribute-level validation that does a select on an unindexed field through the List Validator feature.
The problem is that it takes 40-60 seconds to commit anything through the EO.
That's true; if we're talking only about OAF, you don't need the constraint, as you can handle that in code!
--Mukul
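Even without a declared constraint, an index on the validated column usually fixes the slow commit, since the List Validator's SELECT stops doing a full scan. A hedged sketch with hypothetical names (xx_orders / order_ref stand in for the real table and column):

```sql
-- Hypothetical table/column names for illustration only.
-- A unique index both speeds up the validation SELECT and enforces
-- uniqueness at the database level, even with no declared constraint:
CREATE UNIQUE INDEX xx_orders_ref_uix ON xx_orders (order_ref);

-- If uniqueness genuinely cannot be enforced, a plain index still turns
-- the 40-60 second full scan into an index lookup:
-- CREATE INDEX xx_orders_ref_ix ON xx_orders (order_ref);
```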
Similar Messages
-
Unique constraint violation on version enabled table
hi!
we're facing a strange problem with a version-enabled table that has a unique constraint on one column. If we rename an object stored in the table (the name attribute of the object is the one that has a unique constraint on the respective column) and then rename it back to its old name again, we get an ORA-00001 unique constraint violation on the execution of an update trigger.
If the constraint were simply applied as before to the now version-enabled table, I would understand why this happens, but shouldn't Workspace Manager take care of something like that when a table with unique constraints is version-enabled? (The documentation also says so.) Taking versioning into account, it's not that we are trying to insert another object with the same name; it's the same object at another point in time getting its old name back.
We assume this to be a pretty standard scenario when using versioned data.
Is this some kind of bug, or are we just missing something important here?
more information:
- versioning is enabled on all tables with VIEW_WO_OVERWRITE and no valid time support
- database version is 10.2.0.1.0
- wm installation output:
ALLOW_CAPTURE_EVENTS OFF
ALLOW_MULTI_PARENT_WORKSPACES OFF
ALLOW_NESTED_TABLE_COLUMNS OFF
CR_WORKSPACE_MODE OPTIMISTIC_LOCKING
FIRE_TRIGGERS_FOR_NONDML_EVENTS ON
NONCR_WORKSPACE_MODE OPTIMISTIC_LOCKING
NUMBER_OF_COMPRESS_BATCHES 50
OWM_VERSION 10.2.0.1.0
UNDO_SPACE UNLIMITED
USE_TIMESTAMP_TYPE_FOR_HISTORY ON
- all operations are done on LIVE workspace
any help is appreciated.
EDIT: we found out the following: the table we are talking about is the only table where the unique constraint was left in place, so there must have been a problem during version-enabling. On another Oracle installation we did everything the same way, the unique constraint wasn't left there, and everything works fine.
regards,
Andreas Schilling
Message was edited by:
aschilling -
Hi,
I originally had a unique constraint on a table, on columns say C1, C2, C3. Later I added a 4th column, C4. I can see that the 4th column is part of the same index in all_cons_columns.
But when I try to insert a row with the same values for C1, C2, C3 but a different C4, it gives me the same unique key error, even though the value I am entering for C4 is unique in that table. What could cause this?
user13755008 wrote:
select * from all_cons_columns where constraint_name='HSRP_TRACK_UK';

OWNER  CONSTRAINT_NAME  TABLE_NAME  COLUMN_NAME  POSITION
NC3    HSRP_TRACK_UK    HSRP_TRACK  ODBID_EQP    1
NC3    HSRP_TRACK_UK    HSRP_TRACK  START_DATE   4
NC3    HSRP_TRACK_UK    HSRP_TRACK  OBJECT_NUM   2
NC3    HSRP_TRACK_UK    HSRP_TRACK
did some details get misplaced here?
select * from all_ind_columns where INDEX_NAME='HSRP_OBJ_INTER_ST_UIX';

INDEX_OWNER  INDEX_NAME             TABLE_OWNER  TABLE_NAME  COLUMN_NAME  COLUMN_POSITION  COLUMN_LENGTH  CHAR_LENGTH  DESC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  ODBID_EQP    1                22             0            ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  OBJECT_NUM   2                4              4            ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  INTERFACE    3                32             32           ASC
NC3          HSRP_OBJ_INTER_ST_UIX  NC3          HSRP_TRACK  START_DATE   4                7              0            ASC
select * from nc3.hsrp_track where odbid= 87820678;

ODBID:              87820678
CREATETMSTMP:       2013-01-18-11:29:29.285050
CREATEUSER:         NC3USER
CREATETRAN:         StoreRouting
LASTUPDTMSTMP:      2013-01-18-11:29:29.285050
LASTUPDUSER:        NC3USER
LASTUPDTRAN:        StoreRouting
ODBID_EQP:          87811116
ODBID_START_ORDER:  87811106
START_DATE:         2013-01-18
STOP_DATE:          9999-12-31
STATUS:             Started
COMPONENT_TYPE:     Equipment
OBJECT_NUM:         1
INTERFACE:          Serial0/0
PROCESS:            line-protocol
SQL> insert into NC3.HSRP_TRACK Values(3,'2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting', '2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting ', 87820377,'','','', '2013-01-18','9999-12-31','Started', 'Equipment', '1', 'Serial0/0', '');
insert into NC3.HSRP_TRACK Values(3,'2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting', '2013-01-18-11:29:29.436033', 'NC3USER ', 'StoreRouting ', 87820377,'','','', '2013-01-18','9999-12-31','Started', 'Equipment', '1', 'Serial0/0', '')
ERROR at line 1:
ORA-00001: unique constraint (NC3.HSRP_TRACK_UK) violated
SQL> desc nc3.hsrp_track
Name Null? Type
ODBID NOT NULL NUMBER(11)
CREATETMSTMP NOT NULL TIMESTAMP(6)
CREATEUSER NOT NULL CHAR(8)
CREATETRAN NOT NULL CHAR(16)
LASTUPDTMSTMP NOT NULL TIMESTAMP(6)
LASTUPDUSER NOT NULL CHAR(8)
LASTUPDTRAN NOT NULL CHAR(16)
ODBID_EQP NOT NULL NUMBER(11)
ODBID_START_ORDER NUMBER(11)
ODBID_STOP_ORDER NUMBER(11)
ODBID_PREV NUMBER(11)
START_DATE NOT NULL DATE
STOP_DATE NOT NULL DATE
STATUS NOT NULL CHAR(16)
COMPONENT_TYPE NOT NULL VARCHAR2(32)
OBJECT_NUM NOT NULL VARCHAR2(4)
INTERFACE NOT NULL VARCHAR2(32)
PROCESS VARCHAR2(15)
SQL> select * from nc3.hsrp_track where odbid_eqp=87820377;
no rows selected
Please see: since the odbid_eqp value that I am trying to insert is not present in the table, the unique constraint violation should not be raised in any case, right? I am confused.
Edited by: user13755008 on Jan 18, 2013 8:24 PM
Post the results from the SQL below:
SELECT COLUMN_NAME, POSITION, CONSTRAINT_NAME FROM ALL_CONS_COLUMNS
WHERE TABLE_NAME = 'HSRP_TRACK' AND OWNER = 'NC3'; -
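A further check that usually resolves this kind of confusion is to confirm which index actually enforces the constraint and compare both column lists side by side. A hedged sketch using the standard dictionary views (ALL_CONSTRAINTS exposes INDEX_NAME on modern Oracle versions):

```sql
-- Which index backs the unique constraint on this table?
SELECT constraint_name, index_name
FROM   all_constraints
WHERE  owner = 'NC3'
AND    table_name = 'HSRP_TRACK'
AND    constraint_type = 'U';

-- Compare the constraint's and the index's column lists directly:
SELECT 'CONSTRAINT' AS src, column_name, position AS pos
FROM   all_cons_columns
WHERE  owner = 'NC3' AND constraint_name = 'HSRP_TRACK_UK'
UNION ALL
SELECT 'INDEX', column_name, column_position
FROM   all_ind_columns
WHERE  index_owner = 'NC3' AND index_name = 'HSRP_OBJ_INTER_ST_UIX'
ORDER  BY src, pos;
```

If the two lists differ, the constraint is still being enforced by the old column set, regardless of what the index contains.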
Do not allow changes to table definition
Hi all,
I need to disallow any changes to a specific custom table's definition. I'm looking for something similar to the "lock editor" in SE38, applied to SE11. Can anybody help me?
Thanks in advance,
Gianluca
Hello Gianluca,
When you have created the table and the required entries have been made, change the data browser/table maintenance view attribute in the 'Delivery and Maintenance' tab to either 'Display/Maintenance Allowed with Restrictions' or 'Display/Maintenance Not Allowed', according to the requirement.
Hope this will resolve your issue.
Regards,
bhumika -
Crystal Reports XI does not allow changing a Table to a SQL Command?
I have a report that has a Table as its data source; this table is used in the report and all fields are mapped. I need to change the Table to a SQL Command with the same result set of columns. When I try to update it in Set Datasource Location, it does not work. CR XI allows updating a Command to a Table, but Table to Command just does nothing.
What do I have to do, or how can I do it?
Alexander,
That's probably the "Best" way to do it, and long term you'll want to start adding BOE to your work flow.
If you want to get around it, here how:
1) MAKE A COPY OF YOUR REPORT AND WORK FROM THE COPY!!! This involves a good deal of destruction before you get into reconstruction.
2) Once you have created your command, remove the table.
3) Now the fun part... Go through the report and manually change all references to the 1st table...
report fields
formulas
selection criteria
groups
the whole 9 yards...
A shortcut for the future... If you make formula copies of all of your fields ( fCustomerName = {Table.CustomerName} ) and then use only the formula version of each field in the report, you can make these changes very easily. (All you have to do is update the one set of formulas.)
Also, a side note before you get started... You may want to think twice before you mix commands with tables. You lose the server-side filtering and grouping on the tables when you do that. So if you have several tables, you are better off doing the whole thing in one SQL command: do all of your filtering and sorting there and use it to replace ALL of your tables.
Basically, Graham's way is the easy way... Assuming you have access to the BOE.
Jason -
Call HTTP web service action not allowing HTTPS in SharePoint Designer 2013.
HTTP works perfectly in browser and in workflow.
HTTPS works perfectly in browser and hangs in workflow.
Others having this issue have been advised to use OAuth2. OAuth2 is not an option without HTTPS, you're passing authorization tokens around in plaintext. This is not an acceptable solution.
Please advise.
It says HTTPS web services will have an issue with SharePoint Designer.
Thanks, Parth -
Unique constraint violation while updating a non-PK column
Hi,
I seem to have found this strange error.
What I am trying to do is bulk fetch a cursor into some table arrays, with a limit of 1000.
Then, using a FORALL with SAVE EXCEPTIONS at the end,
I update a table with the values inside one of the table arrays.
The column I update is not part of a PK.
I catch the error ORA-24381
by using PRAGMA exception_init(dml_errors, -24381);
and later on:
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
FOR i IN 1..sql%BULK_EXCEPTIONS.count LOOP
lr_logging.parameters:= 'index = ' || sql%BULK_EXCEPTIONS(i).error_index || 'error = ' ||Sqlerrm(-sql%BULK_EXCEPTIONS(i).error_code) ;
END LOOP;
I insert these errors into another table, and I get 956 errors.
first one is :
index = 3error = ORA-00001: unique constraint (.) violated
last one is
index = 1000error = ORA-00001: unique constraint (.) violated
How can this be, since I don't update a PK column?
FULL CODE IS:
PROCEDURE Update_corr_values( as_checkdate_from  IN VARCHAR2,
                              as_checkdate_until IN VARCHAR2,
                              as_market          IN VARCHAR2 )
IS
LS_MODULE_NAME CONSTANT VARCHAR2(30) := 'update_values';
lr_logging recon_logging.logrec;
CURSOR lc_update IS
SELECT /*+ORDERED*/c.rowid,c.ralve_record_id,d.value,c.timestamp,f.value
FROM rcx_allocated_values a,
rcx_allocated_values b,
meter_histories e,
rcx_allocated_lp_value c,
rcx_allocated_lp_value d,
counter_values f
WHERE a.slp_type NOT IN ('S89', 'S88', 'S10', 'S30') --AELP
AND b.slp_type IN ('S89', 'S88') --residu
AND a.valid_from >= to_date(as_checkdate_from,'DDMMYYYY HH24:MI')
AND a.valid_to <= to_date(as_checkdate_until,'DDMMYYYY HH24:MI')
AND a.market = as_market
AND a.market = b.market
AND a.ean_sup = b.ean_sup
AND a.ean_br = b.ean_br
AND a.ean_gos = b.ean_gos
AND a.ean_dgo = b.ean_dgo
AND a.direction = b.direction
AND a.valid_from = b.valid_from
AND a.valid_to = b.valid_to
AND c.ralve_record_id = a.record_id
AND d.ralve_record_id = b.record_id
AND c.TIMESTAMP = d.TIMESTAMP
AND e.ASSET_ID = 'KCF.SLP.' || a.SLP_TYPE
--AND f.timestamp between to_date(gs_checkdate_from,'ddmmyyyy') and to_Date(as_checkdate_until,'ddmmyyyy')
AND e.SEQ = f.MHY_SEQ
AND f.TIMESTAMP =c.timestamp - 1/24
ORDER BY c.rowid;
TYPE t_value IS TABLE OF RCX_ALLOCATED_LP_VALUE.VALUE%TYPE;
TYPE t_kcf IS TABLE OF COUNTER_VALUES.VALUE%TYPE;
TYPE t_timestamp IS TABLE OF RCX_ALLOCATED_LP_VALUE.TIMESTAMP%TYPE;
TYPE t_ralverecord_id IS TABLE OF RCX_ALLOCATED_LP_VALUE.RALVE_RECORD_ID%TYPE;
TYPE t_row IS TABLE OF UROWID;
ln_row t_row :=t_row();
lt_value t_value := t_Value();
lt_kcf t_kcf := t_kcf();
lt_timestamp t_timestamp := t_timestamp();
lt_ralve t_ralverecord_id := t_ralverecord_id();
v_bulk NUMBER := 1000;
val number;
kcf number;
ralve number;
times date;
dml_errors EXCEPTION;
errors NUMBER;
PRAGMA exception_init(dml_errors, -24381);
BEGIN
--setting arguments for the logging record
lr_logging.module := LS_MODULE_NAME;
lr_logging.context := 'INFLOW_ALL_VALUES_PARTS';
lr_logging.logged_by := USER;
lr_logging.parameters := 'Date time started: ' || TO_CHAR(sysdate,'DD/MM/YYYY HH24:MI');
-- log debugs
recon_logging.set_logging_env (TRUE, TRUE);
recon_logging.log_event(lr_logging,'D');
OPEN lc_update;
LOOP
FETCH lc_update BULK COLLECT INTO ln_row,lt_ralve,lt_value,lt_timestamp,lt_kcf LIMIT v_bulk;
FORALL i IN NVL(lt_value.first,1)..NVL(lt_value.last,0) SAVE EXCEPTIONS
UPDATE RCX_ALLOCATED_LP_VALUE
SET VALUE = VALUE * lt_value(i) * lt_kcf(i)
WHERE rowid =ln_row(i);
COMMIT;
lt_value.delete;
lt_timestamp.delete;
lt_ralve.delete;
lt_kcf.delete;
ln_row.delete;
EXIT WHEN lc_update%NOTFOUND;
END LOOP;
CLOSE lc_update;
recon_logging.log_event(lr_logging,'D');
lr_logging.parameters := 'Date time ended: ' || TO_CHAR(sysdate,'DD/MM/YYYY HH24:MI');
recon_logging.log_event(lr_logging,'D');
--to be sure
COMMIT;
EXCEPTION
WHEN dml_errors THEN
recon_logging.set_logging_env(TRUE,TRUE);
lr_logging.module := 'updatevalues';
lr_logging.context := 'exception';
lr_logging.logged_by := USER;
lr_logging.parameters := 'in dml_errors';
recon_logging.log_event(lr_logging);
errors := SQL%BULK_EXCEPTIONS.COUNT;
lr_logging.parameters:=errors;
recon_logging.log_event(lr_logging);
lr_logging.parameters :=('Number of errors is ' || errors);
--DBMS_OUTPUT.PUT_LINE('Number of errors is ' || errors);
FOR i IN 1..sql%BULK_EXCEPTIONS.count LOOP
lr_logging.parameters:= 'index = ' || sql%BULK_EXCEPTIONS(i).error_index || 'error = ' ||Sqlerrm(-sql%BULK_EXCEPTIONS(i).error_code) ;
recon_logging.log_event(lr_logging);
END LOOP;
--recon_logging.set_logging_env(TRUE,TRUE);
--recon_logging.log_event(lr_logging);
commit;
WHEN OTHERS THEN
lr_logging.module := 'updatevalues';
lr_logging.context := 'exception';
lr_logging.logged_by := USER;
recon_logging.set_logging_env(TRUE,TRUE);
lr_logging.parameters := 'in others error=' || SQLERRM;
recon_logging.log_event(lr_logging);
commit;--to look which is truly the last (else only commit after 1000)
--raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END Update_corr_values;
Hi,
No, I didn't update a unique constraint.
But I found out that there is a trigger that causes the unique constraint violation while updating the table.
Silly mistake. I didn't know there was a trigger there.
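For anyone else hitting the same symptom: a trigger on the updated table can perform its own DML against a uniquely constrained table, so the ORA-00001 comes from the trigger's statement rather than the UPDATE itself. A minimal sketch with hypothetical names:

```sql
-- Hypothetical reproduction: the UPDATE touches no unique column,
-- but the row-level trigger inserts into a table with a unique key.
CREATE TABLE t_values (id NUMBER PRIMARY KEY, val NUMBER);
CREATE TABLE t_audit  (id NUMBER, CONSTRAINT t_audit_uk UNIQUE (id));
INSERT INTO t_values VALUES (1, 10);

CREATE OR REPLACE TRIGGER t_values_au
AFTER UPDATE ON t_values
FOR EACH ROW
BEGIN
  -- A second update of the same row raises ORA-00001 (T_AUDIT_UK) here.
  INSERT INTO t_audit (id) VALUES (:new.id);
END;
/

UPDATE t_values SET val = val * 2;  -- succeeds, audit row inserted
UPDATE t_values SET val = val * 2;  -- ORA-00001 raised from inside the trigger
```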
Thx anyway.
Greetz -
Foreign key also refer to unique constraint??
A foreign key can also refer to a unique constraint.
(GREAT...)
1. Then does the table that contains the unique constraint act as a master table?
2. Can a unique constraint be replaced with a primary key?
3. Does a unique constraint + NOT NULL give all the functionality of a primary key constraint?
4. If primary key = unique + not null, then what is the use of a primary key?
thanks
kuljeet pal singh
When you are establishing a foreign key relationship between two tables, a child record must point to a unique record in the parent table. Typically, the child record points to the primary key of the parent, although any unique field or fields in the parent will do.
So, a table with a unique constraint can act as a parent table in a foreign key relationship.
A unique constraint may be replaced with a primary key, but not neccessarily.
A unique constraint plus a not null constraint is functionally identical to a primary key.
The principal benefit of a primary key compared to unique plus not null is that it provides additional information to someone looking at the database. The primary key is the unchanging identifier for a particular record. A unique constraint plus a not null constraint only implies uniqueness. It is somewhat common for unique values to change over time, as long as they remain unique, but a primary key should never change.
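For instance, the point about referencing any unique parent column can be sketched in DDL (hypothetical table names):

```sql
-- The parent's uniqueness can come from a UNIQUE constraint, not only a PK.
CREATE TABLE parent (
  id   NUMBER PRIMARY KEY,
  code VARCHAR2(10) NOT NULL,
  CONSTRAINT parent_code_uk UNIQUE (code)
);

-- The child may reference either unique identifier of the parent:
CREATE TABLE child (
  id          NUMBER PRIMARY KEY,
  parent_code VARCHAR2(10),
  CONSTRAINT child_parent_fk FOREIGN KEY (parent_code)
    REFERENCES parent (code)  -- targets the UNIQUE constraint, not the PK
);
```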
TTFN
John -
Hi everybody
I can't understand why my Data Pump import execution with the parameter TABLE_EXISTS_ACTION=TRUNCATE returned ORA-00001 (unique constraint violation) while importing schema tables from a remote database, where the source tables have the same unique constraints as the corresponding ones in the target database.
Now my question is: if the table was truncated, why do I get a unique constraint violation when inserting records from a table where the same unique constraint is validated?
Here are the used parameter file content and the impdp logfile.
parfile
{code}
DIRECTORY=IMPEXP_LOG_COLL2
CONTENT=DATA_ONLY
NETWORK_LINK=PRODUCTION
PARALLEL=1
TABLE_EXISTS_ACTION=TRUNCATE
EXCLUDE=STATISTICS
{code}
logfile
{code}
Import: Release 10.2.0.1.0 - Production on Giovedì 22 Ottobre, 2009 15:33:44
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connesso a: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "IMPEXP"."PROVA_REFRESH_DBCOLL": impexp/********@dbcoll SCHEMAS=test_pump LOGFILE=test_pump.log parfile=refresh_dbcoll.par JOB_NAME=prova_refresh_dbcoll
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 523 MB
ORA-31693: Table data object "TEST_PUMP"."X10000000_TRIGGER" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (TEST_PUMP.SYS_C00726627) violated
ORA-31693: Table data object "TEST_PUMP"."X10000000_BASIC" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (TEST_PUMP.SYS_C00726625) violated
Job "IMPEXP"."PROVA_REFRESH_DBCOLL" completed with 2 error(s) at 15:34:04
{code}
Thank you
Bye Alessandro
I forgot to read the last two lines of the documentation about TABLE_EXISTS_ACTION, where it says:
"TRUNCATE cannot be used on clustered tables or over network links."
So it seems that it ignored the clause because of the use of NETWORK_LINK and silently opted for an APPEND action, instead of throwing an error to highlight the conflicting parameters in the configuration used.
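Since TRUNCATE is ignored over a network link, one hedged workaround is to truncate the target tables yourself before running impdp. A sketch driven by the data dictionary, using the TEST_PUMP schema name from the log (disable or defer any foreign keys first):

```sql
-- Truncate every table in the target schema before the network-link import,
-- so the subsequent APPEND behaves like the intended TRUNCATE-then-load.
BEGIN
  FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'TEST_PUMP') LOOP
    EXECUTE IMMEDIATE 'TRUNCATE TABLE TEST_PUMP.' || t.table_name;
  END LOOP;
END;
/
```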
Bye Alessandro -
ORA-00001 UNIQUE CONSTRAINT ERROR ON SYS.WRH$_SQLTEXT_PK?
/u01/app/oracle/admin/bill02/bdump/bill02_m000_10074.trc
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.1.0
System name: SunOS
Node name: bill02
Release: 5.9
Version: Generic_117171-17
Machine: sun4u
Instance name: bill02
Redo thread mounted by this instance: 1
Oracle process number: 55
Unix process pid: 10074, image: oracle@bill02 (m000)
*** 2005-10-27 02:03:14.682
*** ACTION NAME:(Auto-Flush Slave Action) 2005-10-27 02:03:14.667
*** MODULE NAME:(MMON_SLAVE) 2005-10-27 02:03:14.667
*** SERVICE NAME:(SYS$BACKGROUND) 2005-10-27 02:03:14.667
*** SESSION ID:(519.24807) 2005-10-27 02:03:14.667
*** KEWROCISTMTEXEC - encountered error: (ORA-00001: unique constraint (SYS.WRH$_SQLTEXT_PK) violated
*** SQLSTR: total-len=570, dump-len=240,
STR={INSERT INTO wrh$_sqltext (sql_id, dbid, sql_text, command_type, snap_id, ref_count) SELECT /*+ ordered use_nl(s1) index(s1) */ sie.sqlid_kewrsie, :dbid, s1.sql_fulltext, }
*** KEWRAFM1: Error=13509 encountered by kewrfteh
How to fix?
Thanks!
Have you tried searching Metalink? A unique constraint violation on a SYS table by MMON will almost certainly require Oracle Support's assistance to resolve. Something rather deep inside Oracle has done something wrong.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
VLD-2769 DB link not allowed error
version: OWB 10.1.0
situation: the target dimension object doesn't reside in the same schema as the mapping.
error: VLD-2769 DB link not allowed for target table
Is there a way to use public synonyms or something else to overcome this problem in OWB, without having to place the mapping in the same schema as the dimension object or, say, a table?
Thank you
Thanks for the reply.
I have a db link for it, but I don't think that's the problem; I even have a public synonym for the table.
Oracle tells me that they don't support a mapping in one schema calling a target object in another schema, BUT they said there was a workaround.
That is what I am trying to find out...
thanks -
Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field I cannot insert a dataset anymore
(29 LONG fields).
Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
Thanks in advance
Michael
appendix:
- Create and Insert command and error message
- MaxDB version and its parameters
Create and Insert command and error message
CREATE TABLE "DBA"."AZ_Z_TEST02" (
"ZTB_ID" Integer NOT NULL,
"ZTB_NAMEOFREPORT" Char (400) ASCII DEFAULT '',
"ZTB_LONG_COMMENT" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_00" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_01" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_02" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_03" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_04" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_05" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_06" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_07" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_08" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_09" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_10" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_11" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_12" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_13" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_14" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_15" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_16" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_17" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_18" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_19" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_20" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_21" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_22" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_23" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_24" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_25" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_26" LONG ASCII DEFAULT '',
PRIMARY KEY ("ZTB_ID")
The insert command
INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
works fine. If I add the LONG field
"ZTB_LONG_TEXTBLOCK_27" LONG ASCII DEFAULT '',
the following error occurs:
Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
MaxDB version and its parameters
All db params given by
dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
are
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
SERVERDBFOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 10
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 64000
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 64000
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 22
MULTIO_BLOCK_CNT 4
DELAYLOGWRITER 0
LOG_IO_QUEUE 50
RESTARTTIME 600
MAXCPU 1
MAXUSERTASKS 50
TRANSRGNS 8
TABRGNS 8
OMSREGIONS 0
OMSRGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
ROWRGNS 8
MINSERVER_DESC 16
MAXSERVERTASKS 20
_MAXTRANS 288
MAXLOCKS 2880
LOCKSUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
USEASYNC_IO YES
IOPROCSPER_DEV 1
IOPROCSFOR_PRIO 1
USEIOPROCS_ONLY NO
IOPROCSSWITCH 2
LRU_FOR_SCAN NO
PAGESIZE 8192
PACKETSIZE 36864
MINREPLYSIZE 4096
MBLOCKDATA_SIZE 32768
MBLOCKQUAL_SIZE 16384
MBLOCKSTACK_SIZE 16384
MBLOCKSTRAT_SIZE 8192
WORKSTACKSIZE 16384
WORKDATASIZE 8192
CATCACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 1632
INIT_ALLOCATORSIZE 229376
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
TASKCLUSTER01 tw;al;ut;2000sv,100bup;10ev,10gc;
TASKCLUSTER02 ti,100dw;30000us;
TASKCLUSTER03 compress
MPRGN_QUEUE YES
MPRGN_DIRTY_READ NO
MPRGN_BUSY_WAIT NO
MPDISP_LOOPS 1
MPDISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
MPRGN_PRIO NO
MAXRGN_REQUEST 300
PRIOBASE_U2U 100
PRIOBASE_IOC 80
PRIOBASE_RAV 80
PRIOBASE_REX 40
PRIOBASE_COM 10
PRIOFACTOR 80
DELAYCOMMIT NO
SVP1_CONV_FLUSH NO
MAXGARBAGECOLL 0
MAXTASKSTACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
DWIO_AREA_SIZE 50
DWIO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
FBMLOW_IO_RATE 10
CACHE_SIZE 10000
DWLRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
DATACACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
IDXFILELIST_SIZE 2048
SERVERDESC_CACHE 73
SERVERCMD_CACHE 21
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
READAHEADBLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_C5
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 648
EXTERNAL_DUMP_REQUEST NO
AKDUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
UTILITYPROTFILE dbm.utl
UTILITY_PROTSIZE 100
BACKUPHISTFILE dbm.knl
BACKUPMED_DEF dbm.mdf
MAXMESSAGE_FILES 0
EVENTALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3607
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-13 13:47:17
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
DIAGSEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 21333
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION IMPROVED
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO
> Lars Breddemann wrote:
> Hi Michael,
>
> this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
> Really.
>
> Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
> Anyhow, when I use
>
> insert into "AZ_Z_TEST02" values (87,'','','','','','','','','','','','','','','',''
> ,'','','','','','','','','','','','','','','','')
>
> it works fine.
It solves my problem. Thanks a lot. -- I can hardly believe that this is all that was needed to get around the bug. That may be the reason why I had not given it a try.
>
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
>
> Now to the other errors:
> - 28 Long values per row?
> What the heck is wrong with the data design here?
> Honestly, you can save data up to 2 GB in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
>
> - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing that you get 10000 of reports with the same name...
You are right. This table looks a bit strange. The story behind it is: each Crystal Report in the application has a few text blocks which are the same for all of the e.g. persons the e.g. letter is created for. In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. Thus, I put the texts of the text blocks into this "strange" db table (one row for each report, one field for each text block; the name of the report is given by "ztb_nameofreport"), and the application offers a menu by which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the SQL select command of the Crystal Report considerably if they were integrated into that select command. Thus it is realized another way: the texts are read before the Crystal Report is loaded, then the texts are passed to the Crystal Report (via its parameters), and finally the Crystal Report is loaded.)
>
> - MaxDB 7.5 Build 26?? Where have you been the last few years?
> Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
> With 7.6. I was not able to reproduce your issue at all.
The customer still has Win98 clients. The MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
>
- Are you really putting your data into the DBA schema? Don't do that, ever.
> DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
> Create a user/schema for your application data and put your tables into that.
>
> KR Lars
In the first MaxDB version I used, schemas were not available, and I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
Michael -
Commit not allowed in Trigger then how to store the values in table
Hi,
A database trigger is not allowed to COMMIT, either directly or in a procedure called from the trigger; it gives an error.
So my questions are:
1) How does the database internally store the INSERTed and UPDATEd values written by the procedure, without a COMMIT?
2) Is it necessary to write a COMMIT in a procedure using PRAGMA AUTONOMOUS_TRANSACTION to store the inserted or updated values in the table?
Thanks in advance.
Hi,
A trigger is designed to be part of a transaction, not its end.
It is like following these steps (not really accurate, but should give an idea):
1. programA issues INSERT statement on tableA
2. 'control' goes over to the database
3. constraint checks happen
4. the associated triggers on tableA do their work (e.g. fetching a sequence value for a primary key)
5. 'control' goes back to programA
6. programA decides if a commit (no error occurred) or a rollback (error case) has to be issued.
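Regarding question 2 above, here is a minimal sketch of the one case where a COMMIT can legally appear in trigger-called code, via an autonomous transaction. The table, trigger, and column names are illustrative, not from the original question:

```sql
-- Illustrative: the procedure runs in its own transaction, so its
-- COMMIT does not end the caller's (triggering) transaction.
CREATE OR REPLACE PROCEDURE log_change (p_msg VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO change_log (logged_at, msg) VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only the autonomous transaction
END;
/

CREATE OR REPLACE TRIGGER trg_tablea_aiu
  AFTER INSERT OR UPDATE ON tableA
  FOR EACH ROW
BEGIN
  log_change('row changed: ' || :NEW.id);
END;
/
```

Note the caveat: the logged row persists even if the main transaction later rolls back, which is why autonomous transactions suit auditing but not ordinary data changes.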
Did this help? Probably not, I'm not happy with what I wrote, but anyway... :) -
Insert result of query into a table with unique constraint
Hi,
I have a query result that I would like to store in a table. The target table has a unique constraint. In MySQL you can do
insert IGNORE into myResultTable <...select statement...>
The IGNORE clause means if inserting a row would violate a unique or primary key constraint, do not insert the row, but continue inserting the rest of the query. Leaving the IGNORE clause out would cause the insert to fail and an error to return.
I would like to do this in Oracle, that is, insert the results of a query that are not already in the target table. What is the best way to do this? One way is to use a procedural language and loop through the first query, checking whether each row is a duplicate before inserting it. I would think this would be slow if there are lots of records. Other options:
insert into myTargetTable
select value from mySourceTable where ... and not exists (select 'x' from myTargetTable where value = mySourceTable.value)
insert into myTargetTable
select mySourceTable.value
from myTargetTable RIGHT JOIN mySourceTable
ON myTargetTable.value = mySourceTable.value
where ...
and myTargetTable.value IS NULL
any other suggestions?
Thanks,
Simon
Try doing a MINUS instead of NOT EXISTS, i.e. Source MINUS Target.
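Besides MINUS, two other common Oracle idioms fit here. A hedged sketch, reusing the hypothetical table names from the question (the WHERE conditions from the original are omitted):

```sql
-- Option 1: MERGE inserts only rows with no match in the target.
MERGE INTO myTargetTable t
USING (SELECT value FROM mySourceTable) s
ON (t.value = s.value)
WHEN NOT MATCHED THEN
  INSERT (value) VALUES (s.value);

-- Option 2: DML error logging (10gR2+), the closest analogue to
-- MySQL's IGNORE: rows violating the unique constraint go to an
-- error table instead of aborting the whole statement.
-- One-time setup: EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('MYTARGETTABLE');
INSERT INTO myTargetTable
SELECT value FROM mySourceTable
LOG ERRORS REJECT LIMIT UNLIMITED;
```

MERGE avoids re-reading the target per row, and error logging keeps the insert set-based while tolerating duplicates.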
Disabling the constraint will not help you since this will allow the duplicate rows to be inserted into the table. I don't think you want this.
--kalpana -
NOT NULL Unique Constraint in Data Modeler
I've created Unique Constraints in the Relational Model and I'm trying to figure out how to make it a NOT NULL constraint.
Let's say the table name is category with columns cat_id, cat_name, sort.
In SQL I would write "ALTER TABLE category MODIFY (cat_name CONSTRAINT xxx_cat_name_nn NOT NULL);", but inside the modeler there are no data entry points in the [Unique Key Properties - xxx_cat_name_nn] dialog box, that I can find, that let me tell it that it is a NOT NULL constraint. I'm sure there is a way but I'm just falling over my own feet trying to find it.
Any help would be greatly appreciated.
Edited by: 991065 on Feb 28, 2013 1:40 PM
Hi,
You can make the column NOT NULL by unsetting the "Allow Nulls" property for the Column.
If you want a named NOT NULL Constraint, you should also set the "Not Null Constraint Name" property (on the Default and Constraint tab of the Column Properties dialog).
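For reference, a sketch of the DDL Data Modeler would then generate. The column types and the unique constraint name are assumptions; only the table name, column names, and NOT NULL constraint name come from the question:

```sql
CREATE TABLE category (
  cat_id   NUMBER        NOT NULL,
  cat_name VARCHAR2(100) CONSTRAINT xxx_cat_name_nn NOT NULL,
  sort     NUMBER,
  CONSTRAINT category_uk UNIQUE (cat_name)  -- hypothetical UK name
);
```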
David