SQLServer migration: trigger 'MD_PROJECTS_TRG' is invalid
Has anybody else come across this? I'm trying to migrate some objects from SQL Server, but I cannot capture the database. I did everything as the help says:
- Created a user to perform the migration (MIGRACAO) and a user to be the destination of the migration (INFOAC) in the Oracle database.
- Created and connected both of those users in SQL Developer.
- Created and connected a user for SQL Server (MS_INFOAC) in SQL Developer.
When I try to capture the database, a window pops up and I wait a long time, but nothing happens; only the Close button is enabled, with no objects in the grid. So I suppose the capture failed, or something like it.
When I close it, I see the following error in the migration log:
oracle.dbtools.metadata.persistence.PersistenceException: ORA-04098: trigger 'MIGRACAO.MD_PROJECTS_TRG' is invalid and failed re-validation
I inspected the MD_PROJECTS_TRG trigger owned by MIGRACAO, and it is just this:
create or replace TRIGGER "MD_PROJECTS_TRG" BEFORE INSERT OR UPDATE ON MD_PROJECTS
FOR EACH ROW
BEGIN
  if inserting and :new.id is null then
    :new.id := MD_META.get_next_id;
  end if;
END;
Trying to compile it I got this compiler message:
Error(7,20): PLS-00905: object M_SQLSERVER.MD_META is invalid
Alright, so I inspected the MD_META object, and I noticed that it's an unusual PL/SQL package, as you can see:
create or replace
PACKAGE BODY "MD_META" wrapped
a000000
1
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
b
59a 327
yKgUGt13cb4L+
Trying to compile this I got these two errors:
Error(1): PLS-00908: The stored format of MD_META is not supported by this release
Error(22,2): PLS-00707: unsupported construct or internal error [2702]
And not only this package: two others have the same errors. I'm afraid I cannot use the migration tool because of some kind of incompatibility.
SQLDeveloper: 1.1.2.25
Oracle: 9.0.2.4
Why would you want to migrate to 9.0.2? It is well beyond its support period. Surely any new project should be using 10g?
As it happens, you don't have to migrate to 10g - you just have to use 10g as the repository.
You could also try the older Migration Workbench.
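Before retrying the capture, it may help to check which repository objects are invalid and whether they recompile at all. PLS-00908 indicates the wrapped package body was produced by a newer release's wrap utility than the 9.0.2 database understands, so recompilation alone won't fix those; still, listing the invalid objects shows the extent of the problem. A minimal sketch (object names taken from this thread):

```sql
-- List invalid objects in the migration repository schema.
SELECT object_name, object_type, status
  FROM all_objects
 WHERE owner = 'MIGRACAO'
   AND status = 'INVALID';

-- Attempt recompilation; this succeeds for merely stale objects, but a
-- wrapped body in an unsupported stored format will stay invalid.
ALTER PACKAGE MIGRACAO.MD_META COMPILE BODY;
ALTER TRIGGER MIGRACAO.MD_PROJECTS_TRG COMPILE;
```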
Similar Messages
-
ORA-04098: trigger 'BANKTRAN_BEF_DEL' is invalid and failed re-validation
Hey Experts,
I created the following trigger successfully...
create or replace trigger BANKTRAN_BEF_DEL
before delete on BANKTRAN
declare
  weekend_error EXCEPTION;
  not_authenticated_user EXCEPTION;
begin
  if TO_CHAR(SysDate,'DY') = 'SAT' or TO_CHAR(SysDate,'DY') = 'SUN' THEN
    RAISE weekend_error;
  end if;
  if SUBSTR(User,1,3) <> 'ATN' THEN
    RAISE not_authenticated_user;
  end if;
EXCEPTION
  WHEN weekend_error THEN
    RAISE_APPLICATION_ERROR (-20001,
      'Deletions not allowed on weekends');
  WHEN not_authenticated_user THEN
    RAISE_APPLICATION_ERROR (-20002,
      'Deletions only allowed by authenticated users');
end;
but when I delete records with the statement delete from BANKTRAN
I get the error below:
ORA-04098: trigger 'BANKTRAN_BEF_DEL' is invalid and failed re-validation
Edited by: SShubhangi on Jan 7, 2013 4:21 PM
Alright. Now try the DML that causes the trigger to fire, and post the details.
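ORA-04098 means the trigger failed to compile, so before re-running the DELETE it is worth pulling the compilation errors directly. A sketch, run as the trigger's owner:

```sql
-- Show why BANKTRAN_BEF_DEL is invalid.
SELECT line, position, text
  FROM user_errors
 WHERE name = 'BANKTRAN_BEF_DEL'
   AND type = 'TRIGGER'
 ORDER BY sequence;

-- Force re-validation; if this succeeds, the DELETE will fire the trigger.
ALTER TRIGGER BANKTRAN_BEF_DEL COMPILE;
```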
PS: Please use {noformat}{noformat} before and after SQL statements/results or code samples. It makes the post more readable and you get better help. -
Trigger position is invalid??? PXI-5102 digitizer
I'm currently trying to set up 2 channels to read at 8 MS/s apiece for 1 second. I'm getting the error "Trigger position is invalid," although I don't even see a trigger position option where I'm getting the error. The error occurs at the Scope Horiz Config VI. I have a picture of the setup I'm using; please let me know if I'm doing something wrong. Thank you very much, I'm very puzzled over this.
-Mark
The 5102 cannot have more pretrigger points than it has onboard buffer memory - 663,000 bytes. By default, the trigger position is 50%, or halfway through the buffer. Taking 8 million points/channel on both channels allows you 331,500 points/channel, or a maximum trigger position of about 4%.
Set the trigger position using the reference position input of niScope Configure Horizontal Timing.vi. If you want to always be safe, use zero.
This account is no longer active. Contact ShadesOfGray for current posts and information. -
ECC to HANA DB Migration: DMO error with invalid tables
Hello Migration Experts,
I am facing an issue during the uptime migration phase in DMO. Some of the export and import jobs are failing, and I see they are failing due to invalid tables. I have checked in HANA; those tables were not imported into the HANA schema.
Should I add those tables as described in the BW First Guidance document?
Since the tables don't exist in the HANA schema, which flag should I use when adding them?
Content from the First Guidance doc:
If you encounter critical or invalid tables, you can use the following workaround:
Edit the file \\server\sapmnt\CIH\SUM\abap\bin\EUCLONEDEFS_ADD.LST (create it if not available)
and add the affected tables there, depending on the option you want to take, e.g.
/BIC/AZSPOB10300 noclone (Table seems not to exist)
TBTCS igncount (Table count will be ignored)
TST01 igncount
DBABARL nocontent (Table doesn’t exist on HANA)
REPOSRC ignlargercount (Table might change during cloning)
I also found OSS Note 1747673 - R3load on SAP HANA: SQL error: SQL syntax error: near ";" which advises replacing DDLHDB.TPL, but my kernel is higher than the note specifies. Can I still try it?
Please advise
Thanks in advance,
GR
Below are some of my error logs
Import Job One
R3load: START OF LOG: 20140906205313
R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#11 $ SAP
R3load: version R7.41/V1.9 [UNICODE]
Compiled Aug 16 2014 00:13:51
R3load -pipe -decluster -i /usr/sap/ECS/SUM/abap/migrate_ut/MIGRATE_UT_00032_IMP.CMD -datacodepage 4103 -dbcodepage 4103 -l /usr/sap/ECS/SUM/abap/migrate_ut/MIGRATE_UT_
00032_IMP.LOG -loadprocedure fast -table_suffix ~ -k 1gegdUM50D801eqteAAv1A94
-------------------- Start of patch information ------------------------
patchinfo (patches.h): (0.048) R3load stops in the dependency loop while creating tasks (note 2044380)
DBSL patchinfo (patches.h): (0.031) New DBCON syntax for HANA (note 1983389)
--------------------- End of patch information -------------------------
process id 17232
(DB) INFO: connected to DB
(DB) INFO: NewDB Kernel version 1.00.80.00.391861
SQLDBC 1.00.82.00.0394270
(GSI) INFO: dbname = "DEV/00 "
(GSI) INFO: vname = "HDB "
(GSI) INFO: hostname = "saphana "
(GSI) INFO: sysname = "Linux"
(GSI) INFO: nodename = "SAPECCBOX"
(GSI) INFO: release = "3.0.13-0.27-default"
(GSI) INFO: version = "#1 SMP Wed Feb 15 13:33:49 UTC 2012 (d73692b)"
(GSI) INFO: machine = "x86_64"
(RTF) ########## WARNING ###########
Without ORDER BY PRIMARY KEY the exported data may be unusable for some databases
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/MIGRATE_UT_00032_IMP.STR has format version 2
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/SAPDOKCLU.STR.logical has format version 2
(DCL) INFO: found logical cluster description for DOKCLU in /usr/sap/ECS/SUM/abap/migrate_ut/SAPDOKCLU.STR.logical
(DB) INFO: loading data in table "DOKCLU" with mass loader for LOBs #20140906205313
(DB) INFO: DOKCLU deleted/truncated
ERROR exec_ddl_stmt: (DB) ERROR: DDL statement failed
(DELETE FROM "DOKTL"WHERE ('HY' < "ID" OR ("ID" = 'HY' AND 'SIMGISPAM_BELEGARTEN' < "OBJECT") OR ("ID" = 'HY' AND "OBJECT" = 'SIMGISPAM_BELEGARTEN' AND 'D' < "LANG
U") OR ("ID" = 'HY' AND "OBJECT" = 'SIMGISPAM_BELEGARTEN' AND "LANGU" = 'D' AND 'E' < "TYP") OR ("ID" = 'HY' AND "OBJECT" = 'SIMGISPAM_BELEGARTEN' AND "LANGU" = 'D' A
ND "TYP" = 'E' AND '0013' < "DOKVERSION")) AND ("ID" < 'TX' OR ("ID" = 'TX' AND "OBJECT" < 'SMP2ALR0000134') OR ("ID" = 'TX' AND "OBJECT" = 'SMP2ALR0000134' AND "LAN
GU" < 'E') OR ("ID" = 'TX' AND "OBJECT" = 'SMP2ALR0000134' AND "LANGU" = 'E' AND "TYP" < 'E') OR ("ID" = 'TX' AND "OBJECT" = 'SMP2ALR0000134' AND "LANGU" = 'E' AND "T
YP" = 'E' AND "DOKVERSION" <= '0001')))
DbSlExecute: rc = 103
Another import Job
(RTF) ########## WARNING ###########
Without ORDER BY PRIMARY KEY the exported data may be unusable for some databases
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/MIGRATE_UT_00063_IMP.STR has format version 2
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical has format version 2
(DCL) INFO: found logical cluster description for TERCL3 in /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical
(DDL) ERROR: no DDL for TERCL3
(build_ddl_stmt).
(IMP) INFO: a failed DROP attempt is not necessarily a problem
(DB) INFO: TERCL3 merged #20140906205315
(DDL) ERROR: no DDL for TERMC3
(build_ddl_stmt).
(IMP) INFO: a failed DROP attempt is not necessarily a problem
DbSl Trace: prepare() of C_0002, rc=1, rcSQL=259
DbSl Trace: PREPARE C_0002 on connection 0, rc=259
ERROR HDBexistsRow: Prepare/Read failed (dbrc=103).
(SQL error 259)
error message returned by DbSl:
invalid table name: Could not find table/view TERMC3 in schema SAPECS: line 1 col 58 (at pos 57)
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical has format version 2
(DCL) INFO: found logical cluster description for TERCL3 in /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical
(SQL) INFO: Searching for SQL file SQLFiles.LST
(SQL) INFO: found SQLFiles.LST
(SQL) INFO: Trying to open SQLFiles.LST
(SQL) INFO: SQLFiles.LST opened
(SQL) INFO: Searching for SQL file SSEXC.SQL
(SQL) INFO: SSEXC.SQL not found
(DB) INFO: TERCL3^0 dropped
(DB) INFO: TERCL3^0 created #20140906205315
(SQL) INFO: Searching for SQL file SDOCU.SQL
(SQL) INFO: SDOCU.SQL not found
ERROR exec_ddl_stmt: (DB) ERROR: DDL statement failed
(ALTER TABLE "TERMC3" DROP CONSTRAINT "TERMC3^0")
DbSlExecute: rc = 103
(SQL error 259)
error message returned by DbSl:
invalid table name: TERMC3: line 1 col 13 (at pos 12)
(IMP) INFO: a failed DROP attempt is not necessarily a problem
ERROR exec_ddl_stmt: (DB) ERROR: DDL statement failed
(ALTER TABLE "TERMC3" ADD CONSTRAINT "TERMC3^0" PRIMARY KEY ( "SPRAS", "TERM", "LINENUMBER" ) )
DbSlExecute: rc = 103
(SQL error 259)
error message returned by DbSl:
invalid table name: TERMC3: line 1 col 13 (at pos 12)
(RDI) INFO: /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical has format version 2
(DCL) INFO: found logical cluster description for TERCL3 in /usr/sap/ECS/SUM/abap/migrate_ut/SAPTERCL3.STR.logical
(DDL) ERROR: no DDL for TERCL3
(build_ddl_stmt).
(IMP) INFO: a failed DROP attempt is not necessarily a problem
(DB) INFO: TERCL3 unloaded #20140906205315
(DDL) ERROR: no DDL for TERMC3
(build_ddl_stmt).
(IMP) INFO: a failed DROP attempt is not necessarily a problem
ERROR exec_ddl_stmt: (DB) ERROR: DDL statement failed
( ALTER TABLE "TERMC3" enable persistent merge )
DbSlExecute: rc = 103
(SQL error 259)
error message returned by DbSl:
invalid table name: TERMC3: line 1 col 56 (at pos 55)
(DB) INFO: disconnected from DB
The result of fixing that table issue is that the migration went ahead and finished both the uptime and downtime runs. Now I am trying to log in to SAP and the system is unable to log me in, as DOKTL doesn't exist at all.
But I see another table called DTELDOKTL in HANA, which is the same as DOKTL but has no entries in it.
I have created a message to SAP, but what should I be doing now?
Can I create this table in HANA?
Can I do an export/import of the table? The table structure should already be in the system for this option, I guess.
Would I be able to replicate this table if I set up SLT?
Please advise
C Mon Sep 8 11:02:30 2014
C *** ERROR => prepare() of C_0341, rc=1, rcSQL=259
[dbsdbsql.cpp 1397]
C {root-id=00505680721D1EE48DE09241702FF193}_{conn-id=00000000000000000000000000000000}_0
C *** ERROR => PREPARE C_0341 on connection 0, rc=259
[dbslsdb.cpp 9138]
C SQLCODE : 259
C SQLERRTEXT : invalid table name: Could not find table/view DOKTL in schema SAPECS: line 1 col 97 (at pos 96)
C sc_p=7f28f48cfac8,no=341,idc_p=7f28f3f87388,con=0,act=0,slen=254,smax=256,#vars=5,stmt=7a45980,table=DOKTL
C stmtid = <0/SAPMSYST /50334994/20140907170329>
C SELECT "ID" , "OBJECT" , "LANGU" , "TYP" , "DOKVERSION" , "LINE" , "DOKFORMAT" , "DOKTEXT" FROM "DOK\
C TL" WHERE "LANGU" = ? AND "ID" = ? AND "OBJECT" = ? AND "TYP" = ? AND "DOKVERSION" = ? ORDER BY "ID"\
C , "OBJECT" , "LANGU" , "TYP" , "DOKVERSION" , "LINE" ;
B ***LOG BZA=> table DOKTL does not exist on database R/3 [dbdbslst 3580]
M ThPerformDeleteThisSession: switch of statistics
D GuiStatus clear generate inline ts >20140908030105,195<
M ***LOG R47=> ThResFreeSession, delete () [thxxmode.c 1078]
M *** WARNING => ThrtGuiDeleteScreen: rscpGetUserLoginLang failed (128), use system language [thrtGuiHandl 469]
Thanks,
GR -
hey guys,
I'm trying to create a trigger which should insert values into a table when another table gets updated, as well as use a sequence to increase an ID column by 1.
My trigger, however, shows as invalid, and Aqua Data Studio shows the following error message:
"2 30 PL/SQL: ORA-00928: missing SELECT keyword"
trigger:
CREATE OR REPLACE TRIGGER "NIKUO"."z_nxpFTEtoCost_T1"
BEFORE INSERT OR UPDATE ON ODF_CA_COSTPLAN
FOR EACH ROW
WHEN (
old.ms_nxp_convert = 1 AND old.ms_nxp_put = 'ms_fte'
BEGIN
INSERT INTO z_nxpFTEtoCost (:new.id, cp_id, date_stamp)
VALUES ((select z_nxpFTEtoCost_S1.nextval into id from dual), :new.ID, SYSDATE());
END;
any ideas?
thanks,
mike
I think this should work:
create or replace trigger "NIKUO"."z_nxpFTEtoCost_T1"
before insert or update
on odf_ca_costplan
for each row
when (old.ms_nxp_convert = 1 and old.ms_nxp_put = 'ms_fte')
begin
  insert into z_nxpftetocost
    ( id, cp_id, date_stamp )
  values
    ( z_nxpftetocost_s1.nextval, :new.id, sysdate );
end;
/
:new.id in the column list should be id
Message was edited by:
RemcoGoris
-
How to trigger DSP cache invalidation and reload?
We have a scheduled job to update the database nightly. I used DSP caching to increase performance since the data only changes during the nightly updates.
Does anyone know how can I trigger the DSP cache invalidation and reload the updated data to DSP cache once the data loading is completed on the database server? (The database is hosted on a different server than DSP.)
Thanks in advance!
Nav
On this page
http://e-docs.bea.com/aldsp/docs25/appdev/ejbclt.html
Bypassing the Data Cache When Using the Mediator API
Data retrieved by data service functions can be cached for quick access. This is known as data caching. (See Configuring the Query Results Cache in the DSP Administration Guide for details.) Assuming the data changes infrequently, it's likely that you'll want to use the cache capability.
You can bypass the data cache by passing the GET_CURRENT_DATA attribute within a function call, as shown in Listing 3-7. GET_CURRENT_DATA returns a Boolean value. As a by-product, the cache is also refreshed.
Listing 3-7 Cache Bypass Example When Using Mediator API
dataServices.customermanagement.CustomerProfile customerProfileDS =
    dataServices.customermanagement.CustomerProfile.getInstance(ctx, appName);
RequestConfig config = new RequestConfig();
config.enableFeature(RequestConfig.GET_CURRENT_DATA);
CustomerProfileDocument customerProfileDoc =
    customerProfileDS.CustomerProfile(params, config); -
Live migration trigger on resource crunch in Hyper-V 2012 or R2
Like VMware DRS, which can trigger migration of a virtual machine from one datastore to another or one host to another - do we have the same kind of mechanism in Hyper-V 2012?
I have five Hyper-V 2012 hosts in a single cluster. Now I want it so that if any host faces a resource crunch of memory, it migrates the virtual machines to another host.
Thanks, Ravinder
Ravi
SCVMM has a feature called Dynamic Optimization.
Dynamic Optimization can be configured on a host group to migrate virtual machines within host clusters with a specified frequency and aggressiveness. Aggressiveness determines the amount of load imbalance that is required to initiate a migration during Dynamic Optimization. By default, virtual machines are migrated every 10 minutes with medium aggressiveness. When configuring frequency and aggressiveness for Dynamic Optimization, an administrator should weigh the resource cost of additional migrations against the advantages of balancing load among hosts in a host cluster. By default, a host group inherits Dynamic Optimization settings from its parent host group.
Dynamic Optimization can be set up for clusters with two or more nodes. If a host group contains stand-alone hosts or host clusters that do not support live migration, Dynamic Optimization is not performed on those hosts. Any hosts that are in maintenance mode are also excluded from Dynamic Optimization. In addition, VMM only migrates highly available virtual machines that use shared storage. If a host cluster contains virtual machines that are not highly available, those virtual machines are not migrated during Dynamic Optimization.
http://technet.microsoft.com/en-us/library/gg675109.aspx
Cheers !
Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.
InsideVirtualization.com -
Installing CF10: in the Configuration and Settings Migration Wizard, at http://127.0.0.1/CFIDE/administrator/enter.cfm,
"ColdFusion has been successfully installed. This wizard will guide you through the remaining server configuration steps and, if applicable, migrate settings from a previous version of ColdFusion.
To guarantee the security of your server, please enter your ColdFusion Administrator password."
- I'm getting the error: Invalid Password. It won't take the password I provided on the previous "Set Admin password" screen. Help?
Hi,
I am having an issue with the URL on the Configuration and Settings Migration Wizard pointing to the following URL:
http://localhost/CFIDE/administrator/enter.cfm
Receiving a message:
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
The following is the information in the error page:
Module: IIS Web Core
Notification: MapRequestHandler
Handler: ISAPI-dll
Error Code: 0x80070002
Requested URL: http://localhost:80/jakarta/isapi_redirect.dll
Physical Path: D:\ColdFusion10\cfusion\wwwroot\jakarta\isapi_redirect.dll
Logon Method: Anonymous
Logon User: Anonymous
I could not find a directory called jakarta.
Any ideas?
Thanks,
Mike -
Invalid trigger on 7.7.06.09
I have migrated a database from SAPDB 7.4.03.32 to MaxDB 7.7.06.09 and have a problem with the following trigger:
CREATE TRIGGER T_TREE_DELETE FOR S_TREE AFTER DELETE EXECUTE (
IF PARENT <> -1
THEN DELETE FROM DBA.S_ENTITY WHERE ETYID IN (SELECT DBA.S_ENTITY.ETYID FROM DBA.S_ENTITY,DBA.S_TREE WHERE (PARENT = :OLD.ETYID) AND (DBA.S_ENTITY.ETYID = DBA.S_TREE.ETYID));
IF PARENT = -1
THEN UPDATE DBA.S_TREE SET PARENT = -1 WHERE PARENT = :OLD.ETYID; DELETE FROM DBA.S_TREE WHERE PARENT = -1;
UPDATE DBA.S_TREE SET NODEORDER = NODEORDER-1 WHERE (PARENT = :OLD.INTERNALUSE) AND (NODEORDER >= :OLD.NODEORDER) AND (TREETYPE = :OLD.TREETYPE) AND (ETYID <> :OLD.ETYID);
When I try to delete an entry in the table S_ENTITY (S_TREE has a foreign key to S_ENTITY with the cascade option) from our application, I get this error:
ERROR: SQLException: [-8037]: TRIGGER T_TREE_DELETE is invalid (rc -9206)
For a test I deleted the last part of the trigger ("UPDATE DBA.S_TREE ...") and now it works, so I guess this is the part which makes trouble.
Here is a part of the kernel trace log:
REQUEST: ascii, normal_swap, 70400-ODB (2 segments, len: 176)
(20.4290 page 20)
parse SEGMENT 1 (1 part, len: 96)
session_sqlmode, user_cmd
command PART (1 argument, size: 126440)
buf(38): 'delete from s_entity where (ETYID = ?)'
dbs SEGMENT 2 (1 part, len: 80, offset: 96)
session_sqlmode, user_cmd
command PART (1 argument, size: 126344, segm_offset: 96)
buf(22): 'CLOSE "JDBC_CURSOR_12"'
>b02get key(28):
FFFF0000 00000047 00410001 0053005F 0045004E 00540049 00540059
b02get root 59552; *** key_not_found ***
>b02get key(28):
00000000 00000002 00410001 0053005F 0045004E 00540049 00540059
b02get root 59552; *** key_not_found ***
>b02get key(28):
FFFF0000 0000000A 00410001 0053005F 0045004E 00540049 00540059
b02get root 59552; ok
>b02get key(20): 00000000 0000043C 00060001 FFFF0000 00000047
b02get root 44664; ok
>b02get key(12): 00000000 0000043C 00660001
b02get root 59552; ok
>b02get key(12): 00000000 0000043C 00670001
b02get root 59552; ok
>b02get key(12): 00000000 0000043C 00140001
b02get root 44664; ok
>b02get key(12): 00000000 000003B6 00670001
b02get root 59552; ok
>b02get key(12): 00000000 000003B6 00150001
b02get root 44664; ok
>b02get key(12): 00000000 000003B6 0008FE01
b02get root 44664; ok
REQUEST: unicode, full_swap, 62000-XCI (1 segment, len: 0)
(20.6789 page 20)
dbs SEGMENT 1 (1 part, len: 1216)
internal, user_cmd
scrollable on
command PART (1 argument, size: 8104)
buf(1158):
' C R E A T E T R I G G E R T _ T R E E _ D E L E T E F '
'O R S _ T R E E A F T E R D E L E T E E X E C U'
>b07cadd key(12): 0F000000 20202020 00B10001
b07cadd root 89384; ok
>b07cadd key(138):
00000000 00000000 00B5004A 00440042 0043005F 00430055 00520053
004F0052 005F0031 00320020 00200020 00200020 00200020 00200020
00200020
b07cadd root 89384; ok
>b07cadd key(12): 0F000000 20202020 00B20000
b07cadd root 89384; ok
>b07cadd key(12): 0F000000 20202020 00C10000
b07cadd root 89384; ok
>b07cadd key(13): 0F000000 20202020 00C10000 01
b07cadd root 89384; ok
>b02get key(24): FFFF0000 0000000A 00410001 0053005F 00540052 00450045
b02get root 59552; ok
>b02get key(12): 00000000 000003B6 00150003
b02get root 44664; ok
>b02get key(12): 00000000 000003B6 00150005
b02get root 44664; ok
>b02logadd key(76):
00000000 00000004 00B70001 00540045 004D0050 00200020
00200020 00200020 00200020 00200020 00200020 00200020
00200020 00200020 00200020
>b01t_create Temp; fileTfnNo: 0; ttfnTempLog
session: 0; fid: 553648128
fn: 1A00 80000000000002F6 000000000021
b01t_create root 59697; ok
>b07cadd key(4): 00000001
b07cadd root 59697; ok
b02logadd root 89384; ok
>b02logadd key(12): FFFD0000 10000000 00B10001
>b07cadd key(4): 00000002
b07cadd root 59697; ok
b02logadd root 89384; ok
>b02logadd key(12): 00000000 00018EC9 00010001
b02logadd root 44664; ok
>b02kb_repl key(12): 00000000 000003B6 00780001
b02kb_repl root 59552; *** key_not_found ***
>b02logadd key(12): 00000000 000003B6 00780001
b02logadd root 59552; ok
>b02kb_repl key(12): 00000000 000003B6 00780101
b02kb_repl root 59552; *** key_not_found ***
>b02logadd key(12): 00000000 000003B6 00780101
b02logadd root 59552; ok
>b02kb_del key(12): 00000000 000003B6 00180001
b02kb_del root 44664; ok
>b02kb_del key(13): 00000000 000003B6 00180001 01
b02kb_del root 44664; ok
>b02kb_del key(13): 00000000 000003B6 00180001 02
b02kb_del root 44664; *** key_not_found ***
>b02logadd key(12): 00000000 000003B6 00180001
b02logadd root 44664; ok
>b02logadd key(13): 00000000 000003B6 00180001 01
b02logadd root 44664; ok
>b02kb_del key(76):
00000000 00000004 00B70001 00540045 004D0050 00200020
00200020 00200020 00200020 00200020 00200020 00200020
00200020 00200020 00200020
>b07cadd key(4): 00000003
b07cadd root 59697; ok
b02kb_del root 89384; ok
>b02first_ql StartKey(8): FFFD0000 10000000
zero Stop Key
b02first_ql root 89384; ok
>b07cadd key(4): 00000004
b07cadd root 59697; ok
>b02del key(12): FFFD0000 10000000 00B10001
b02del root 89384; ok
>b02next_qual StartKey(12): FFFD0000 10000000 00B10001
zero Stop Key
b02next_qual root 89384; *** no_next_record ***
>b02kb_repl key(12): 00000000 000003B6 00150001
b02kb_repl root 44664; ok
>b01t_create Temp; fileTfnNo: 0; ttfnPars
session: 0; fid: 100663296
fn: 1A00 80000000000002F7 000000000006
b01t_create root 74585; ok
>b01t_reset Temp; fileTfnNo: 0; ttfnNone
session: 0; fid: 0
fn: 1A00 0000000000000000 000000000000
>b01t_create Temp; fileTfnNo: 0; ttfnNone
session: 0; fid: 0
fn: 1A00 80000000000002F8 000000000000
b01t_create root 89473; ok
b01t_reset root 89473; ok
>b02get key(12): 00000000 000003B6 00080101
b02get root 44664; ok
RECEIVE: unicode, full_swap, 70706-XCI (1 segment, len: 0)
(21.4926 page 21)
parse SEGMENT 1 (2 parts, len: 400)
internal, user_cmd
scrollable on
command PART (1 argument, size: 16296)
buf(318):
' D E L E T E F R O M D B A . S _ E N T I T Y W H E R E '
' E T Y I D I N ( S E L E C T D B A . S _ E N T I'
appl_param_desc PART (1 argument, size: 15960)
1. fixed(38)
REQUEST: unicode, full_swap, 70706-XCI (1 segment, len: 0)
(21.5226 page 21)
parse SEGMENT 1 (2 parts, len: 400)
internal, user_cmd
scrollable on
command PART (1 argument, size: 16296)
buf(318):
' D E L E T E F R O M D B A . S _ E N T I T Y W H E R E '
' E T Y I D I N ( S E L E C T D B A . S _ E N T I'
appl_param_desc PART (1 argument, size: 15960)
1. fixed(38)
opmsg: AK CACHE 00000000000000000081FFFF0000000000000000
RECEIVE: unicode, full_swap, 70706-XCI (1 segment, len: 312)
(21.5610 page 21)
*** -9206 / RETURN SEGMENT 1 (1 part, len: 312)
delete_fc, errpos: 159, sqlstate: ''S9206'
errortext PART (1 argument, size: 16296)
buf(256):
' S y s t e m e r r o r : A K D u p l i c a t e c a t '
'a l o g i n f o r m a t i o n : 0 0 0 0 0 0 0 0 0 0 0'
>b01destroy Temp; fileTfnNo: 0; ttfnPars
session: 0; fid: 100663296
fn: 1A00 80000000000002F7 000000000006
b01destroy root 74585; ok
>b01destroy Temp; fileTfnNo: 0; ttfnUsage
session: 0; fid: 167772160
fn: 1A00 80000000000002F8 00000000000A
b01destroy root 89473; ok
>b07cadd key(12): FF00002C 010A0020 00890000
b07cadd root 89384; ok
>b01t_create Temp; fileTfnNo: 0; ttfnCacheRollback
session: 0; fid: 520093696
fn: 1A00 80000000000002F9 00000000001F
b01t_create root 104361; ok
>b07cadd key(12): FF00002C 010A0020 00890000
b07cadd root 104361; ok
>b02prev key(4): FFFFFFFF
b02prev root 59697; *** key_not_found ***
>b07cadd key(12): FFFD0000 10000000 00B10001
b07cadd root 89384; ok
>b07cdel key(4): 00000004
b07cdel root 59697; ok
>b02prev key(4): 00000004
b02prev root 59697; *** key_not_found ***
>b07cadd key(76):
00000000 00000004 00B70001 00540045 004D0050 00200020 00200020
00200020 00200020 00200020 00200020 00200020 00200020 00200020
00200020
b07cadd root 89384; ok
>b07cdel key(4): 00000003
b07cdel root 59697; ok
>b02prev key(4): 00000003
b02prev root 59697; *** key_not_found ***
>b07cdel key(12): FFFD0000 10000000 00B10001
b07cdel root 89384; ok
>b07cdel key(4): 00000002
b07cdel root 59697; ok
>b02prev key(4): 00000002
b02prev root 59697; *** key_not_found ***
>b07cdel key(76):
00000000 00000004 00B70001 00540045 004D0050 00200020 00200020
00200020 00200020 00200020 00200020 00200020 00200020 00200020
00200020
b07cdel root 89384; ok
>b07cdel key(4): 00000001
b07cdel root 59697; ok
>b02prev key(4): 00000001
b02prev root 59697; *** no_prev_record ***
>b02get key(12): 00000000 000003B6 00150001
b02get root 44664; ok
>b02repl key(12): 00000000 000003B6 00150001
b02repl root 44664; ok
>b02get key(13): 00000000 000003B6 00180001 01
b02get root 44664; ok
>b02del key(13): 00000000 000003B6 00180001 01
b02del root 44664; ok
>b02get key(12): 00000000 000003B6 00180001
b02get root 44664; ok
>b02del key(12): 00000000 000003B6 00180001
b02del root 44664; ok
>b02add key(13): 00000000 000003B6 00180001 01
b02add root 44664; ok
>b02add key(12): 00000000 000003B6 00180001
b02add root 44664; ok
>b02get key(12): 00000000 000003B6 00780101
b02get root 59552; ok
>b02del key(12): 00000000 000003B6 00780101
b02del root 59552; ok
>b02get key(12): 00000000 000003B6 00780001
b02get root 59552; ok
>b02del key(12): 00000000 000003B6 00780001
b02del root 59552; ok
>b02get key(12): 00000000 00018EC9 00010001
b02get root 44664; ok
>b02del key(12): 00000000 00018EC9 00010001
b02del root 44664; ok
>b07cnext zero key
b07cnext root 104361; *** key_not_found ***
>b07cdel key(12): FF00002C 010A0020 00890000
b07cdel root 89384; ok
opmsg: AK CACHE 00000000000000000081FFFF0000000000000000
RECEIVE: unicode, full_swap, 62000-XCI (1 segment, len: 312)
(22.2148 page 22)
*** -9206 / RETURN SEGMENT 1 (1 part, len: 312)
create_trigger_fc, errpos: 253, sqlstate: ''S9206'
errortext PART (1 argument, size: 8104)
buf(256):
' S y s t e m e r r o r : A K D u p l i c a t e c a t '
'a l o g i n f o r m a t i o n : 0 0 0 0 0 0 0 0 0 0 0'
===== T2 =====Logwr ================================22.2499 page 22
*b15write log ---------------> 416; pno log 1: 274
*b15write log ---------------> 417; pno log 1: 274
I guess the database tries to create the trigger again, but why? What goes wrong here?
Can someone help me out with the trigger? The trigger worked fine in SAPDB 7.4.
Thanks,
Thomas
I still have a question about the trigger.
We have migrated a database from SAPDB 7.4.3 to 7.7 which already contains several triggers.
The migration process is as follows:
I made a backup of the old database. On the new database I run these commands:
dbmcli -d <dbname> -u dbm,<password> medium_put completeF "<backup-file>" FILE DATA 0 8 YES
dbmcli -d <dbname> -u dbm,<password> db_admin
dbmcli -d <dbname> -u dbm,<password> db_activate dba,<password>
dbmcli -d <dbname> -u dbm,<password> -uUTL -i "recover.param"
dbmcli -d <dbname> -u dbm,<password> db_online
// After 'db_online' we get an error that the WebDAV tables are invalid,
// so we delete all WebDAV tables because we don't need them
dbmcli -d <dbname> -u dbm,<password> -uSQL dba,<password> -i "drop_webdav.param"
dbmcli -d <dbname> -u dbm,<password> load_systab -u dba,<password> -ud domain
dbmcli -d <dbname> -u dbm,<password> sql_recreateindex
dbmcli -d <dbname> -u dbm,<password> medium_delete completeF
file "drop_webdav.param":
sql_execute DROP TABLE WEBDAV_CONTAINER
sql_execute DROP TABLE WEBDAV_INODE
sql_execute DROP TABLE WEBDAV_NAME_SPACE
sql_execute DROP TABLE WEBDAV_PROPERTY
sql_execute DROP TABLE WEBDAV_PROPERTY_MANAGEMENT
file "recover.param":
db_admin
db_connect
db_activate recover completeF data
When we try to delete an entry in one table, the delete trigger reports:
[12:38:33] [DeleteUserReqHandler::handleRequest] ERROR: SQLException: [-8037]: TRIGGER T_DELETE_USER is invalid (rc -9206)
[12:38:33] [DeleteUserReqHandler::handleRequest] ERROR: SQLState: I8037
[12:38:33] [DeleteUserReqHandler::handleRequest] ERROR: VendorError: -8037
This is the trigger, but the problem occurs with other triggers too:
CREATE TRIGGER T_DELETE_USER FOR S_USER AFTER DELETE EXECUTE(
DELETE FROM DBA.S_GROUP WHERE :OLD.USRID = GRPID AND :OLD.USERNAME = NAME;
Is there an easy way to update the triggers during the migration process, like load_systab? Or is there a command to convert the triggers so that MaxDB 7.7 can handle them? Because when recreated on MaxDB 7.7 they work correctly.
Thanks for your help.
Best regards, Thomas -
ORA-22275: invalid LOB locator specified on trigger
I have a trigger which copies a BLOB, on insert into one table, to another table.
CREATE OR REPLACE TRIGGER SWZTPRO.TSWTMPI_BEFORE_INSERT
BEFORE INSERT
ON SWZTPRO.TO_TSWTMPI
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
  discriminator TO_TSWCRUL.BTC_DIS%TYPE;
BEGIN
  discriminator := :NEW.BTC_DIS;
insert into .....
If using AFTER INSERT, this trigger works, but if the trigger fails for any reason, the client does not receive the error.
If using BEFORE INSERT, it fails:
insert into table, use before insert trigger
ORA-22275: invalid LOB locator specified
ORA-06512: at "SWZTPRO.TSWTMPI_BEFORE_INSERT", line 108
ORA-22275: invalid LOB locator specified
ORA-04088: error during execution of trigger 'SWZTPRO.TSWTMPI_BEFORE_INSERT'
Any help would be appreciated.
I have also used a variation of the trigger to do an INSTEAD OF on insert on a view, and I get the following error:
ORA-25008: no implicit conversion to LOB datatype in instead-of trigger -
Logon Trigger should not be invalidated
Hi,
I have a logon trigger. When the objects used by it are changed (DDL), the trigger becomes invalid and it no longer allows end users to log in.
Though I'll make sure that triggers are disabled before any DDL to the objects used by this trigger, I want to know whether there is any way to stop this invalidation, because we never know when these objects may be changed. I want to avoid the trigger being invalidated and blocking users from logging in.
Is there any way to do this?
pramod
Maybe this will work.
Say your current trigger_body is this:
begin
<lots of plsql and sql stuff here>
end;
You could change that into:
begin
execute immediate 'begin <lots of plsql and sql stuff here> end;';
end;
That way:
1) there is no direct dependency between <lots of plsql and sql stuff here> and your (new) trigger code, so the trigger object won't get invalidated anymore.
2) the trigger always submits an anonymous PL/SQL block (via execute immediate), i.e. not a stored PL/SQL object that has a STATUS. The PL/SQL block would probably get an automatic recompile, I think.
Worth a try.
Toon
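A minimal sketch of this idea for a logon trigger. Assumptions: the package APP_AUDIT_PKG and its LOG_LOGIN procedure are hypothetical names, and creating an AFTER LOGON ON DATABASE trigger requires the ADMINISTER DATABASE TRIGGER privilege:

```sql
-- Sketch only: APP_AUDIT_PKG.LOG_LOGIN is a hypothetical procedure.
-- Because the package is referenced only inside a string literal, the
-- trigger has no compile-time dependency on it and stays VALID even
-- when the package is recompiled or replaced.
CREATE OR REPLACE TRIGGER logon_audit_trg
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'begin app_audit_pkg.log_login; end;';
END;
/
```

Note that if the dynamic block itself fails at run time, the logon still raises an error, so you may also want an exception handler in the trigger that logs (rather than propagates) failures, to avoid blocking logins.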
PS. You might want to use the new quoting mechanism (10g and later) if your trigger body contains a lot of single quotes. So the new trigger body:
begin
execute immediate q'{begin <lots of plsql and sql stuff here> end;}';
end; -
Trigger invalidations, how to detect cause?
Hello,
is it possible to somehow detect what causes object (PKG/PRC/FNC/TRG) invalidations?
Application users don't have rights to compile the objects. After invalidation, an object can be successfully recompiled with an ALTER ... COMPILE command. For example, we use an after-logon trigger for application users; this trigger sometimes becomes invalid for no obvious reason, and users are then unable to log in to the database because they hit "permission denied" when trying to compile the trigger. Any hints on how to detect the cause of these invalidations?
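One way to find out what is issuing the invalidating DDL is a database-level DDL audit trigger. A sketch follows; the table name DDL_AUDIT_LOG is made up for illustration, and creating the trigger requires the ADMINISTER DATABASE TRIGGER privilege:

```sql
-- Hypothetical audit table for recording DDL events.
CREATE TABLE ddl_audit_log (
  event_time   DATE,
  os_user      VARCHAR2(100),
  db_user      VARCHAR2(30),
  ddl_event    VARCHAR2(30),
  object_owner VARCHAR2(30),
  object_name  VARCHAR2(128)
);

-- Records who issued which DDL against which object; the ora_* event
-- attribute functions are available in database-level DDL triggers.
CREATE OR REPLACE TRIGGER ddl_audit_trg
AFTER DDL ON DATABASE
BEGIN
  INSERT INTO ddl_audit_log
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'SESSION_USER'),
          ora_sysevent,
          ora_dict_obj_owner,
          ora_dict_obj_name);
END;
/
```

Comparing the audit rows against the objects the invalidated trigger depends on (DBA_DEPENDENCIES) should then point at the culprit.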
Thank you,
D.
Hm, what if I found out that the objects with the most recent last-DDL time were the following:
SYS.DBMS_STANDARD
SYS.DICTIONARY_OBJ_NAME
SYS.DICTIONARY_OBJ_TYPE
SYS.STANDARD -
Convert (SQL Server to Oracle): a trigger question
In SQL Server, the trigger is written like this:
CREATE Trigger OnInsertBbname
on dbo.bbname
for insert
as
declare @bbname varchar(20)
declare @Sql varchar(1000)
select @bbname = bbname from inserted
if Not Exists( select * from Sysobjects where id = object_id(N'[dbo].['+@bbname+']') and objectproperty(id,'IsUserTable')=1 )
begin
set @Sql = 'Create Table temptable (temp varchar(10))'
exec(@Sql)
end
question:
how to convert this to Oracle?
You cannot execute DDL in a trigger, since Oracle issues an implicit commit before and after DDL statements and you are not allowed to commit in a trigger. If you really need to create a table in response to inserting a row into this table, you would need to do so asynchronously, either by scheduling a job or by queueing a message and having the job/queue consumer create the table.
I would seriously question the need to create the table, though. While individual sessions can create tables in SQL Server, temporary table definitions in Oracle are always visible to all sessions, though the data is visible only to individual sessions.
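A sketch of the asynchronous approach described above, using DBMS_JOB (names follow the original example; this is illustrative, not a recommendation to generate DDL from user data):

```sql
-- The trigger only submits a job; the submission is part of the
-- current transaction and the job runs after the insert commits, so
-- the implicit commit of the CREATE TABLE never happens inside the
-- trigger itself.
CREATE OR REPLACE TRIGGER on_insert_bbname
AFTER INSERT ON bbname
FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Caution: :NEW.bbname is concatenated straight into DDL here;
  -- validate it (e.g. with DBMS_ASSERT in later releases) in real code.
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'begin execute immediate ''create table ' || :NEW.bbname ||
            ' (temp varchar2(10))''; end;');
END;
/
```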
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Migration sql server 2000 to oracle 9i
Hi
I am migrating SQL Server 2000 to Oracle 9i.
When I capture the Microsoft SQL Server database it gives an error:
oracle.dbtools.metadata.persistence.PersistenceException: ORA-04098: trigger 'MIGRATIONS.MD_PROJECTS_TRG' is invalid and failed re-validation
I tried it again and it started, but it didn't stop until I clicked Cancel or Close.
What is wrong?
Thanks
Hi,
You are hitting a known issue with a repository created in 9.2.
To show this is the case do the following -
SQL> alter trigger MD_PROJECTS_TRG compile ;
Warning: Trigger altered with compilation errors.
SQL> show errors
Errors for TRIGGER MD_PROJECTS_TRG:
LINE/COL ERROR
3/9 PL/SQL: Statement ignored
3/20 PLS-00905: object MIGREP.MD_META is invalid
Compiling md_meta -
SQL> alter package md_meta compile ;
Warning: Package altered with compilation errors.
SQL> show errors
Errors for PACKAGE MD_META:
LINE/COL ERROR
0/0 PLS-00908: The stored format of MD_META is not supported by this
release
21/4 PLS-00114: identifier 'PUTBAIFZKA3IHSJ5AC4ZXWYAWG41KN' too long
21/4 PLS-00707: unsupported construct or internal error [2702]
SQL>
==
If you get this then the only alternative is to create the SQL*Developer repository in a 10.2 database.
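To confirm the diagnosis, you can list the invalid objects in the repository schema together with their compilation errors (a sketch; replace MIGRATIONS with your repository schema name, or use the USER_ views when connected as that user):

```sql
-- All invalid objects in the repository schema.
SELECT object_type, object_name
FROM   dba_objects
WHERE  owner = 'MIGRATIONS'
AND    status = 'INVALID';

-- Their compilation errors, in order.
SELECT name, type, line, position, text
FROM   dba_errors
WHERE  owner = 'MIGRATIONS'
ORDER  BY name, type, sequence;
```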
Regards,
Mike -
Migration plugin issues - MD_META package won't compile (9i)
I downloaded the SQL Server migration plugin yesterday and, after working through a few issues, finally got to the point where I could create a repository (in a new schema I named "MIGRATIONS") and connect to a SQL Server 2000 database. I selected a specific SQL Server DB, right-clicked, and selected "Capture Schema." It completed very quickly (about 2 seconds - too quickly to have actually captured anything), I clicked OK, and the Migration Log displayed the following message:
oracle.dbtools.metadata.persistence.PersistenceException: ORA-04098: trigger 'MIGRATIONS.MD_PROJECTS_TRG' is invalid and failed re-validation.
Upon closer inspection of the MIGRATIONS schema, I discovered several packages with red X's (e.g., MD_META). When I try to compile them, I get the following errors:
Error(1): PLS-00908: The stored format of MD_META is not supported by this release
Error(22,2): PLS-00114: identifier 'O7PUTBAIFZKA3IHSJ5AC4ZXWYAWG41' too long
Error(22,2): PLS-00707: unsupported construct or internal error [2702]
FWIW, the version of Oracle we're using is 9i. Thanks in advance for any help you can provide.
I've read there is a patch with 9i wrappers around, but I can't seem to find it. Can you point me in a direction? I've already posted a message on the APEX site.
Thanks for any assistance.