Error in LSMW execution. Table Lock overflow
Hi experts,
I am running an LSMW with a huge flat file.
IDocs are created for all the records, but 7 records got an error. The error is
"Message Type A: Lock table overflow. Program error: Incorrect
values in interface parameters".
I don't know what kind of error this is.
I tried to figure it out, but with no luck.
Could anyone tell me the reason for this error, and the solution please?
Kindly reply, it's very urgent. It's in the QA system and needs to move to production.
Thanks in advance,
KK
Talk to your Basis guy; he can increase the size of the lock table via the profile parameters.
Note: 746138
Symptom
This note describes the subsequent analysis performed to determine the cause of a lock table overflow.
Other terms
enque, enqueue, lock table overflow, ENQ_OVERFLOW, 16, EnquServerException
Reason and Prerequisites
The maximum number of locks in the lock table is subject to an upper limit that is set by profile parameters. The enque/table_size parameter determines the size of the lock table in KBs. You can use transaction SM12 -> Extras -> Statistics to determine both the maximum number and current number of entries.
A lock table overflow may occur if:
the lock table configuration is too small
an application sets lots of locks
the update hangs and, as a result, numerous locks inherited by the update task exist
Up to now, subsequent analysis could not be performed to determine which lock types and which lock owners mainly contributed to the lock table overflow.
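If the analysis shows the lock table is simply configured too small, the fix is a profile change by the Basis team. As a hedged illustration (the parameter name is from the note above, but the value shown is arbitrary and must be sized for your own workload):

```text
# Instance profile (e.g. <SID>_DVEBMGS00_<hostname>) - size in KB, value illustrative only
enque/table_size = 32000
```

After changing the profile, the instance has to be restarted; SM12 -> Extras -> Statistics then shows the new maximum number of entries.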
Similar Messages
-
Hi All,
I am uploading material master data via the standard direct input program RMDATIND. I am uploading 10,000 records per execution, but the program terminates with the error log 'Lock table overflow'.
In parallel, I am monitoring SM12. During execution I see the lock entries increasing for table 'EANQ'.
The program terminates after about 3,500 lock entries appear in SM12; after that, all of them are deleted.
EANQ is the table for setting up lock objects for EANs.
Can anyone help me in this regard?
Regards,
Rohit
Hi,
While executing RMDATIND, on the first screen you can see "Transactions per commit unit" under General data.
Put the cursor in that field, press F4, and read the description below:
No. of transactions per commit unit for data transfer
You use this number to define how many transactions are to be processed
in a physical step.
o The higher the number, the less frequently the system has to access
the database, increasing, however, the size of internal tables.
o The smaller the number, the more frequently the system needs to
access the database, reducing, however, the size of internal tables.
Procedure
By changing this number, you may be able to improve performance.
Unfortunately, it is not possible to make any general recommendation
regarding size. However, 500 is appropriate.
So SAP suggests 500 records per commit unit.
You can also try values like 3000, 3500, 4000, etc.,
check at which value the upload program fails, and then use the largest number of records
per commit unit that works.
Thx
Raju -
Hi,
During performance monitoring, we observed that a lock table overflow occurred for transaction VT01N (shipment creation) in the production system on 28-May-08 at 15:30:10 system time for user ID XXXXXXX. In the past two months, the lock table overflow occurred twice, once on 11-Apr-08 for user ID YYYYYY and once on 28-May-08 for user ID XXXXXXXX, both for transaction VT01N.
Can anyone please tell me the reasons and the corrective measures needed to prevent this issue?
Look at SAP Note 746138 for analysis.
Also look at Note 660133. -
Scheduling error causing - ORA-00942: table or view does not exist
Hello
I have the following problem. I created a scenario that is supposed to run every hour; it has an interface that moves a number of records from one table to another (the number of records is not fixed; it varies). The problem arises when the amount of data to move is so big that the hour between executions is not enough, and the scenario is still running when the next scheduled execution of the same scenario starts. When the second process reaches the loading step, it fails with the error ODI-1228, caused by java.sql.BatchUpdateException: ORA-00942: table or view does not exist. The scenario fails and logs this error. I think the error is due to the staging table being locked by the first execution of the scenario.
I don't want to change the interval between executions, because it is usually enough. I just want to avoid starting an execution of a scenario while another execution of the same scenario is still running. Is there any way to do this?
Thanks
LJ
Hello,
Thanks for your answer. I already have the "Interval between repetitions" property set to 1 hour, but when one execution takes more than one hour the scheduler starts another, so I have two processes accessing the same table, and that is when I get a concurrency error on the staging table.
How can I ensure that I won't have two or more executions of the same scenario running at the same time?
Thanks,
LJ -
Error Maintaining multi-lingual tables.
Hello, for the past few days I have been stuck at this error and can't find the solution, so I am bringing my problem here, hoping for an answer.
Platform: Red Hat Enterprise Linux 5
I am on R12.1.3 and the database is 11g Release 2.
I need to install the Finnish language on EBS, so I licensed Finnish in EBS.
As mentioned in the subject, my problem lies in maintaining the multi-lingual tables.
After I run adadmin and select "Maintain multi-lingual tables", everything is fine until the last jobs, where 3 workers fail. Here is the error:
sqlplus -s APPS/***** @/u01/ar121/VIS/apps/apps_st/appl/ibc/12.0.0/sql/IBCNLINS.sql
Connected.
PL/SQL procedure successfully completed.
MESG
LANGUAGE=AMERICAN
PACKAGE=IBC_CITEM_VERSIONS_PKG
SQLERRM=ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in textindexmethods.ODCIIndexInsert
ORA-20000: Oracle Text error:
DRG-10607: index meta data is not ready yet for queuing DML
DRG-50857: oracle error in drdmlv
ORA-01426: numeric overflow
ORA-30576: ConText Option dictionary loading error
MESG
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 752
select to_date('ERROR')
ERROR at line 1:
ORA-01858: a non-numeric character was found where a numeric was expected
And here is the worker fail
ATTENTION: All workers either have failed or are waiting:
FAILED: file IBCNLINS.sql on worker 1.
FAILED: file CSNLINS.sql on worker 2.
FAILED: file IRCNLINS.sql on worker 3.
The worker error is the same on all three workers; only the script name changes.
I am pretty new at EBS, and a newbie at DB administration too, so don't be too hard on me. Thx
Was this instance upgraded from 11i and a 10g database? If so, a step in the upgrade may have been missed. Pl see this MOS Doc
Applying The Patch 6678700 Worker 1 Failed: File Cskbcat.Ldt. ERRORS: ORA-20000: Oracle Text error: DRG-50857: oracle error in textindexmethods.ODCIIndexUpdate, DRG-13201: KOREAN_LEXER is desupported (Doc ID 1333659.1)
If you are using 11.2.0.3, MOS Doc 1386945.1 (Oracle Text Release 11.2.0.3.0 Mandatory Patches) may be applicable
HTH
Srini -
Timesten Table Locking - ttxactadmin
Hi,
Point number 1.
Trace file output
15:42:41.961 10667 SQL 2L 13C 7866P Preparing: {CALL ttOptSetFlag('RowLock', 0)}
15:42:41.961 10668 SQL 3L 13C 7866P Executing: {CALL ttOptSetFlag('RowLock', 0)}
15:42:41.961 10669 SQL 2L 13C 7866P Preparing: {CALL ttOptSetFlag('TblLock', 1)}
15:42:41.961 10670 SQL 3L 13C 7866P Executing: {CALL ttOptSetFlag('TblLock', 1)}
15:44:11.013 246101 SQL 3L 1C 23864P Opening: SELECT SMST_SHORT_SELL_FLG, NVL(SM.SMST_INTRADAY_ALLWED_FLG, 'N'), NVL(SM.SMST_MAR_ALLOWD_FLG, 'N'), NVL(SM.SMST_T2_ALLOW_FLG, 'N'), NVL(SM.SMST_COVER_ALLOW_FLG, 'N'), DECODE(:B3 , :B2 , NVL(SM.SMST_RCBM_BASKET_ID, 'NO_BKT'), NVL(SMST_EBM_BASKET_ID, 'NO_BKT')), SM.SMST_ISIN_CODE, SM.SMST_OPEN_POSITION, SM.SMST_EXPOSURE_LIMIT FROM SECURITY_MASTER SM WHERE SMST_SECURITY_ID = :B1
15:44:11.013 246102 SQL 3L 10C 8041P Executing: update ttrep.reppeers set sendlsnhigh = :h, sendlsnlow = :l, reptableslsnhigh = :rh, reptableslsnlow = :rl, commit_timestamp = :ct, commit_seqnum = :cs, timesend = :t where replication_name = :rname and replication_owner = :rowner and tt_store_id = :mid and subscriber_id = :sid
15:44:11.013 246103 SQL 3L 1C 23864P execEnvGet: Allocated new env 8412790936 due to size reqd, nr=52, sdsz=34343
15:44:11.013 246104 SQL 3L 1C 23864P Opening: SELECT NVL(SUM(TONE_CURR_TRADE_QTY),0), NVL(SUM(TONE_BUY_ORD_QTY),0), NVL(SUM(TONE_SELL_ORD_QTY),0) FROM TRADE_ORD_NET_EXPOSURE WHERE TONE_ENTITY_ID = :B6 AND TONE_SECURITY_ID = :B5 AND TONE_EXCH_ID = :B4 AND TONE_ACC_TYPE = :B3 AND TONE_PRODUCT_ID = :B2 AND TONE_DATE = :B1
15:44:11.014 246105 SQL 3L 62C 23943P execEnvGet: Allocated new env 8413003648 due to size reqd, nr=47, sdsz=18633
15:44:11.014 246106 SQL 3L 62C 23943P Opening: select nvl(RQ.RQ_ORDER_NO,0) ,nvl(RQ.RQ_ORD_SERIAL_NO,0) ,trim(RQ.RQ_BUY_SELL_IND) ,trim(RQ.RQ_CLIENT_ID) ,to_number(trim(RQ.RQ_USER_ID)) ,trim(RQ.RQ_ENTITY_ID) ,trim(RQ.RQ_SECURITY_ID) ,trim(RQ.RQ_EXCH_ID) ,RQ.RQ_SEQ_NO ,nvl(RQ.RQ_D2C1_FLAG,'Y') ,nvl(RQ.RQ_FI_RETAIL_FLG,'L') ,nvl(RQ.RQ_OIB_INT_REF_ID,0) ,RQ.RQ_SOURCE_FLAG ,RQ_SESSION_TYPE ,RQ_PRODUCT_ID ,RQ_GROUP_ID ,RQ_HANDL_INST ,RQ_SETTLEMENT_TYPE ,RQ_CLIENT_SUB_TYPE ,RQ_ALGO_OI_NUM into :b1,:b2,:b3,:b4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12,:b13,:b14,:b15,:b16,:b17,:b18,:b19,:b20 from REQUEST_QUEUE RQ where (((RQ.RQ_EXCH_ORDER_NO=:b21 and RQ
15:44:11.014 246107 SQL 3L 1C 23864P Opening: SELECT RELD_LMT_FLG FROM RMS_ENTITY_LIMIT_DTLS WHERE RELD_EM_ENTITY_ID = :B2 AND RELD_SEGMENT_TYPE = 'C' AND RELD_EXM_EXCH_ID = 'ALL' AND RELD_ACC_TYPE = :B1
15:44:11.014 246108 SQL 3L 1C 23864P Opening: SELECT SEM_VAR_PERCENTAGE, SEM_EXCH_SPCL_MAR_PCT FROM SECURITY_EXCH_MAP S WHERE SEM_SMST_SECURITY_ID = :B2 AND SEM_EXM_EXCH_ID = :B1 AND SEM_STATUS = 'A' AND SEM_DERIVATIVE_FLG = 'N'
15:44:11.014 246109 SQL 3L 62C 23943P Opening: select OS_SEQ_NO into :b1 from ORDER_SEQ where OS_ORDER_NO=:b2
15:44:11.014 246110 SQL 3L 1C 23864P execEnvGet: Allocated new env 8412793368 due to size reqd, nr=108, sdsz=2400
15:44:11.014 246111 SQL 3L 1C 23864P Opening: SELECT SUM(NVL(TONE_TOT_NET_EXP, 0)), SUM(NVL(TONE_CURR_NET_EXP, 0)), SUM(NVL(TONE_BUY_ORD_QTY, 0)), SUM(NVL(TONE_SELL_ORD_QTY, 0)), SUM(NVL(TONE_CURR_TRADE_QTY, 0)), MAX(TONE_MARGIN_PCT), SUM(TONE_BUY_ORD_VAL), SUM(TONE_SELL_ORD_VAL), SUM(T.TONE_AVG_TRANS_PRICE), SUM(TONE_AVG_CUM_BUY_AMT), SUM(TONE_AVG_CUM_SELL_AMT), SUM(TONE_BUY_EXPOSURE), SUM(TONE_SELL_EXPOSURE), SUM(TONE_CURR_TRADE_VAL), SUM(TONE_BOOKED_PROFIT) FROM TRADE_ORD_NET_EXPOSURE T WHERE TONE_ENTITY_ID = :B7 AND TONE_EXCH_ID = :B6 AND TONE_SECURITY_ID = :B5 AND TONE_TCURR = :B4 AND TONE_ACC_TYPE = :B3 AND TONE_PRODUCT_ID = :B2 AND
15:44:11.014 246112 SQL 3L 62C 23943P execEnvGet: Allocated new env 8411217112 due to size reqd, nr=121, sdsz=5916
15:44:11.014 246113 SQL 3L 62C 23943P Opening: select ORD_ORDER_NO ,ORD_SERIAL_NO ,LTRIM(RTRIM(ORD_SEM_SMST_SECURITY_ID)) ,ORD_BTM_EMM_MKT_TYPE ,ORD_EXCH_ID ,LTRIM(RTRIM(ORD_EPM_EM_ENTITY_ID)) ,nvl(ORD_EXCH_ORDER_NO,0) ,nvl(ORD_CLIENT_ID,'0') ,ORD_BUY_SELL_IND ,to_char(ORD_ENTRY_DATE,'DD-MM-YYYY HH24:MI:SS') ,to_char(ORD_ORDER_TIME,'DD-MM-YYYY HH24:MI:SS') ,to_char(ORD_ORIGINAL_TIME,'DD-MM-YYYY HH24:MI:SS') ,ORD_QTY_ORIGINAL ,ORD_ORDER_PRICE ,nvl(ORD_TRIGGER_PRICE,0) ,ORD_DISC_QTY_FLG ,to_char(ORD_GTC_FLG) ,to_char(ORD_DAY_FLG) ,to_char(ORD_IOC_FLG) ,to_char(ORD_MIN_FILL_FLG) ,to_char(nvl(ORD_MKT_FLG,0)) ,to_char(ORD_STOP_LOSS_FLG) ,to_cha
15:44:11.014 246114 SQL 3L 1C 23864P Executing: UPDATE TRADE_ORD_NET_EXPOSURE SET TONE_BUY_ORD_QTY = TONE_BUY_ORD_QTY + :B17 , TONE_BUY_ORD_VAL = TONE_BUY_ORD_VAL + :B16 , TONE_SELL_ORD_QTY = TONE_SELL_ORD_QTY + :B15 , TONE_SELL_ORD_VAL = TONE_SELL_ORD_VAL + :B14 , TONE_CURR_NET_EXP = TONE_CURR_NET_EXP + :B13 , TONE_TOT_NET_EXP = TONE_TOT_NET_EXP + :B12 , TONE_TOT_BROKERAGE = TONE_TOT_BROKERAGE + :B11 , TONE_BUY_EXPOSURE = TONE_BUY_EXPOSURE + :B10 , TONE_SELL_EXPOSURE = TONE_SELL_EXPOSURE + :B9 WHERE TONE_ENTITY_ID = :B8 AND TONE_EXCH_ID = :B7 AND TONE_SECURITY_ID = :B6 AND TONE_TCURR = :B5 AND TONE_PRODUCT_ID = :B4 AND TONE_ACC_TYPE = :B3
15:44:11.014 246115 SQL 3L 1C 23864P execEnvGet: Allocated new env 8412792952 due to size reqd, nr=62, sdsz=1994
15:44:11.014 246116 SQL 3L 10C 8041P Executing: update ttrep.reppeers set sendlsnhigh = :h, sendlsnlow = :l, reptableslsnhigh = :rh, reptableslsnlow = :rl, commit_timestamp = :ct, commit_seqnum = :cs, timesend = :t where replication_name = :rname and replication_owner = :rowner and tt_store_id = :mid and subscriber_id = :sid
15:44:11.014 246117 SQL 3L 1C 23864P Executing: UPDATE RMS_ENTITY_LIMIT_DTLS R SET RELD_RTO_EXP = NVL(RELD_RTO_EXP, 0) + :B12 , RELD_NE_EXP = NVL(RELD_NE_EXP, 0) + :B11 , RELD_MAR_UTILIZATION = NVL(RELD_MAR_UTILIZATION, 0) + :B5 , RELD_BROKERAGE_AMT = NVL(RELD_BROKERAGE_AMT, 0) + :B10 , RELD_BUY_EXP = R.RELD_BUY_EXP + :B9 , RELD_SELL_EXP = R.RELD_SELL_EXP + :B8 , RELD_EMARGIN_UTILIZATION = RELD_EMARGIN_UTILIZATION + DECODE(:B7 , :B6 , :B5 , 0) WHERE RELD_EM_ENTITY_ID = :B4 AND R.RELD_EXM_EXCH_ID = :B3 AND R.RELD_SEGMENT_TYPE = 'E' AND R.RELD_ACC_TYPE = :B2 AND R.RELD_PROCESS_ID = :B1
15:44:11.014 246118 SQL 3L 1C 23864P execEnvGet: Allocated new env 8412793496 due to size reqd, nr=54, sdsz=2524
This content is displayed in my trace file,
and o/p of ttxactadmin is
Program File Name: EquNseParent1
30486 0x139202b0 54.7002 Active Database 0x01312d0001312d00 IX 0
Command 8416780720 S 8416780720
Row BMUFVUAAAC2BwAALDe Xn 8416776768 DAIWAPRODV7.RMS_ENTITY_LIMIT_DTLS
Row BMUFVUAAABGEAAABgz Xn 8416766544 DAIWAPRODV7.RMS_ENTITY_LIMIT_DTLS
Table 12144968 IXn 8416766544 DAIWAPRODV7.RMS_ENTITY_LIMIT_DTLS
Row BMUFVUAAABPEAAAFhL Xn 8416493904 DAIWAPRODV7.TRADE_ORD_NET_EXPOSURE
Table 719136 IXn 8416493904 DAIWAPRODV7.TRADE_ORD_NET_EXPOSURE
Row BMUFVUAAACoFAAAIBt Xn 8413989000 DAIWAPRODV7.RMS_SOD_EOD_LOG
Table 721136 IXn 8413989000 DAIWAPRODV7.RMS_SOD_EOD_LOG
Command 8419805088 S 8419805088
Command 8419800256 S 8419800256
Command 8419759480 S 8419759480
Command 8412801112 S 8412801112
13 locks found for transaction 54.7002
As per my understanding, the entry "Table 12144968 IXn 8416766544" means that there is a table lock.
So we tried setting as per http://docs.oracle.com/cd/E11882_01/timesten.112/e21643/proced.htm#TTREF271
CALL ttOptSetFlag('RowLock', 1)
CALL ttOptSetFlag('TblLock', 0)
but result is still the same.
22878 0x9555770 151.643 Active Database 0x01312d0001312d00 IX 0
Row BMUFVUAAAAOJQAAJD2 Xn 6306707440 VSEVEN.RMS_ENTITY_LIMIT_DTLS
Table 12827064 IXn 6306707440 VSEVEN.RMS_ENTITY_LIMIT_DTLS
Row BMUFVUAAABbKwAAIDO Xn 6306693872 VSEVEN.TRADE_ORD_NET_EXPOSURE
Table 719232 IXn 6306693872 VSEVEN.TRADE_ORD_NET_EXPOSURE
Row BMUFVUAAABqBgAAGj1 Xn 6306573400 VSEVEN.RMS_SOD_EOD_LOG
Table 12826952 IXn 6306573400 VSEVEN.RMS_SOD_EOD_LOG
We are facing locking issues that degrade our performance. What do ttOptSetFlag('TblLock', 1) and ttOptSetFlag('TblLock', 0) imply, and why is it showing table-level locks?
Point number 2.
Also, when we run the procedure directly from the back end it takes 1 ms for the complete process, whereas the same procedure called from Pro*C takes more than 160 ms.
Point number 3.
Also, as seen in the trace file, there is a frequent statement:
"23864P execEnvGet: Allocated new env 8412790936 due to size reqd, nr=52, sdsz=34343"
What does it imply?
Regards,
Yogita
Hi Chris,
Sorry for the delay. The Pro*C code snippet is:
logTimestamp("BEFORE PK_RMS_INTEGRATED_MAIN.PR_RMS_ORDER_MAIN");
logDebug3("TESTIN db_conn value: %s", db_conn.arr);
EXEC SQL AT :db_conn EXECUTE
BEGIN
PK_RMS_INTEGRATED_MAIN.PR_RMS_ORDER_MAIN (
rtrim(ltrim(:TempEntityId)),
rtrim(ltrim(:TempClientFamilyId)),
rtrim(ltrim(:TempEntityType)),
:TempClientAccType,
:TempSettType,
rtrim(ltrim(:TempAccCode)),
rtrim(ltrim(:TempClientProdPriv)),
rtrim(ltrim(:TempClientId)),
rtrim(ltrim(:TempClientProfileId)),
:TempClientStatus,
rtrim(ltrim(:D2C1ControllerId)),
rtrim(ltrim(:TempDealerProfileId)),
:TempDealerStatus,
rtrim(ltrim(:TempBranchId)),
rtrim(ltrim(:TempBranchProfileId)),
:TempBranchStatus,
rtrim(ltrim(:TempBranchId)),
rtrim(ltrim(:TempBranchProfileId)),
:TempBranchStatus,
rtrim(ltrim(:TempBrokerId)),
rtrim(ltrim(:TempBrokerProfileId)),
:TempBrokerStatus,
:RMS_Deact_Entity,
:RMS_Status,
:RMS_Reason_Id,
:TempOrderNo,
:TempSerialNo,
rtrim(ltrim(:TempExchId)),
:TempQtyRemaining,
:TempPrice,
:TempPrice_Mrg,
:old_ord_qty_remaining,
:old_price,
:Transcode,
rtrim(ltrim(:TempBuyOrSell)),
rtrim(ltrim(:TempSecurityId)),
:SeriesInd,
rtrim(ltrim(:TempMktType)),
:off_on_order,
'F001',
rtrim(ltrim(:TempUserId)),
:TempProductId,
:TempSourceFlag,
:BasketOrderNumber,
:TempSeqNo,
:TempSeqNo_Mrg,
:msg_count,
:TempUserIdString,
:TempUsrMesgString,
:RetCode,
:RetString );
END;
END-EXEC;
logTimestamp("AFTER PK_RMS_INTEGRATED_MAIN.PR_RMS_ORDER_MAIN");
if (sqlca.sqlcode != 0)
{
printf("\n 1. Error in execution RMS_MAIN : %d",sqlca.sqlcode );
printf("\n ERROR :%s\n",sqlca.sqlerrm.sqlerrmc) ;
/**Added for user friendly_RMS Message for version 3.3***/
printf("\n Error is %s : %d",errstr,sqlca.sqlcode);
/**Added for user friendly_RMS Message for version 3.3***/
sprintf(p_req_out->ind_str[0],RMS_ERROR);
EXEC SQL ROLLBACK; /* ganesh */
p_req_out->user_id[0]=p_req->IntReqHeader.UserIdOrLogPktId ;
p_req_out->tot_num_user=1;
logDebug3("User Id:%d,Tot Num Of User:%d",p_req_out->user_id[0],p_req_out->tot_num_user) ;
return (FALSE);
}
The logTimestamp function is defined as:
void
logTimestamp (const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
CHAR buffer[LOG_TIMESTAMP_LENGTH];
CHAR chktime[DATE_LEN];
static CHAR message[LOG_BUFFER_LENGTH];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime=tv.tv_sec;
strftime(chktime, DATE_LEN, "%d:%m:%Y", localtime(&curtime));
if ( strcmp(chktime,logTime))
logLevelSet = 0;
strftime(buffer, LOG_TIMESTAMP_LENGTH, "%d:%m:%Y %T", localtime(&curtime));
vsprintf(message, fmt, ap);
fprintf(stdout, "%s.%06ld: %s\n", buffer, tv.tv_usec, message);
va_end(ap);
return;
}
Can you point us towards some documentation on the impact of using MemoryLock=4 on Linux?
Regards,
Karan -
Error while altering a table.
Hi All,
When i tried to alter a table. I got the following error.
SQL> ALTER TABLE SALES_ORDER_TRANS_IHST ADD(SUPER VARCHAR2(1),REASON_CD VARCHAR2(10));
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SALES_ORDER_TRANS_IHST
Then I tried to enable table lock for that table. Again got error.
SQL> ALTER TABLE SALES_ORDER_TRANS_IHST enable table lock;
ALTER TABLE SALES_ORDER_TRANS_IHST enable table lock
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
"ORA-00054: resource busy and acquire with NOWAIT specified" means the table is currently in use by something else.
Werner -
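If the database is 11g or later, one hedged workaround for the ORA-00054 is to let the DDL wait for the lock instead of failing immediately (this assumes 11g+; the 60-second value is only an illustration, not a recommendation):

```sql
-- Assumes Oracle 11g or later: DDL_LOCK_TIMEOUT makes DDL statements wait
-- up to N seconds for the lock instead of raising ORA-00054 at once.
ALTER SESSION SET ddl_lock_timeout = 60;
ALTER TABLE SALES_ORDER_TRANS_IHST ENABLE TABLE LOCK;
ALTER TABLE SALES_ORDER_TRANS_IHST ADD (SUPER VARCHAR2(1), REASON_CD VARCHAR2(10));
```

On 10g and earlier, the only option is to retry when the table is quiet.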
Error "No entry in table T589A for P" while creating new Infotype
Hi,
I've created a new infotype (9605). All the tables, structures, screens etc. have been created using transaction PM01 successfully. But upon execution of Infotype from PA30, the infotype screen is displayed with the following error:
"No entry in table T589A for P"
Any input will be highly appreciated.
Thank you,
Farooq.
Hi Farooq,
Unless the field PSYST-IOPER is cleared explicitly in the program MP960500 (in the PBO modules), this error should not occur.
Also, is this error coming only for infotype 9605? Is there any other information in the error message, such as the value of OPERA (INS, MOD, DEL, LIS, etc.) for which the entry in table T589A is missing?
Regards,
Shrinivas -
SM58 : Internal error when accessing a table
Hi there,
We have just upgraded from R/3 4.7 to ECC 6.0. After the upgrade we face many "Internal error when accessing a table" entries in SM58. Did any table mapping mismatch happen during the Unicode conversion? How do we check the details? Most of the errors are in SWW_WI_EXECUTE_INTERNAL_RFC, SWW_WI_CREATE_VIA_EVENT_IBF, etc., which are workflow modules.
can you help?
Thanks.
Regards,
Thava
Hi,
Have you checked this thread?
problem in TRFC
Error while executing Workflow: User is locked.
/message/5804053#5804053 [original link is broken]
Regards
Sridhar Goli -
ORA-00054 error when loading Oracle table using Data Services
Hello,
we are facing an ORA-00054 error when loading an Oracle table using BO Data Services
(Oracle 10g database, BODS XI 3.2 SP3).
Test Job performs
1- truncate table
2- load table (tested in standard and bulk load modes)
Scenario when issue happens is:
1- Run loading Job
2- Job end in error for any Oracle data base error
3- When re-running the same Job, the Job fails with the following error:
ORA-00054: resource busy and acquire with NOWAIT specified
It seems that after the first failure, the Oracle session that was loading the table stays active and locks the table.
To be able to rerun the Job, we are forced to kill the Oracle session manually.
The expected behaviour would be: on error, roll back the modifications made to the table, and have BODS close the Oracle session cleanly.
Can somebody tell me / or point me to any BODS best practice about Oracle error handling to prevent such case?
Thanks in advance
Paul-Marie
The ORA-00054 can occur depending on how the job failed before. If this occurs, you will need the DBA to release the lock on the table in question.
Or:
AL_Engine.exe on the server creates the lock. You need to kill it, or stop it.
This problem occurs when we select the bulk-loading option in Oracle. We also faced the same issue; our admin killed the session, and then everything was all right. -
Error when querying a table through the Query Window
I am running a query on the following table in the ODT Query Window. For some reason I get the error below when trying to retrieve the data, although I can query the table just fine through SQL*Plus. It errors out whether I use the grid or the text window.
ERROR
Arithmetic operation resulted in an overflow.
CREATE TABLE "RF3_PROD_1"."F_EXTRACTMETRICS" ("EXTRACT_NAME" VARCHAR2(50) NOT NULL,"RUN_START_DATE" DATE NOT NULL,"RUN_END_DATE" DATE NOT NULL,"DURATION" NUMBER DEFAULT 0 NOT NULL,"EXTRACT_START_DATE" DATE NULL,"EXTRACT_END_DATE" DATE NULL,"NUM_RECS_ADDED" NUMBER DEFAULT 0 NOT NULL,"NUM_RECS_DELETED" NUMBER DEFAULT 0 NOT NULL,"STATUS" VARCHAR2(50) NOT NULL,"COMMENTS" VARCHAR2(500) NULL) TABLESPACE "EXTRACT_TAB_01_TS" PCTFREE 15 PCTUSED 75 INITRANS 1 MAXTRANS 255 STORAGE ( FREELISTS 1 FREELIST GROUPS 1 INITIAL 8388608 NEXT 516096 MAXEXTENTS 2147483645 MINEXTENTS 1 PCTINCREASE 0 )
I would have formatted it nicer, but that is the way that ODT created it :)
Christian,
I found that this happens when trying to query number fields that are reals with a large number of decimal digits (26 decimal digits and above seems to be the magic number). What is strange is that I don't get this problem when I retrieve data from the same table using the "Retrieve data" option (versus a query for all the data in the Query Window), so interestingly they don't seem to use the same basic code to fetch and display the data (I am sure you knew that). Does this allow you to replicate the issue?
Thanks,
Bryan -
Time out parameter to avoid Table locking
Hi,
I am looking for a configurable parameter, if there is one, for setting a timeout to avoid table locking. What happens now is: if I run SELECT ... FOR UPDATE from one session, Oracle holds a lock until I commit. If I then run the same query from another session, it waits indefinitely without returning any error. Using the query with the NOWAIT option does not serve my purpose.
Any help in this regard is appreciated
Thanks
Sam
Are you looking for a way to time out the original query, or are you looking for a way for the second query to wait for some time and then abort if it is unable to lock the row(s)?
Justin
Distributed Database Consulting, Inc.
www.ddbcinc.com -
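For the second of those cases, Oracle's SELECT ... FOR UPDATE WAIT n gives a bounded wait: the session blocks for at most n seconds and then raises an error if the rows are still locked. A minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical table/column names, for illustration only.
-- WAIT 10 blocks for at most 10 seconds; if the row is still locked
-- after that, Oracle raises
-- ORA-30006: resource busy; acquire with WAIT timeout expired
SELECT order_id
  FROM sales_orders
 WHERE order_id = 42
   FOR UPDATE WAIT 10;
```

The application can catch ORA-30006 and retry or abort, which avoids both the indefinite wait of a plain FOR UPDATE and the immediate failure of NOWAIT.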
Getting the Datatype error data into a table or file
Hi experts,
I have a scenario where I need to capture the datatype mismatch records between source & target or the data length error records into a table or file.
For example.
1. I have source table column of datatype varchar which is mapped to a target table column of datatype Integer.
2. I have source table column of datatype varchar2(2000) is mapped to a target table column of datatype varchar2(200).
I know that during interface execution, if either of the above scenarios occurs, an error is raised and execution stops.
My question is whether there is any way to capture those records and insert them into a table or file. If yes, kindly suggest how to do it.
Thanks in advance
Hi Siva,
Use the SqlUnload tool to capture the errored-out records in an Excel file.
Hope this helps you.
Thanks,
Phani -
Hi all,
Good day..
The DB version is 10.2.0.4. I need to write a script that kills any session holding a table lock in the DB for more than 10 minutes.
thanks,
baskar.l
Hi sb,
DECLARE
CURSOR c IS
SELECT c.owner,
c.object_name,
c.object_type,
b.SID,
b.serial#,
b.status,
b.osuser,
b.machine
FROM v$locked_object a, v$session b, dba_objects c
WHERE b.SID = a.session_id AND a.object_id = c.object_id
and c.object_name in (MES.JSW_CRM_C_HR_COIL_INFO,MES.JSW_CRM_C_HR_COIL_INFO);
c_row c%ROWTYPE;
1_sql VARCHAR2(100);
BEGIN
OPEN C;
LOOP
FETCH c INTO c_row;
EXIT WHEN c%NOTFOUND;
l_sql := 'alter system kill session '''||c_row.sessionid||','||c_row.serialid||'''';
EXECUTE IMMEDIATE l_sql;
END LOOP;
CLOSE c;
END;
But when executing it I get:
1_sql VARCHAR2(100);
ERROR at line 15:
ORA-06550: line 15, column 1:
PLS-00103: Encountered the symbol "1" when expecting one of the following:
begin function package pragma procedure subtype type use
<an identifier> <a double-quoted delimited-identifier> form
current cursor
thanks,
baskar.l -
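For what it's worth, a corrected sketch of the script above: PL/SQL identifiers cannot start with a digit (hence the PLS-00103 on "1_sql"), the object names in the IN list need quotes, and the cursor exposes sid and serial#, not sessionid/serialid. The "more than 10 minutes" condition is approximated here via v$session.last_call_et, which is an assumption on my part (it measures seconds since the session's last call, not strictly how long the lock has been held):

```sql
DECLARE
  l_sql VARCHAR2(200);
BEGIN
  FOR r IN (SELECT b.sid, b.serial#
              FROM v$locked_object a, v$session b, dba_objects c
             WHERE b.sid = a.session_id
               AND a.object_id = c.object_id
               AND c.owner = 'MES'
               AND c.object_name = 'JSW_CRM_C_HR_COIL_INFO'
               AND b.last_call_et > 600)   -- roughly "more than 10 minutes"
  LOOP
    l_sql := 'alter system kill session '''
             || r.sid || ',' || r.serial# || '''';
    EXECUTE IMMEDIATE l_sql;
  END LOOP;
END;
/
```

Killing sessions blindly is risky; in practice you would log the candidates first and let a DBA confirm before automating this.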
Reliable detect number of occurences (table lock needed or ?)
Hi all,
I got an issue with detecting duplicate messages. Database clients process files and messages and create a hash value that is passed on to the database. The database should return the number of occurences of this hash value in the last (e.g) 14 days.
create table HashHistory ( ID NUMBER,
MESSAGEHASH VARCHAR2 (20),
TS TIMESTAMP);
create sequence SomeSequence;
insert into HashHistory values (SomeSequence.nextval,'first hash', systimestamp);
insert into HashHistory values (SomeSequence.nextval,'second hash', systimestamp);
create or replace procedure DuplDetection (p_HashIn varchar2,
p_occurences OUT number) AS
l_timestamp timestamp default systimestamp;
begin
-- possible exclusive table lock here... lock table HashHistory in exclusive mode;
insert into HashHistory values (SomeSequence.nextval, p_HashIn, l_timestamp);
select count (1)
into p_occurences
FROM HashHistory
where MESSAGEHASH = p_HashIn
and TS < l_timestamp
and TS > l_timestamp-14;
commit; --to release the table lock if applicable
end;
When this procedure is called by two different machines at the same time with the same new hash value ('third hash'), one session should return 0 while the other should return 1 as the number of occurrences. With default Oracle behaviour using row-level locks, when they execute in parallel neither session can see the other session's uncommitted hash value, and both will return 0 occurrences. Is an exclusive table lock my only option to enforce this behaviour, or can I trust Oracle to handle this correctly?
I expect 10^6 hashes each day and possibly up to 10 or 20 clients running at the same time, generating and checking these hash values. What are the chances of both sessions returning the same value without an exclusive table lock (as in this example)? What other parameters would you consider?
I am on Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit.
Nicosa wrote:
Hi,
Wouldn't that work if you do it as follows?
insert into HashHistory values (SomeSequence.nextval, p_hashIn,...);
commit;
select count (1)
into p_occurences
FROM HashHistory
where MESSAGEHASH = p_HashIn
and TS > l_timestamp-14
and ID != SomeSequence.currval;
The second session should see that some other existing hashes were inserted, and hence can be warned.
No, it wouldn't work. Some kind of synchronization is required.
In multi-threaded environment you cannot predict the order of execution of this program.
Let say we have a server with only one processor and 3 users (sessions) connected to that server.
Three users (let say U1, U2 and U3) call this procedure to insert the same hash. The procedure has 3 operations: insert, commit, select,
and assume theoretically that these 3 operation are atomic (each takes only one processor cycle - in reality each of these operations can consume thousands of cycles).
Server can execute these calls in this order:
U1-insert (sequence + 1)
U1-commit
U1-select (returns 0)
U2-insert (sequence + 1)
U2-commit
U2-select (returns 1)
U3-insert (sequence + 1)
U3-commit
U3-select (returns 2)
- in this scenario results will be OK.
But the order might be:
U1-insert (sequence + 1)
U1-commit
- here server decides to switch to another process/thread
U2-insert (sequence + 1)
U2-commit
- server switches to proccess 3
U3-insert (sequence + 1)
U3-commit
- server switches back to U2
U2-Select (user 2 sees record commited by U1 and U3 and returns 2)
- server switches to U3
U3-Select (user 1 sees record commited by U1 and U2 and returns 2)
- server switches to U1
U1 - Select (this also returns 2)
- results are 2,2,2, but should be 0,1,2
If 20 users cal this procedure at the same moment with the same hash value, it is even possible that each user gets 19 as the final result,
but proper results should be 0, 1, 2, 3 .... 19 ;)
Without some kind of synchronization this is a lottery.
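One hedged way to add that synchronization without an exclusive table lock is a named user lock via DBMS_LOCK, serializing only the insert-plus-count section. This is a sketch, not the poster's code: the lock name and the 10-second timeout are illustrative, the timestamp is deliberately captured after the lock is granted so that timestamps order consistently with the serialization, and since all callers serialize on one lock, the throughput impact at 10^6 rows/day should be measured first.

```sql
CREATE OR REPLACE PROCEDURE DuplDetection (p_HashIn     VARCHAR2,
                                           p_occurences OUT NUMBER) AS
  l_timestamp TIMESTAMP;
  l_handle    VARCHAR2(128);
  l_status    INTEGER;
BEGIN
  -- Illustrative lock name; ALLOCATE_UNIQUE maps it to a lock handle.
  DBMS_LOCK.ALLOCATE_UNIQUE('HASH_HISTORY_LOCK', l_handle);
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => TRUE);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Could not get hash lock: ' || l_status);
  END IF;
  -- Capture the timestamp only after the lock is held, so each caller's
  -- TS is strictly later than every previously committed row's TS.
  l_timestamp := SYSTIMESTAMP;
  INSERT INTO HashHistory VALUES (SomeSequence.NEXTVAL, p_HashIn, l_timestamp);
  SELECT COUNT(*)
    INTO p_occurences
    FROM HashHistory
   WHERE MESSAGEHASH = p_HashIn
     AND TS < l_timestamp
     AND TS > l_timestamp - 14;
  COMMIT;  -- releases the user lock (release_on_commit => TRUE)
END;
/
```

With the sections serialized, the three users in the example above can only interleave as insert-count-commit triples, so they get 0, 1, 2 in some order rather than 2, 2, 2.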
There is also another trap with the sequence - consider this example:
Session 1
SQL> create sequence abc;
Sequence created.
SQL> select abc.nextval from dual connect by level <=5;
NEXTVAL
1
2
3
4
5
SQL> select abc.currval from dual;
CURRVAL
5
Session 2
SQL> select abc.nextval from dual;
NEXTVAL
6
SQL> select abc.nextval from dual connect by level <=5;
NEXTVAL
7
8
9
10
11
SQL> select abc.currval from dual;
CURRVAL
11
Back to session 1
SQL> select abc.currval from dual;
CURRVAL
5
What happens in this scenario:
- user 1 does insert (select nextval from the sequence)
- user 2,3,4,5,6 ..... 100 call the procedure just 3 microseconds after U1 and do insert (increase the sequence)
- user 1 retrieve currval from the sequence ?