MCHA : Error - While Archiving Batch Record
Dear Gurus,
While archiving the batch, I am getting the error message
MCHA: Dependent Sales Order Stock Records exist
I checked the stock for the present and previous periods and it is NIL, and I also checked the entries in the tables MSKA and MSKAH; they are zero for all the stock.
I am using the archiving object MM_SPSTOCK.
Guide me to solve this error.
Regards
Anand
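For reference, the table check described above can be sketched directly in SQL (a minimal sketch, assuming database-level access; MSKA is the sales order stock table, and KALAB/KAINS/KASPE are its unrestricted, quality-inspection, and blocked quantity fields). One point worth checking: the error text complains about dependent records, so MSKA rows that still exist with zero quantities may be enough to raise the message.
-- Hedged sketch: list any sales order stock records left for the batch.
-- :batch_number is a placeholder bind variable.
SELECT matnr, charg, werks, vbeln, posnr, kalab, kains, kaspe
FROM mska
WHERE charg = :batch_number;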
Hi
I am closing this thread. Sorry, I wrongly posted in SD Sales; I will move the post to Logistics MM.
Regards
Anand
Similar Messages
-
Error while creating a record in a custom infotype
Hi to all experts,
I have created an infotype with 3 screens, but on one of the screens I am getting this error while saving a record: "The required screen change cannot be made". I am defining it using the table PA9005, not the structure P9005
(is that the same thing?).
Please help me.
Edited by: mohammed abdul hai on Sep 18, 2009 8:20 AM -
Error while Activating Inventory record in WM
Hi All,
I am getting an error while activating an Inventory record using LI02N. The Inventory record was created using LICC (Cycle Inventory at Quant Level). The error details are as below:
Message Text:
Function code cannot be selected
Technical Data
Message type__________ A (Cancel)
Message class_________ BL (Application Log)
Message number________ 001
Message variable 1____ Function code cannot be selected
Message variable 2____
Message variable 3____
Message variable 4____
Message Attributes
Level of detail_______
Problem class_________ 4 (Additional information)
Sort criterion________
Number________________
Thanks you all in Advance.
Regards,
NVK
I am encountering a similar error while trying to create or display WM Physical Inventory documents.
Tx: LX16 - Error Message no. BL001: "Warehouse number WH1 does not exist (check your entry)"
Tx: LI01N and LI03N - Error Message no. L4001: "Warehouse number WH1 does not exist (check your entry)"
I get errors when trying to create or display physical inventory documents, but I can run reports on "WH1" and the configuration looks OK.
I cannot find anything in SAP OSS for WM.
Any assistance would be appreciated. -
Error while creating absence record in PA30 using 2001 infotype
Hi all,
I am getting an error while creating an absence record with infotype 2001 for LOP (unpaid).
Error: No quota available for att./abs. 6000 for pers. no. 61000052 between 31.12.2009 and 31.12.2009
I have done the following:
Create an absence type LOP
Determine Entry Screens and Time Constraint Classes
Define Counting Rules
Assign Counting Rules to Absence Types
Payroll: India > Absences > Describe Absence Valuation Rules: 01 Unpaid Leave
Group Absences for Absence Valuation > Absence valuation rule > F4 country grouping 40 > assign 01 Unpaid Leave
Please solve this.
Actually, I don't know how your system is configured.
However, the error you're receiving suggests you have assigned the absence type to a quota type.
Please go to your counting rule and check whether the "Deduction rule - absence quotas - within entitlement" field is filled.
If yes, note the deduction rule value and go to
SPRO: Time Management - Time Data Recording - Absences - Absence Catalog - Absence Counting - Rules for Counting (New) - Deduction Rules for Absence Quotas, and check the quota type there.
If there is a quota assignment in place, before creating an IT2001 record you need to create an IT2006 record for that specific time period.
Or you need to delete the quota assignment via the two configuration steps mentioned above.
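If you have database access, a quick cross-check of the point above is to read the infotype 2006 table directly (a minimal sketch; PA2006 is the standard transparent table for absence quotas, and ANZHL/KVERB are its entitlement and deduction fields):
-- Does any absence quota record cover 31.12.2009 for personnel number
-- 61000052? No rows here would explain the "No quota available" message
-- for a quota-linked absence type.
SELECT pernr, subty, begda, endda, anzhl, kverb
FROM pa2006
WHERE pernr = '61000052'
AND begda <= DATE '2009-12-31'
AND endda >= DATE '2009-12-31';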
Regards,
Dilek -
Error while executing Batch Risk Analysis job in full sync mode
Hi Gurus,
I am getting the following error while executing a Batch Risk Analysis job in full sync mode for the first time; please help me out.
May 12, 2011 3:57:26 AM com.virsa.cc.multi.node.dao.NodeDAO deleteMTGenObjTable
INFO: In deleteMTGenObjTable() deleting from VIRSA_CC_MTGENOBJ for jobid = 100
May 12, 2011 3:59:53 AM com.virsa.cc.multi.node.dao.NodeDAO deleteMTGenObjTable
INFO: In deleteMTGenObjTable() deleting from VIRSA_CC_MTGENOBJ for jobid = 100
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.bg.BatchRiskAnalysis performBatchSyncAndAnalysis
INFO: --- Batch Sync/Analysis/Mgmt Report completed --- elapsed time: 104907817 ms
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.bg.BgJob setStatus
INFO: Job ID: 100 Status: Error
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.bg.BgJob updateJobHistory
FINEST: --- @@@@@@@@@@@ Updating the Job History -
2@@Msg is Error Job not completed
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.bg.dao.BgJobHistoryDAO insert
INFO: -
Background Job History: job id=100, status=2, message=Error Job not completed
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob scheduleJob
INFO: -
Complted Job =>100----
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob scheduleJob
INFO: Daemon idle time longer than RFC time out, terminating daemon [211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/. Thread ID 0
May 12, 2011 3:59:53 AM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob start
INFO: Analysis Daemon ID [211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/. Thread ID 0 terminiated
May 12, 2011 4:00:35 AM com.virsa.cc.xsys.bg.AnalysisDaemonThread run
FINEST: Analysis Daemon Thread: Invoking (HTTP): http://10.66.218.68:52100/webdynpro/dispatcher/sap.com/grc~ccappcomp/BgJobStart?daemonId=[211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/.&threadId=0&daemonType=BG
May 12, 2011 4:01:36 AM com.virsa.cc.xsys.bg.AnalysisDaemonThread run
FINEST: Analysis Daemon Thread: Invoking (HTTP): http://10.66.218.68:52100/webdynpro/dispatcher/sap.com/grc~ccappcomp/BgJobStart?daemonId=[211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/.&threadId=0&daemonType=BG
May 12, 2011 4:02:37 AM com.virsa.cc.xsys.bg.AnalysisDaemonThread run
FINEST: Analysis Daemon Thread: Invoking (HTTP): http://10.66.218.68:52100/webdynpro/dispatcher/sap.com/grc~ccappcomp/BgJobStart?daemonId=[211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/.&threadId=0&daemonType=BG
May 12, 2011 4:02:37 AM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob start
INFO: Analysis Daemon ID [211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/. Thread ID 0 started
May 12, 2011 4:02:38 AM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob start
FINEST: Another Analysis Daemon ID [211288050]/usr/sap/DAC/JC21/j2ee/cluster/server0/. Thread ID 0 is already running
Hi,
Maybe it worked in your case, but how would the job name affect the execution of the job? The issue is purely due to an RFC timeout (as per the logs). I recommend changing the parameter in the Configuration tab as recommended by Sunny in the previous thread.
Regards,
Raghu -
Error while posting batch managed stock in 107 movement type?
Hi All,
While posting batch-managed stock with movement type 107 in MIGO, I get the error 'Goods movement not possible with mvmt type 107'.
Also, in the batch tab the batch number is not displayed, only the valuation type. Kindly let me know whether this can be resolved by changing the field selection for the 107 movement type.
regards,
Sanjana
Hi,
SPRO - Inventory Management - Settings for Enjoy Transactions - Settings for Goods Movements - Field Selection per Movement Type
107 SGTXT
107 WEMPF
Check whether these 107 movement type entries are there; if not, add them. For reference, the full list of standard entries is:
101 CUSTNAME
101 GRUND
101 SGTXT
101 WEMPF
102 SGTXT
102 WEMPF
103 SGTXT
103 WEMPF
104 SGTXT
104 WEMPF
105 SGTXT
105 WEMPF
106 SGTXT
106 WEMPF
107 SGTXT
107 WEMPF
108 SGTXT
108 WEMPF
109 SGTXT
109 WEMPF
110 SGTXT
110 WEMPF
Regards
Rakesh -
Error while archiving the repository
Hi all
We are facing an error while archiving the repository. It gives the following error while archiving:
Error reading blob from A2i_CM_XMLschema table
Operation ended in ERROR : 84020008H : Database binary object error.
Can anybody help me on this.
I do not get an error while working with the repository. It works fine; only archiving gives this error.
Thanks in advance.
I tried doing that. Now, after compacting, I am not getting a warning while verifying, but while archiving I am getting the following error:
Error reading blob from A2i_CM_XMLSchema table
$$$ Operation ended in ERROR : 84020008H : Database binary object error.
I also tried unarchiving from an earlier archived file, but unarchiving also gives me the following error:
MDM Repository data is out of date or locked by another MDM Server. Refresh the data and try the operation again.
I checked; there is no other MDM server accessing the repository. What does "Refresh the data" mean?
Are the above two problems inter-related?
Can anybody help me on this.
Thanks in advance -
I am getting an error while running Batch input session.
While running the BDC I get the error "Enter Discount Base, Automatic calculation not possible". I checked all the settings at company code level, the tax code settings, and the document type settings. I am not getting it. With manual posting the error does not occur.
Please help me on this.
Hi,
While creating a material master, a warning message sometimes appears for some materials. The LSMW recording method also records how many times you press the Enter key, so during batch input for an article it may stop at some point. It is better to run the LSMW in foreground and check exactly where it stops.
Regards
GK. -
I am getting the following error while adding a record into the table CM_RECIPE_ITEM:
Error
ORA-20505: Error in DML: p_rowid=626, p_alt_rowid=CRI_ID, p_rowid2=, p_alt_rowid2=. ORA-01410: invalid ROWID ORA-06512: at "COSTMAN.CM_RECIPE_ITEM_T3_AFTER", line 11 ORA-04088: error during execution of trigger 'COSTMAN.CM_RECIPE_ITEM_T3_AFTER'
Error Unable to process row of table CM_RECIPE_ITEM.
Kindly suggest whether the problem is caused by the global temporary table or by the triggers given below. Also suggest a solution.
Thanking You,
Yogesh
CM_RECIPE_ITEM Table
CRI_ID  CRI_CR_ID  CRI_BOM_CODE  CRI_CIFG_CODE  CRI_CIRM_CODE  CRI_SEQ  CRI_QTY  CRI_RM_COST
625     464        PRODUCT3001   FG003          10             1        60       10
626     464        PRODUCT3001   FG003          12             2        40       10
Global temporary table
DROP TABLE COSTMAN.INTERIM CASCADE CONSTRAINTS;
CREATE GLOBAL TEMPORARY TABLE COSTMAN.INTERIM
(
ROW_ID ROWID
)
ON COMMIT PRESERVE ROWS
NOCACHE;
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_ITEM_T3"
BEFORE INSERT OR UPDATE ON "CM_RECIPE_ITEM" FOR EACH ROW
BEGIN
INSERT INTO interim VALUES (:new.rowid);
END;
/
Trigger to update data on CM_RECIPE table
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_ITEM_T3_AFTER"
AFTER INSERT OR UPDATE ON "CM_RECIPE_ITEM"
BEGIN
FOR ds IN (SELECT row_id FROM interim) LOOP
UPDATE CM_RECIPE
SET CR_RMC = (
SELECT SUM(CRI_QTY * CRI_RM_COST)/SUM(CR_QUANTITY)
FROM CM_RECIPE_ITEM
WHERE CRI_BOM_CODE = CR_BOM_CODE
AND rowid = ds.row_id
UPDATE CM_RECIPE
SET CR_TOTAL_COST = (
SELECT CIFG_PACKING + CIFG_OVERHEAD +CIFG_OTHERS
FROM CM_ITEM_FG
WHERE CIFG_CODE = CR_CIFG_CODE
AND rowid = ds.row_id
) + CR_RMC;
UPDATE CM_RECIPE
SET CR_GROSS_MARGIN =
(SELECT CIFG_DP_RATE
FROM CM_ITEM_FG
WHERE CIFG_CODE = CR_CIFG_CODE
AND rowid = ds.row_id) - CR_TOTAL_COST) / CR_TOTAL_COST;
END LOOP;
END;
/The scripts of the tables CM_ITEM_FG, CM_RECIPE, CM_RECIPE_ITEM are as follows :
CM_ITEM_FG
ALTER TABLE COSTMAN.CM_ITEM_FG
DROP PRIMARY KEY CASCADE;
DROP TABLE COSTMAN.CM_ITEM_FG CASCADE CONSTRAINTS;
CREATE TABLE COSTMAN.CM_ITEM_FG
(
CIFG_CODE VARCHAR2(13 BYTE) NOT NULL,
CIFG_CCG_ID NUMBER NOT NULL,
CIFG_NAME VARCHAR2(50 BYTE) NOT NULL,
CIFG_PACKING NUMBER NOT NULL,
CIFG_OVERHEAD NUMBER NOT NULL,
CIFG_OTHERS NUMBER NOT NULL,
CIFG_DP_RATE NUMBER NOT NULL,
CIFG_CR_BY VARCHAR2(32 BYTE),
CIFG_CR_ON DATE,
CIFG_UPD_BY VARCHAR2(32 BYTE),
CIFG_UPD_ON DATE
)
TABLESPACE COST_MANAGER
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX COSTMAN.CM_ITEM_FG_PK_001 ON COSTMAN.CM_ITEM_FG
(CIFG_CODE, CIFG_CCG_ID)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE UNIQUE INDEX COSTMAN.CM_ITEM_FG_UK_001 ON COSTMAN.CM_ITEM_FG
(CIFG_CODE)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE OR REPLACE TRIGGER COSTMAN."CM_ITEM_FG_T1"
BEFORE UPDATE ON "CM_ITEM_FG"
FOR EACH ROW
BEGIN
BEGIN
UPDATE CM_RECIPE
SET CR_TOTAL_COST = (CR_RMC + :NEW.CIFG_PACKING + :NEW.CIFG_OVERHEAD + :NEW.CIFG_OTHERS);
END;
BEGIN
UPDATE CM_RECIPE
SET CR_GROSS_MARGIN = (:NEW.CIFG_DP_RATE - CR_TOTAL_COST) / CR_TOTAL_COST;
END;
END;
/
ALTER TABLE COSTMAN.CM_ITEM_FG ADD (
CONSTRAINT CM_ITEM_FG_PK_001
PRIMARY KEY
(CIFG_CODE, CIFG_CCG_ID)
USING INDEX
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
),
CONSTRAINT CM_ITEM_FG_UK_001
UNIQUE (CIFG_CODE)
USING INDEX
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
));
ALTER TABLE COSTMAN.CM_ITEM_FG ADD (
CONSTRAINT CM_ITEM_FG_FK_001
FOREIGN KEY (CIFG_CCG_ID)
REFERENCES COSTMAN.CM_COST_GROUP (CCG_ID));
CM_RECIPE
ALTER TABLE COSTMAN.CM_RECIPE
DROP PRIMARY KEY CASCADE;
DROP TABLE COSTMAN.CM_RECIPE CASCADE CONSTRAINTS;
CREATE TABLE COSTMAN.CM_RECIPE
(
CR_ID NUMBER NOT NULL,
CR_CCG_ID NUMBER,
CR_EFF_FROM DATE,
CR_CIFG_CODE VARCHAR2(10 BYTE) NOT NULL,
CR_BOM_CODE VARCHAR2(50 BYTE),
CR_QUANTITY NUMBER,
CR_RMC NUMBER,
CR_TOTAL_COST NUMBER,
CR_GROSS_MARGIN NUMBER,
CR_CR_BY VARCHAR2(32 BYTE),
CR_CR_ON DATE,
CR_UPD_BY VARCHAR2(32 BYTE),
CR_UPD_ON DATE
)
TABLESPACE COST_MANAGER
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX COSTMAN.CM_RECIPE_PK_001 ON COSTMAN.CM_RECIPE
(CR_CCG_ID, CR_ID, CR_CIFG_CODE)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE UNIQUE INDEX COSTMAN.CM_RECIPE_UK_001 ON COSTMAN.CM_RECIPE
(CR_ID)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE UNIQUE INDEX COSTMAN.CM_RECIPE_UK_002 ON COSTMAN.CM_RECIPE
(CR_BOM_CODE)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_T1"
BEFORE INSERT ON "CM_RECIPE"
FOR EACH ROW
DECLARE
L_ID NUMBER;
BEGIN
IF INSERTING THEN
IF :NEW.CR_ID IS NULL THEN
--SELECT CM_RECIPE_SEQ.NEXTVAL INTO L_ID FROM DUAL;
:NEW.CR_ID := CM_RECIPE_SEQ.NEXTVAL; --L_ID;
END IF;
:NEW.CR_CR_ON := SYSDATE;
:NEW.CR_CR_BY := nvl(v('APP_USER'),USER);
END IF;
IF UPDATING THEN
:NEW.CR_UPD_ON := SYSDATE;
:NEW.CR_UPD_BY := nvl(v('APP_USER'),USER);
END IF;
END;
/
ALTER TABLE COSTMAN.CM_RECIPE ADD (
CHECK ("CR_EFF_FROM" IS NOT NULL) DISABLE,
CHECK ("CR_CCG_ID" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CHECK ("CR_QUANTITY" IS NOT NULL) DISABLE,
CONSTRAINT CM_RECIPE_PK_001
PRIMARY KEY
(CR_ID)
USING INDEX
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
),
CONSTRAINT CM_RECIPE_UK_002
UNIQUE (CR_BOM_CODE)
USING INDEX
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
));
CM_RECIPE_ITEM
ALTER TABLE COSTMAN.CM_RECIPE_ITEM
DROP PRIMARY KEY CASCADE;
DROP TABLE COSTMAN.CM_RECIPE_ITEM CASCADE CONSTRAINTS;
CREATE TABLE COSTMAN.CM_RECIPE_ITEM
(
CRI_ID NUMBER NOT NULL,
CRI_CR_ID NUMBER NOT NULL,
CRI_BOM_CODE VARCHAR2(50 BYTE) NOT NULL,
CRI_CIFG_CODE VARCHAR2(10 BYTE) NOT NULL,
CRI_CIRM_CODE VARCHAR2(10 BYTE) NOT NULL,
CRI_SEQ NUMBER,
CRI_QTY NUMBER,
CRI_RM_COST NUMBER,
CRI_CR_BY VARCHAR2(32 BYTE),
CRI_CR_ON DATE,
CRI_UPD_BY VARCHAR2(32 BYTE),
CRI_UPD_ON DATE
)
TABLESPACE COST_MANAGER
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX COSTMAN.CM_RECIPE_ITEM_PK_001 ON COSTMAN.CM_RECIPE_ITEM
(CRI_ID)
LOGGING
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_ITEM_T2"
BEFORE INSERT OR UPDATE ON "CM_RECIPE_ITEM"
FOR EACH ROW
BEGIN
IF :NEW.CRI_CR_ID IS NULL THEN
SELECT CR_ID INTO :NEW.CRI_CR_ID
FROM CM_RECIPE
WHERE CR_BOM_CODE = :NEW.CRI_BOM_CODE;
END IF;
END;
/
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_ITEM_T1" BEFORE
INSERT OR UPDATE ON "CM_RECIPE_ITEM" FOR EACH ROW
DECLARE
L_ID NUMBER;
SEQ NUMBER;
BEGIN
IF INSERTING THEN
IF :NEW.CRI_ID IS NULL THEN
SELECT CM_RECIPE_ITEM_SEQ.NEXTVAL
INTO :NEW.CRI_ID
FROM dual;
END IF;
:NEW.CRI_CR_ON := SYSDATE;
:NEW.CRI_CR_BY := NVL(v('APP_USER'),USER);
SELECT (NVL(MAX(CRI_SEQ),0)+1)
INTO SEQ
FROM CM_RECIPE_ITEM
WHERE CRI_BOM_CODE = :NEW.CRI_BOM_CODE;
:NEW.CRI_SEQ := SEQ;
END IF;
IF UPDATING THEN
:NEW.CRI_UPD_ON := SYSDATE;
:NEW.CRI_UPD_BY := NVL(v('APP_USER'),USER);
END IF;
END;
/
ALTER TABLE COSTMAN.CM_RECIPE_ITEM ADD (
CHECK ("CRI_RM_COST" IS NOT NULL) DISABLE,
CHECK ("CRI_QTY" IS NOT NULL) DISABLE,
CHECK ("CRI_SEQ" IS NOT NULL) DISABLE,
CONSTRAINT CM_RECIPE_ITEM_PK_001
PRIMARY KEY
(CRI_ID)
USING INDEX
TABLESPACE COST_MANAGER
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
));
ALTER TABLE COSTMAN.CM_RECIPE_ITEM ADD (
CONSTRAINT CM_RECIPE_FK_002
FOREIGN KEY (CRI_CIRM_CODE)
REFERENCES COSTMAN.CM_ITEM_RM (CIRM_CODE),
CONSTRAINT CM_RECIPE_ITEM_FK_001
FOREIGN KEY (CRI_CR_ID)
REFERENCES COSTMAN.CM_RECIPE (CR_ID));
Yogesh -
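Two things in the scripts above stand out: INTERIM is ON COMMIT PRESERVE ROWS and is never cleared, so stale rowids from earlier sessions can survive, and the captured rowids belong to CM_RECIPE_ITEM yet are compared against CM_RECIPE and CM_ITEM_FG rows (capturing :new.rowid in a BEFORE INSERT row trigger may also be unreliable, since the row is not yet inserted). A minimal sketch of a repaired AFTER trigger under those assumptions; this is one possible repair, not a verified fix:
-- Hedged sketch: resolve each captured CM_RECIPE_ITEM rowid to its BOM
-- code instead of comparing it against other tables' rowids, and clear
-- INTERIM afterwards so stale entries cannot raise ORA-01410 later.
CREATE OR REPLACE TRIGGER COSTMAN."CM_RECIPE_ITEM_T3_AFTER"
AFTER INSERT OR UPDATE ON "CM_RECIPE_ITEM"
BEGIN
  FOR ds IN (SELECT DISTINCT i.cri_bom_code
             FROM interim t
             JOIN cm_recipe_item i ON i.rowid = t.row_id) LOOP
    UPDATE cm_recipe r
    SET r.cr_rmc = (SELECT SUM(i.cri_qty * i.cri_rm_cost) / r.cr_quantity
                    FROM cm_recipe_item i
                    WHERE i.cri_bom_code = r.cr_bom_code)
    WHERE r.cr_bom_code = ds.cri_bom_code;
    UPDATE cm_recipe r
    SET r.cr_total_cost = r.cr_rmc +
                          (SELECT f.cifg_packing + f.cifg_overhead + f.cifg_others
                           FROM cm_item_fg f
                           WHERE f.cifg_code = r.cr_cifg_code)
    WHERE r.cr_bom_code = ds.cri_bom_code;
    UPDATE cm_recipe r
    SET r.cr_gross_margin = ((SELECT f.cifg_dp_rate
                              FROM cm_item_fg f
                              WHERE f.cifg_code = r.cr_cifg_code)
                             - r.cr_total_cost) / r.cr_total_cost
    WHERE r.cr_bom_code = ds.cri_bom_code;
  END LOOP;
  -- Drop processed rowids so stale entries do not leak into later runs.
  DELETE FROM interim;
END;
/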
Error while updating the records through a DB link
Hi ,
I am getting the below error while updating around 15 thousand records; I suspect it to be a timeout but am not sure. Please can you let me know what solution needs to be taken.
Not Found
The requested URL /pls/htmldb/wwv_flow.show was not found on this server.
Not Found
The requested URL /pls/htmldb/wwv_flow.show was not found on this server.
Oracle-Application-Server-10g/10.1.2.0.0 Oracle-HTTP-Server Server at htmldb.oraclecorp.com Port 80
Object Type: TABLE; Object: GWB_GRN_GRP_COUNTRIES_LU
Column  Data Type  Length
NAME    Varchar2   720
ALIAS   Varchar2   720
ACTIVE  Varchar2   3
REGION  Varchar2   300
Thanks & Regards,
Ramu
Hi Sangu,
See if the following notes help you:
Error when calling API from SQL*Developer, eg. ORA-01403 in API OE_ORDER_PUB.PROCESS_ORDER (Doc ID 1054295.1)
Cancellation Of Transfer Orders Is Not Possible - ORA-01403: no data found in Package OE_Order_PVT Procedure Process_Order (Doc ID 391307.1)
Thanks &
Best Regards, -
Hi All
I am getting the error "All available file names are already being used" while archiving the ODS.
On the other hand, when I search for those available files, I cannot find them. So I want to know where I can look for the available files.
Also, when I manage the ODS and go to the Archiving tab, the status is still yellow, whereas when I check the job status in SARA it shows me the cancelled job. Can anybody tell me what the problem may be?
Thanks & Regards
Neha
Hi,
Check the definition of the file name; if it is not defined as per the SAP recommendations, you will get this type of error at runtime. In the file name definition, parameter 2 is alphanumeric; if it reaches the maximum number of combinations, the system tries to create names from the start again, but archive files with the same names already exist in the specified archive directory, and this causes the error you got.
So check the definition of the archive file name. The SAP-recommended naming convention for the logical file is listed below:
Defining Logical File Names:
The following parameters are of particular interest here:
- PARAM_1: Two-character application abbreviation (for example, HR, CO, MM) for the classification of the archive files in the system. The value of the definition is determined from the relevant archiving object at runtime.
- PARAM_2: Single-character alphanumeric code (0-9, A-Z). If, when creating a new archive file, an existing file with an identical physical name would result in a conflict, the ADK increases this value by 1. This value must, therefore, always be a part of the physical name.
- PARAM_3: This parameter is filled at runtime with the name of the archiving object. In archive management, this enables you to check the file contents or to store the archive files by archiving objects.
To enable maximum space in the name range for the archive file, the following entry is recommended:
.ARCHIVE
In the Logical Path field, you assign the previously defined logical path name to the current logical file names. You can assign a logical path name to several file names.
To display the path name and file name definitions and their specifications for the relevant syntax groups, go to transaction SF07.
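To make the PARAM_2 rollover above concrete: it is a single alphanumeric character (0-9, A-Z), so only 36 physical names exist per parameter combination before the ADK wraps around to names that are already taken, which matches the "All available file names are already being used" message. A minimal PL/SQL sketch of that increment logic (the function name next_param_2 is hypothetical, purely illustrative):
CREATE OR REPLACE FUNCTION next_param_2(p_cur IN VARCHAR2) RETURN VARCHAR2 IS
  -- The 36 values PARAM_2 can take, in the order they are consumed.
  c_codes CONSTANT VARCHAR2(36) := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
  l_pos   PLS_INTEGER := INSTR(c_codes, p_cur);
BEGIN
  IF l_pos = 0 OR l_pos = 36 THEN
    RETURN '0';  -- wrap-around: from here on, generated names collide
  END IF;
  RETURN SUBSTR(c_codes, l_pos + 1, 1);
END;
/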
Regards,
Rahul. -
Error while creating condition record
Hi
I am using 4.7 and creating a condition record; while saving the condition record I get the error NO UPDATE SERVER FOUND FOR CONTEXT E. I am stuck and do not know whom to contact: is this an error for BASIS, or is it related to SD? Kindly help.
Thanx
Hi Mukesh
As you are getting the error NO UPDATE SERVER FOUND FOR CONTEXT E while saving the condition record, kindly consult your BASIS consultants and ask them to check whether the pricing-related tables and structures have been copied and uploaded properly.
Regards
Srinath -
Error while running batch in Hyperion Reports
Has anybody run across this error while trying to run a batch in Hyperion Reports?
Unexpected Error creating query to datasource in createCubeViews:
The book the batch is running is big...
Have you ever successfully scheduled a batch? Make sure the scheduled batch name does not contain an ampersand "&" or an apostrophe/single-tick symbol. Those cause batches to error out for us.
-Karen -
DAC Error: Error while inserting a record!
Hi,
I want to create a new DAC on my system, but I am getting the below error while trying to create or save any records.
Can't create reference
MESSAGE:::INSERT INTO W_ETL_OBJ_REF(OBJ_WID, OBJ_REF_WID, OBJ_TYPE, OBJ_REF_TYPE_CD, APP_WID, LAST_UPD, SOFT_DEL_FLG, ROW_WID) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
Values :
KEY : 1
VALUE : 202CB962AC5975B964B7152D234B70
KEY : 2
VALUE : 202CB962AC5975B964B7152D234B70
KEY : 3
VALUE : W_ETL_SA
KEY : 4
VALUE : 1
KEY : 5
VALUE : null
KEY : 6
VALUE : 2011-10-12 15:17:11.586
KEY : 7
VALUE : N
KEY : 8
VALUE : 7716EB6EA78A5B2E2D3C2AE1CF204A2C
EXCEPTION CLASS::: com.siebel.etl.database.IllegalSQLQueryException
com.siebel.etl.database.DBUtils.batchUpdate(DBUtils.java:1946)
com.siebel.etl.gui.util.UncommitUpdateHelper.executeUpdate(UncommitUpdateHelper.java:84)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createReference(BaseDACObject.java:1362)
com.siebel.analytics.etl.client.data.dataobject.DACObject.createReference(DACObject.java:78)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createUpdateReference(BaseDACObject.java:1274)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createOwnedObject(BaseDACObject.java:565)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.create(BaseDACObject.java:544)
com.siebel.analytics.etl.client.data.dataobject.UpdatableDataObject.createUpdate(UpdatableDataObject.java:210)
com.siebel.analytics.etl.client.data.dataobject.DACObject.createUpdate(DACObject.java:213)
com.siebel.analytics.etl.client.data.dataobject.UpdatableDataObject.createUpdate(UpdatableDataObject.java:198)
com.siebel.analytics.etl.client.data.model.ResultSetParser.insertNewRecord(ResultSetParser.java:226)
com.siebel.analytics.etl.client.data.model.UpdatableDataTableModel.insertNewRecord(UpdatableDataTableModel.java:122)
com.siebel.analytics.etl.client.data.model.DACTableModel.insertNewRecord(DACTableModel.java:335)
com.siebel.analytics.etl.client.data.model.DACTableModel.insertNewRecord(DACTableModel.java:320)
com.siebel.analytics.etl.client.data.model.UpdatableDataTableModel.updateRecord(UpdatableDataTableModel.java:76)
com.siebel.analytics.etl.client.view.edit.EditObject.save(EditObject.java:326)
com.siebel.analytics.etl.client.view.edit.EditObject.actionPerformed(EditObject.java:616)
javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1995)
javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2318)
javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:236)
java.awt.Component.processMouseEvent(Component.java:6041)
javax.swing.JComponent.processMouseEvent(JComponent.java:3265)
java.awt.Component.processEvent(Component.java:5806)
java.awt.Container.processEvent(Container.java:2058)
java.awt.Component.dispatchEventImpl(Component.java:4413)
java.awt.Container.dispatchEventImpl(Container.java:2116)
java.awt.Component.dispatchEvent(Component.java:4243)
java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4322)
java.awt.LightweightDispatcher.processMouseEvent(Container.java:3986)
java.awt.LightweightDispatcher.dispatchEvent(Container.java:3916)
java.awt.Container.dispatchEventImpl(Container.java:2102)
java.awt.Window.dispatchEventImpl(Window.java:2440)
java.awt.Component.dispatchEvent(Component.java:4243)
java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:273)
java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:183)
java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:173)
java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:168)
java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:160)
java.awt.EventDispatchThread.run(EventDispatchThread.java:121)
::: CAUSE :::
MESSAGE:::ORA-01400: cannot insert NULL into ("DW"."W_ETL_OBJ_REF"."APP_WID")
EXCEPTION CLASS::: java.sql.SQLIntegrityConstraintViolationException
oracle.jdbc.driver.T2CConnection.checkError(T2CConnection.java:737)
oracle.jdbc.driver.T2CConnection.checkError(T2CConnection.java:647)
oracle.jdbc.driver.T2CPreparedStatement.executeForDescribe(T2CPreparedStatement.java:530)
oracle.jdbc.driver.T2CPreparedStatement.executeForRows(T2CPreparedStatement.java:713)
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1307)
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3449)
oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3530)
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
com.siebel.etl.database.cancellable.CancellablePreparedStatement.executeUpdate(CancellablePreparedStatement.java:91)
com.siebel.etl.database.DBUtils.batchUpdate(DBUtils.java:1942)
com.siebel.etl.gui.util.UncommitUpdateHelper.executeUpdate(UncommitUpdateHelper.java:84)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createReference(BaseDACObject.java:1362)
com.siebel.analytics.etl.client.data.dataobject.DACObject.createReference(DACObject.java:78)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createUpdateReference(BaseDACObject.java:1274)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.createOwnedObject(BaseDACObject.java:565)
com.siebel.analytics.etl.client.data.dataobject.BaseDACObject.create(BaseDACObject.java:544)
com.siebel.analytics.etl.client.data.dataobject.UpdatableDataObject.createUpdate(UpdatableDataObject.java:210)
com.siebel.analytics.etl.client.data.dataobject.DACObject.createUpdate(DACObject.java:213)
com.siebel.analytics.etl.client.data.dataobject.UpdatableDataObject.createUpdate(UpdatableDataObject.java:198)
com.siebel.analytics.etl.client.data.model.ResultSetParser.insertNewRecord(ResultSetParser.java:226)
com.siebel.analytics.etl.client.data.model.UpdatableDataTableModel.insertNewRecord(UpdatableDataTableModel.java:122)
com.siebel.analytics.etl.client.data.model.DACTableModel.insertNewRecord(DACTableModel.java:335)
com.siebel.analytics.etl.client.data.model.DACTableModel.insertNewRecord(DACTableModel.java:320)
com.siebel.analytics.etl.client.data.model.UpdatableDataTableModel.updateRecord(UpdatableDataTableModel.java:76)
com.siebel.analytics.etl.client.view.edit.EditObject.save(EditObject.java:326)
com.siebel.analytics.etl.client.view.edit.EditObject.actionPerformed(EditObject.java:616)
javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1995)
javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2318)
javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:236)
java.awt.Component.processMouseEvent(Component.java:6041)
javax.swing.JComponent.processMouseEvent(JComponent.java:3265)
java.awt.Component.processEvent(Component.java:5806)
java.awt.Container.processEvent(Container.java:2058)
java.awt.Component.dispatchEventImpl(Component.java:4413)
java.awt.Container.dispatchEventImpl(Container.java:2116)
java.awt.Component.dispatchEvent(Component.java:4243)
java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4322)
java.awt.LightweightDispatcher.processMouseEvent(Container.java:3986)
java.awt.LightweightDispatcher.dispatchEvent(Container.java:3916)
java.awt.Container.dispatchEventImpl(Container.java:2102)
java.awt.Window.dispatchEventImpl(Window.java:2440)
java.awt.Component.dispatchEvent(Component.java:4243)
java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:273)
java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:183)
java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:173)
java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:168)
java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:160)
java.awt.EventDispatchThread.run(EventDispatchThread.java:121)
Why did it happen and how can I solve it?
The error is not because a table is missing; it is because the client is trying to insert a NULL and failing: ORA-01400: cannot insert NULL.
I would check the following:
Make sure all DAC-related patches are applied (you can find these on Metalink) for the DAC version you are on.
Is this your first time trying a load? Is there any load that may not have cleaned up all processes that were previously running? If so, restart the DAC services and retry.
Make sure the DAC repository metadata tables are created properly. Go to the table in question, W_ETL_OBJ_REF, and see if it is correctly created (see the sketch below).
Did you assemble all Subject Areas and successfully build the execution plan? If not, do that and retry.
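A minimal sketch of that table check, run as a user who can see the repository schema (the owner 'DW' comes from the ORA-01400 message; adjust if yours differs):
-- Which columns of W_ETL_OBJ_REF are NOT NULL? APP_WID is the one the
-- failing insert tried to populate with NULL.
SELECT column_name, data_type, nullable
FROM all_tab_columns
WHERE owner = 'DW'
AND table_name = 'W_ETL_OBJ_REF'
ORDER BY column_id;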
Let me know how it goes.
If this is helpful, please mark as correct or helpful. -
Error while running batch update statement
Hi
We are experiencing the below error while running a batch update statement in which the IN clause has more than 80,000 entries. The IN clause is already split to respect the 1000-value maximum, so the statement has multiple OR'ed IN lists,
like: update ... where id in (1,2,...,999) OR id in (1000,1001,...) OR id in (...)...
Error at Command Line:1 Column:0
Error report:
SQL Error: ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kghfrh:ds], [0x2A9C5ABF50], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [kkoitbp-corruption], [], [], [], [], [], [], []
00603. 00000 - "ORACLE server session terminated by fatal error"
*Cause: An ORACLE server session is in an unrecoverable state.
*Action: Login to ORACLE again so a new server session will be created
Is this an Oracle limitation or a bug?
Thanks
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/limits003.htm
"The limit on how long a SQL statement can be depends on many factors, including database configuration, disk space, and memory."
I think you're over this limit.
The only way is to create a temporary table with all the values and use IN (SELECT ...).
Max
http://oracleitalia.wordpress.com
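A minimal sketch of that workaround (tmp_ids, target_tab, and some_col are placeholder names, not from the thread):
-- Stage the 80,000+ ids once, then drive a single UPDATE through
-- IN (SELECT ...) instead of dozens of literal IN lists OR'ed together.
CREATE GLOBAL TEMPORARY TABLE tmp_ids (
  id NUMBER PRIMARY KEY
) ON COMMIT DELETE ROWS;

-- Load the ids; array binds or INSERT ... SELECT scale better than
-- single-row inserts, shown literally here for brevity.
INSERT INTO tmp_ids (id) VALUES (1);
INSERT INTO tmp_ids (id) VALUES (2);

UPDATE target_tab t
SET t.some_col = 'X'
WHERE t.id IN (SELECT id FROM tmp_ids);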