Valid-to dates on KONH table not matching the dates in A* tables - SD pricing
We have an issue in our KONH/A* tables. Please review the data below:
A583 (this is for a Z condition type, ZCTR):
MANDT KAPPL KSCHL VKORGAU MATNR DATBI DATAB KNUMH
028 V ZCTR 0006 000000000000031886 03/31/2006 01/01/2006 0071519869
028 V ZCTR 0006 000000000000031886 06/30/2008 03/02/2008 0071519869
KONH
MANDT KNUMH DATAB DATBI
028 0071519869 01/01/2006 12/31/9999
If you look in VK13 for this ZCTR record/material, you will see the following dates. We can see that two records have the same KNUMH; is this an SAP bug?
Also, what is the difference between the dates in KONH and the A* tables? I see that the valid-from dates match, but the valid-to date in KONH is the valid-to date of the last record in the A table.
MANDT KAPPL KSCHL VKORGAU MATNR DATBI DATAB KNUMH
028 V ZCTR 0006 000000000000031886 03/31/2006 01/01/2006 0071519869
028 V ZCTR 0006 000000000000031886 12/31/2007 03/13/2007 1032266853
028 V ZCTR 0006 000000000000031886 03/01/2008 01/01/2008 1061605348
028 V ZCTR 0006 000000000000031886 06/30/2008 03/02/2008 0071519869
028 V ZCTR 0006 000000000000031886 12/31/9999 07/01/2008 1093084511
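For what it's worth, the duplicate KNUMH need not be a bug by itself: when a later condition record is created inside an existing record's validity period, the access-table interval is split around the new record and both remainders keep the old KNUMH, while KONH keeps the validity the record was originally created with. A minimal sketch of that splitting (illustrative Python with simplified dates; this is one reading of the behavior, not an official SAP statement):

```python
# Illustrative sketch (not SAP code): how one condition record (KNUMH)
# can legitimately appear twice in an A* table. A new record created
# inside an existing record's validity period splits the old interval;
# both remainders keep the old KNUMH, while KONH keeps the original span.
from datetime import date, timedelta

def split_validity(intervals, new_from, new_to, new_knumh):
    """intervals: non-overlapping (valid_from, valid_to, knumh) tuples."""
    result = []
    for frm, to, knumh in intervals:
        if new_to < frm or new_from > to:     # no overlap: keep as-is
            result.append((frm, to, knumh))
            continue
        if frm < new_from:                    # left remainder keeps old KNUMH
            result.append((frm, new_from - timedelta(days=1), knumh))
        if to > new_to:                       # right remainder keeps old KNUMH
            result.append((new_to + timedelta(days=1), to, knumh))
    result.append((new_from, new_to, new_knumh))
    return sorted(result)

# Record 0071519869 created valid 01/01/2006 - 12/31/9999 (the KONH span).
a_table = [(date(2006, 1, 1), date(9999, 12, 31), "0071519869")]
# A later record 1032266853 is created inside that period ...
a_table = split_validity(a_table, date(2007, 3, 13), date(2007, 12, 31), "1032266853")
# ... and 0071519869 now appears twice, before and after the new record.
```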
Thanks for putting so much thought into this, but I still fail to see the advantage of maintaining the original dates in KONH if the new dates are prior to the original date (in the case of valid-to). We always have the change log to determine the original dates with which the record was created.
Thanks for all the responses.
The duplicate KNUMH is causing the issue for us:
A583: ( This is for a Z condition type ZCTR)
MANDT KAPPL KSCHL VKORGAU MATNR DATBI DATAB KNUMH
028 V ZCTR 0006 000000000000031886 03/31/2006 01/01/2006 0071519869
028 V ZCTR 0006 000000000000031886 06/30/2008 03/02/2008 0071519869
As you can see, two records have the same KNUMH. We came across this issue while trying to run archiving for condition records.
Because of the KNUMH duplication (which is wrong), the program does not pick up the 2006 record: it compares the 2008 record against the residency date and therefore skips the older one too. We tried changing the validity dates of the 2008 record to an older date so that both fall within the residency period. This is how the records look now:
028 V ZCTR 0006 000000000000031886 06/30/2005 03/02/2005 0071519869
028 V ZCTR 0006 000000000000031886 03/31/2006 01/01/2006 0071519869
KONH
Cond.record no. Usage Table Application Condition type Valid From Valid To
0071519869 A 583 V ZCTR 03/02/2005 12/31/9999
Once we did this the archiving program was able to pick this record up.
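The skip behavior described above can be sketched roughly like this (illustrative Python, not the actual archiving program; the exact residency logic is an assumption based on the observation in this thread):

```python
# Hedged sketch (not the actual SAP archiving program): if the residency
# check is keyed per KNUMH and driven by the record's latest valid-to
# date, a duplicated KNUMH whose newest interval is still within the
# residency period drags the older interval along with it.
from datetime import date

def archivable_knumhs(a_entries, residency_cutoff):
    """a_entries: (knumh, valid_to) pairs from the A* table. A KNUMH
    qualifies only if its *latest* valid-to lies before the cutoff."""
    latest = {}
    for knumh, valid_to in a_entries:
        latest[knumh] = max(latest.get(knumh, date.min), valid_to)
    return {k for k, v in latest.items() if v < residency_cutoff}

# With the original data, the 2008 interval keeps the shared KNUMH alive:
before = archivable_knumhs([("0071519869", date(2006, 3, 31)),
                            ("0071519869", date(2008, 6, 30))],
                           date(2008, 1, 1))
# After moving the 2008 record back to 2005, both intervals qualify:
after = archivable_knumhs([("0071519869", date(2005, 6, 30)),
                           ("0071519869", date(2006, 3, 31))],
                          date(2008, 1, 1))
```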
Has anyone faced this issue before? Is this a known SAP issue? I could not find any OSS note for this.
Edited by: harikrishnan balan on May 28, 2009 11:03 PM
Similar Messages
-
I am using VeriStand 2014, Scan Engine, and the EtherCAT Custom Device. I have not had this error before, but when I tried to deploy (run) my System Definition file to the target (cRIO-9024 with 6 modules), it failed. It wouldn't even try to communicate with the target; I get the 'connection refused' error.
I created a new VeriStand project.
I added the Scan Engine and EtherCAT custom device.
I changed the IP address and auto-detected my modules.
I noticed that VeriStand didn't find one of my modules that was there earlier this week.
So I went to NI MAX to make sure the software was installed, and even reinstalled Scan Engine and VeriStand just to make sure.
Now it finds the module, but when I go to deploy, it gets to the last step of deploying the code to the target and then fails.
Any thoughts?
Start Date: 4/10/2015 11:48 AM
• Loading System Definition file: C:\Users\Public\Documents\National Instruments\NI VeriStand 2014\Projects\testChassis\testChassis.nivssdf
• Initializing TCP subsystem...
• Starting TCP Loops...
• Connection established with target Controller.
• Preparing to synchronize with targets...
• Querying the active System Definition file from the targets...
• Stopping TCP loops.
Waiting for TCP loops to shut down...
• TCP loops shut down successfully.
• Unloading System Definition file...
• Connection with target Controller has been lost.
• Start Date: 4/10/2015 11:48 AM
• Loading System Definition file: C:\Users\Public\Documents\National Instruments\NI VeriStand 2014\Projects\testChassis\testChassis.nivssdf
• Preparing to deploy the System Definition to the targets...
• Compiling the System Definition file...
• Initializing TCP subsystem...
• Starting TCP Loops...
• Connection established with target Controller.
• Sending reset command to all targets...
• Preparing to deploy files to the targets...
• Starting download for target Controller...
• Opening FTP session to IP 10.12.0.48...
• Processing Action on Deploy VIs...
• Setting target scan rate to 10000 (uSec)... Done.
• Gathering target dependency files...
• Downloading testChassis.nivssdf [92 kB] (file 1 of 4)
• Downloading testChassis_Controller.nivsdat [204 kB] (file 2 of 4)
• Downloading CalibrationData.nivscal [0 kB] (file 3 of 4)
• Downloading testChassis_Controller.nivsparam [0 kB] (file 4 of 4)
• Closing FTP session...
• Files successfully deployed to the targets.
• Starting deployment group 1...
The VeriStand Gateway encountered an error while deploying the System Definition file.
Details:
Error -66212 occurred at Project Window.lvlib:Project Window.vi >> Project Window.lvlib:Command Loop.vi >> NI_VS Workspace ExecutionAPI.lvlib:NI VeriStand - Connect to System.vi
Possible reason(s):
LabVIEW: The data type of the reference does not match the data type of the variable.
=========================
NI VeriStand: NI VeriStand Engine.lvlib:VeriStand Engine Wrapper (RT).vi >> NI VeriStand Engine.lvlib:VeriStand Engine.vi >> NI VeriStand Engine.lvlib:VeriStand Engine State Machine.vi >> NI VeriStand Engine.lvlib:Initialize Inline Custom Devices.vi >> Custom Devices Storage.lvlib:Initialize Device (HW Interface).vi
• Sending reset command to all targets...
• Stopping TCP loops.
Waiting for TCP loops to shut down...
• TCP loops shut down successfully.
• Unloading System Definition file...
• Connection with target Controller has been lost.
Can you deploy if you only have the two 9401 modules in the chassis (no other modules) and in the sysdef? I meant to ask if you could attach your system definition file to the forum post so we can see it as well (sorry for the confusion).
Are you using any of the specialty configurations for the 9401 modules? (ex: counter, PWM, quadrature, etc)
You will probably want to post this on the support page for the Scan Engine/EtherCAT Custom Device: https://decibel.ni.com/content/thread/8671
Custom devices aren't officially supported by NI, so technical questions and issues are handled on the above page.
Kevin W.
Applications Engineer
National Instruments -
Data warehouse Loader did not write the data
Hi,
I need to know which products are the most searched. I know the tables responsible for storing this information are ARF_QUERY and ARF_QUESTION. I already have the Data Warehouse Loader module running; if anyone knows why the data warehouse loader did not write the data to the database, I would appreciate the help.
Thanks.
I have configured the DataWarehouse Loader and its components, and I have enabled the logging mechanism.
I can manually pass the log files into the queue and then populate the data into the Data Warehouse database through scheduling.
The log file data is supposed to be populated into this queue through JMS message processing, and this should be automated, but I am unable to configure it.
Which method is responsible for adding the log file data to the loader queue, and how can this be automated? -
I have the following data flow:
ADO Source
Input Column: Cookies ID: 7371 Datatype: DT_NUMERIC
Output Column: Cookies ID: 7391 Datatype: DT_NUMERIC
DATA CONVERSION TASK
Input Column: Cookies ID: 1311 Datatype: DT_NUMERIC
Output Column: Copy of Cookies ID: 1444 Datatype: DT_I4
SQL Server Destination
Input Column: Copy of Cookies ID: 8733 Datatype: DT_I4
Output Column: Cookies ID: 8323 Datatype: DT_I4
This looks fine to me. Why am I getting the error?
This is SQL Server 2008, and I am working at this point only in BIDS, so it is not a question of dev vs. prod server or anything: one environment, running in BIDS. Other similar data flows seem to be working OK. The error seems to refer to a datatype mismatch in the source, but how can that be?
Thanks!
Actually, I am wrong in that, Visakh. I think you are correct.
There are two versions of all tables, one with 15 minute rollups and one with daily rollups. Otherwise the tables are the same--same exact fields. I use a loop with a data flow inside to get data from one version of the rollup and then the other.
The problem is, for some of the fields with larger values the datatype is NUMERIC instead of INTEGER. (This is Cache database). SO:
dailyCookies: Field: CountOne Datatype: NUMERIC
15minCookies: Field: CountOne Datatype: INTEGER
A variable dynamically creates the query, appending "daily" or "15min" to the beginning of the table names. So on the first loop it reads the 15min tables and on the second the daily tables. When I created this particular data flow I had to plug in a default query, so I used the daily one. It picked up the datatype as NUMERIC, so when I run the package it loops through 15min first and sees that the datatype is INTEGER.
How do I deal with this? I suppose I could convert the datatype in the source query, but it would be a hassle to do that for some fields and not others; this table has hundreds of fields, BTW. Can one source object deal with a change of datatypes? SSIS is so picky about datatypes compared to other tools...
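One possible workaround (a sketch with hypothetical table and column names, not necessarily the canonical SSIS answer) is to have the variable that builds the dynamic query CAST the mismatched columns, so the source component sees identical metadata, e.g. always an integer, on every loop pass:

```python
# Sketch of building the dynamic source query so both loop iterations
# expose the same datatypes. Table/column names here are hypothetical.
MISMATCHED = {"CountOne"}   # NUMERIC in the daily table, INTEGER in the 15min one
COLUMNS = ["CookieID", "CountOne", "SampleTime"]

def build_query(prefix):
    """prefix is '15min' or 'daily'; mismatched columns get an explicit CAST."""
    select_list = ", ".join(
        f"CAST({c} AS INT) AS {c}" if c in MISMATCHED else c
        for c in COLUMNS)
    return f"SELECT {select_list} FROM {prefix}Cookies"

print(build_query("15min"))
print(build_query("daily"))
```

Because the CAST appears in both generated queries, the SSIS source metadata stays stable regardless of which table the loop is on.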
Thanks, -
NFe Error: Valid. error: CT-e ID does not match the format of tax authorities
Good Afternoon,
We are facing an issue when sending NFe layout 3.10 from SAP ECC to SAP GRC 10.0. We are receiving the following error regarding CTe, although we are only sending NFe:
One thing to note: once we receive this error, we use "cancel the rejected NFe" and then resend the NFe successfully.
Could anyone please advise why the CTe error mentioned above is occurring?
Thanks in advance.
Thank you very much, José. We will apply the note in the test environment, but not in production, because it suddenly started working again.
For the cases that had not worked, I followed the procedure described by Glauco: cancel and then resend. The system advises simply resending, but that does not work.
We decided to apply the note in production only if this problem occurs again, because we want the chance to debug the NFe creation program in GRC and understand what is happening.
For me it is unacceptable that a program that works well suddenly stops and then starts working again, given that the records being processed have the same characteristics and no configuration was changed.
SAP should explain the whys better.
Thanks for the help once again!
Regards,
Fernanda -
Job Cancelled with an error "Data does not match the job def: Job terminat"
Dear Friends,
The following job is with respect to an inbound interface that transfers data into SAP.
The file mist.txt is picked up from the /FI/in directory of the application server and moved to the /FI/work directory for processing. Once the program ends without any error, the file is moved to the /FI/archive directory.
The steps listed in the job log are below. No spool is generated for this job, and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
1.Job Started
2.Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)
3.File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
4.File mist.txt deleted from /data/sap/ARD/interface/FI/in/.
5.File mist.txt read from /data/sap/ARD/interface/FI/work/.
6.PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number : 076)
7.Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class : BD, Message No. 078)
8.Job cancelled after system exception
ERROR_MESSAGE
Could you please analyse under what circumstances the above error is reported?
I have also heard that the error can be raised because of customization issues in transaction BMV0.
Please note that we can also define and schedule jobs from that transaction, and the corresponding data is stored in table TBICU.
My Trials
1. Tested uploading an empty file
2. Tested uploading with wrong data
3. Tested uploading improper data with a false file structure
But I failed to simulate the above scenario.
Clarification Required
Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM37/SM36 for scheduling?
Is the above question valid?
Edited by: dharmendra gali on Jan 28, 2008 6:06 AM
Dear Friends,
Urgent: Please work on this ASAP.
-
Regarding "Data does not match the job definition; job terminated"
Dear Friends,
The following job is with respect to an inbound interface that transfers data into SAP.
The file mist.txt is picked up from the /FI/in directory of the application server and moved to the /FI/work directory for processing. Once the program ends without any error, the file is moved to the /FI/archive directory.
The steps listed in the job log are below. No spool is generated for this job, and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
1.Job Started
2.Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)
3.File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
4.File mist.txt deleted from /data/sap/ARD/interface/FI/in/.
5.File mist.txt read from /data/sap/ARD/interface/FI/work/.
6.PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number : 076)
7.Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class : BD, Message No. 078)
8.Job cancelled after system exception
ERROR_MESSAGE
Could you please analyse under what circumstances the above error is reported?
I have also heard that the error can be raised because of customization issues in transaction BMV0.
Please note that we can also define and schedule jobs from that transaction, and the corresponding data is stored in table TBICU.
My Trials
1. Tested uploading an empty file
2. Tested uploading with wrong data
3. Tested uploading improper data with a false file structure
But I failed to simulate the above scenario.
Clarification Required
Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM37/SM36 for scheduling?
Is the above question valid?
Hi dharmendra,
Good day! How are you?
I am facing the same problem which you have posted.
By any chance, have you got the solution for this?
If so, please let me know.
Thanks in advance.
Cheers
Vallabhaneni -
Getting error as Buffer table not up to date while checking EBP order
Hi SRM gurus,
We have one order which was created on the EBP side and replicated in the backend. Confirmations and invoices have been posted for this local PO and carried over to the backend as well. However, very recently, when we open this order in EBP and click either the Check or the Change button, we get the error "Buffer table not up to date"; I have pasted the error message below. We have also checked ST22 and observed that the exception CX_BBP_PD_ABORT occurred and was not caught anywhere. We have no issues with other orders, and we checked for recent changes to the order but could not find any.
Note
The following error text was processed in the system : Buffer table not up to date
The error occurred on the application server bprapz36_SP1_08 and in the work process 0 .
The termination type was: RABAX_STATE
The ABAP call stack was:
Function: BBP_PD_ABORT of program SAPLBBP_PDH
Form: ABORT of program SAPLBBP_PD
Form: CHECK_VENDOR_ERS of program SAPLBBP_PD
Form: HEADER_CROSS_CHECKS of program SAPLBBP_PD
Form: PROCDOC_DB_CHECK of program SAPLBBP_PD
Form: PROCDOC_CHECK of program SAPLBBP_PD
Function: BBP_PROCDOC_CHECK of program SAPLBBP_PD
Function: BBP_PD_PO_CHECK of program SAPLBBP_PD_PO
Form: CHECK_PO of program SAPLBBP_PO_APP
Form: PROCESS_EVENT of program SAPLBBP_PO_APP
In BBP_PD the status of this PO is as below.
Status Description Inactive
HEADER I1015 Awaiting Approval X
HEADER I1021 Created
HEADER I1038 Complete
HEADER I1043 Ordered
HEADER I1080 In Transfer to Execution Syst. X
HEADER I1120 Change was Transmitted
HEADER I1180 Document Completed
0000000001 I1021 Created
0000000002 I1021 Created
Please suggest / advice on this.
Thanks & Regards
Psamp1
Dear Poster,
As no response has been provided to the thread in some time, I must assume the issue is resolved. If the question is still valid, please create a new thread rephrasing the query and providing as much data as possible to promote a response from the community.
Best Regards,
SDN SRM Moderation Team -
Background Job cancelling with error Data does not match the job definition
Dear Team,
The background job is getting cancelled when I run it periodically, but the same job executes perfectly when I run it manually (repeat scheduling).
Let me describe the problem clearly.
We have a program which picks up files from an FTP server and posts the documents into SAP. We schedule this program as a daily background job. The job runs perfectly if the files contain no data, but if a file contains data, the job is cancelled with the following messages.
Also, the same job executes perfectly when repeat scheduling is done (even for files with data).
Time Message text Message class Message no. Message type
03:46:08 Job PREPAID_OCT_APPS2_11: Data does not match the job definition; job terminated BD 078 E
03:46:08 Job cancelled after system exception ERROR_MESSAGE 00 564 A
Please help me in resolving this issue.
Thanks in advance,
Sai.
Hi,
If any GUI function modules are used in the program of the job, you cannot run it in background mode.
hi all,
While creating a contact I am getting the error "Buffer table not up to date". What does it mean? I am working in SRM 5.0, classic scenario...
Thanks&Regards,
Hari...
Hi Sridhar,
This issue generally comes up when the tables that store these entries get full and do not allow any further postings.
In most cases this will be the issue; for your case I cannot say it is the exact cause, but this may work.
At this point I don't remember the exact transaction for the refresh, but ask your BASIS team to refresh the buffer tables; they will know the procedure.
Hope this resolves your issue.
Rgds,
Teja -
Buffering table not up to date error message when creating a Cart
Hi Folks,
We are getting a 'Buffering table not up to date' error message when attempting to create a cart. The error only happens for one end-user ID; the others do not get this error, suggesting that my SRM org plan set-up is correct.
Has anyone come across this previously, and what checks are available in the system to resolve it? As mentioned, the attribute check is okay, and I have also removed the user ID from the SRM org plan and reassigned it again, but this has not corrected the problem. We are on SRM 5.
Thanks. Mike.
Message:
Buffering table not up to date
Method: GET_STRUCTURE_PATHS_UP of program CL_BBP_ES_EMPLOYEE_MYS========CP
Method: IF_BBP_ES_EMPLOYEE~GET_RL_UNIT_IDS of program CL_BBP_ES_EMPLOYEE_MYS========CP
Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS_IDS of program CL_BBP_ES_EMPLOYEE_MYS========CP
Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS_ID of program CL_BBP_ES_EMPLOYEE_MYS========CP
Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS of program CL_BBP_ES_EMPLOYEE_MYS========CP
Form: USER_DETAIL_GET of program SAPLBBP_SC_APP
Form: GLOBAL_FILL of program SAPLBBP_SC_APP
Form: SC_INIT of program SAPLBBP_SC_APP
Function: BBP_SC_APP_EVENT_DISPATCHER of program SAPLBBP_SC_APP
Form: APP_EVENT_HANDLER of program SAPLBBP_SC_UI_ITS
Edited by: Mike Pallister on Nov 5, 2008 11:44 AM
Please advise on this problem. When I try to check the Approval Overview tab for these two shopping carts, I get a dump. Can anyone help me?
Information on where terminated
Termination occurred in the ABAP program "CL_BBP_ES_EMPLOYEE_MYS========CP" in "IF_BBP_ES_EMPLOYEE~GET_RL_UNIT_IDS".
The main program was "SAPMHTTP".
In the source code, the termination point is in line 35 of the (Include) program "CL_BBP_ES_EMPLOYEE_MYS========CM008".
The termination was caused because exception "CX_BBP_ES_INTERNAL_ERROR" occurred in procedure "/SAPSRM/IF_PDO_DO_APV_EXT~GET_AGENT_DETAILS" "(METHOD)", but it was neither handled locally nor declared in the RAISING clause of its signature.
The procedure is in program "/SAPSRM/CL_PDO_DO_APV_EXT=====CP"; its source code begins in line 1 of the (Include) program "/SAPSRM/CL_PDO_DO_APV_EXT=====CM00E". -
Extended Classic Scenario - SHC: Buffer table not up to date
Hello all,
I'm working in SRM_SERVER 550, SAPKIBKT11, and having issues using BACKEND PURCHASING ORGANISATIONS and the Extended Classic Scenario.
According to SAP note 944918 the following indicator should not be required to be set:
Supplier Relationship Management>SRM Server>Cross-Application Basic Settings>Activate Extended Classic Scenario --> extended classic scenario active.
I did set the Backend Purch. Grps Responsible indicator, and I did create organizational units specifically for the backend purch. grps.
We need to set the main part of all SHCs to ECS, but when I set the indicator in customizing to "extended classic scenario = active", I get a dump in my web environment: 'Buffer table not up to date'.
What is causing this failure in my SHC?
I tried to implement BAdI "BBP_EXTLOCALPO_BADI" (method DETERMINE_EXTPO) as described below. Without the line bbp_extpo_gl-bbpexpo = 'X'. I can continue and create the SHCs; the problem is that all SHCs then get marked as Classic Scenario. So, to make sure the SHCs always get marked as ECS, I added this line in the BAdI. Unfortunately, this immediately results in the 'Buffer table not up to date' error in the SHC itself as soon as I try to open the details of the new item.
I hope you can help me out here. It doesn't seem to be related to the BAdI, but somehow the system doesn't allow me to mark SHC items as ECS.
Thanks & Regards,
Berend Oosterhoff
SRM Consultant Accenture Technology Solutions - The Netherlands.
BAdI BBP_EXTLOCALPO_BADI:
method IF_EX_BBP_EXTLOCALPO_BADI~DETERMINE_EXTPO.
* data definition ---------------------------------------------------*
  DATA: wa_mattype    TYPE BAPIMATDOA,
        wa_char18     TYPE MATNR,
        attrib_tab    TYPE TABLE OF bbp_attributes,
        wa_attrib_tab TYPE bbp_attributes,
        wa_value      TYPE om_attrval,
        wa_product_id TYPE comt_product_id.

* force Extended Classic Scenario
  bbp_extpo_gl-bbpexpo = 'X'.
From here I did the specific selection for the SC, but that's not relevant here.
Hi Prashant,
Thanks for your quick reply!
Note 1085700 is about Short Dumps when creating or changing a contract. I am trying to create a Shopping Cart, without a reference to a contract. BAdI BBP_DOC_CHANGE_BADI is not active in our system.
Any other thoughts?
Regards,
Berend -
BSP error when clicking on line item in SUS :Buffer table not up to date
Hi Experts,
I'm having a problem in the SUS Portal. When I click on a line item of a PO to display the actual line item or see more details, I get a "Buffer table not up to date" error. I saw a thread with a similar issue where it says the problem was resolved, but I haven't had any luck getting a response from the poster, so I'm putting this question out to everyone.
Related Post
Buffer table not up to date in SUS
More details on error...
Exception Class CX_BBP_PD_ABORT
Error Name
Program SAPLBBP_PDH
Include LBBP_PDHU08
Line 81
Long text Buffer table not up-to-date {}
Regards,
JD
Edited by: julian.k. drummond on Apr 13, 2010 6:46 PM
Hi Julian,
Please give the system some time and try again. It should work.
Thanks
Hari -
Buffer Table Not Up To Date Short Dump When Publishing Change Version of Bid Invitation
Hi experts,
We are running SRM 5.5 with Support Level 16.
We have a bid invitation for which a user is trying to publish a Change Version. This bid contains two bid outlines, with one line item in each bid outline:
ITEM 1: HIER (Bid Outline)
ITEM 2: Line Item
ITEM 3: HIER (Bid Outline)
ITEM 4: Line Item
However, upon clicking on the Check or Publish button, the system throws a CX_BBP_PD_ABORT exception short dump. The reason for the exception given in ST22 is "Buffer table not up to date".
Due to organisational system security restrictions, I am unable to copy-paste the dump from ST22, so I will be typing out some of the details here.
Transaction: BBP_BID_INV
Program: SAPLBBP_PDH
Screen: SAPLBBP_BID_INV_1000
Termination occurred in the ABAP program "SAPLBBP_PDH" - in "BBP_PD_ABORT". The main program was "SAPLBBP_BID_INV".
In the source code you have the termination point in line 73.
Active Calls/Events
(formatted below as "No: Type: Name")
24: Function: BBP_PD_ABORT
23: Form: ABORT
22: Form: ITMADM_UPDATE
21: Form: ITMADM_MAINTAIN_SINGLE
20: Form: ITEM_F_CHECK_FROM_WTAB
19: Form: ITEMLIST_F_CHECK
18: Function: BBP_ITEMLIST_CHECK
17: Form: PROCDOC_DB_CHECK
16: Form: PROCDOC_CHECK
15: Function: BBP_PROCDOC_CHECK
14: Form: SSIS_DOCUMENT_CHECK_COMPLETE
13: Form: STATUS_SET_AND_INTERNAL_SAVE
12: Form: PROCDOC_UPDATE
11: Function: BBP_PROCDOC_UPDATE
10: Form: CHANGE_VERSION_UPDATES_ACTIVE
9: Function: BBP_PDCV_UPDATE_ACTIVE
8: Form: PROCDOC_CHECK
7: Function: BBP_PROCDOC_CHECK
6: Function: BBP_PD_BID_CHECK
5: Form: FCODE_DOCUMENT_CHECK
4: Form: FCODE
3: Function: BBP_BID_PROCESS
2: Form: PROCESS
1: Module (PAI): PROCESS
The ABORT subroutine is being called in Step 22 (ITMADM_UPDATE), at the following line:
* parent guids have to be identical ...
if gt_itmadm-parent <> w_itmadm-parent.
  perform abort.
endif.
Upon tracing the variable values in the ST22 dump, it appears the program terminates when processing line 3 (i.e. the second Bid Outline). It appears that the first Bid Outline processes fine.
In the iteration for Line 3, the dump provides that gt_itmadm-parent contains the GUID to the Change Version document header, whereas w_itmadm-parent contains the GUID to the original Bid Invitation document header, resulting in a mismatch and subsequently program termination.
We have tried to replicate this bid invitation in our Development environment, but when executing, both gt_itmadm-parent and w_itmadm-parent contain GUIDs to the Change Version document header, hence the program does not abort.
Any suggestions would be greatly appreciated.
Thanks!
Best regards,
Kwong How
Hi Janu,
For Note 1561750, I have looked at the Correction Details and observed that it makes changes to an IF-ELSE block, but will result in the same path of code execution as before. I.e., the Note changes the code from "IF a THEN b, ELSE c" to just "c", but according to my code trace, we are already executing the "ELSE c" block in its current state.
We will look further into Note 1877600. The unfortunate situation here is that since we are unable to replicate this issue in our Development/QA environment, and the Note description does not explicitly fit the situation we are facing, we need to be very sure that the Note can address the issue before bringing it into Production.
Otherwise, we will raise a message as suggested.
Thank you so much for your help!
Best regards,
Kwong How -
The SYSVAL table entry for the database version (16) does not match the required version
We upgraded from Tidal Enterprise Scheduler (TES) 5.31 to 6.1 on fresh Windows x64 2008 R2 servers. I did a fresh install, but our DBA restored a copy of our database from pre-prod, which was still at 5.31, and I get the error below. Note that SQL Server went from 2005 to 2012 during this upgrade. Do I need to change a value in this table to reflect the SQL change?
[04/29 12:47:14:198]:TIDAL Enterprise Scheduler: version 6.1.0.133
[04/29 12:47:14:198]:Java version: 1.8.0
[04/29 12:47:14:198]:Java Virtual Machine version: 25.0-b70
[04/29 12:47:14:198]:Start Time : 04/29/14 12:47:14:198
[04/29 12:47:14:198]:----------------------------------------------------------------------------
[04/29 12:47:14:198]:Database URL :jdbc:sqlserver://SQL2012-Host:1433;responseBuffering=adaptive
[04/29 12:47:14:198]:Database Driver :com.microsoft.sqlserver.jdbc.SQLServerDriver
[04/29 12:47:14:198]:Maximum number of log files = 100
[04/29 12:47:14:198]:Added a LogFile called 'RegularFile'
[04/29 12:47:14:198]:LogManager: setting default log
[04/29 12:47:14:214]:Retrieved a LogFile called 'RegularFile'
[04/29 12:47:14:495]:MessageBroker: Instantiated TcpTransportServer (URI = tcp://0.0.0.0:6215)
[04/29 12:47:16:975]:Retrieved a LogFile called 'RegularFile'
[04/29 12:47:17:272]:The SYSVAL table entry for the database version (16) does not match the required version (23). Shutting down.
[04/29 12:47:20:282]:
[04/29 12:47:20:282]:
[04/29 12:47:20:282]:Shutting down the application
I had this error last night while applying a patch to 6.0; in my case, however, the version was 21 versus a required version of 23.
As far as I know, 6.1 is not compatible with the version 5 DB schema, so you will also need an update/migration plan for the database.
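The reply's point can be sketched as a simple version guard (illustrative Python; SYSVAL's actual layout and Tidal's startup code are vendor internals, so treat these details as assumptions). The fix is to run the vendor's database upgrade/migration so the stored version reaches 23, not to edit the SYSVAL value by hand:

```python
# Illustrative sketch of the startup guard the log describes: the master
# reads the schema version stored in SYSVAL and refuses to start when it
# differs from what the installed release requires. (Hypothetical code,
# not Tidal's; the supported fix is a schema migration.)
REQUIRED_SCHEMA_VERSION = 23    # what TES 6.1 expects, per the log

def check_schema(found_version, required=REQUIRED_SCHEMA_VERSION):
    if found_version != required:
        raise RuntimeError(
            f"The SYSVAL table entry for the database version ({found_version}) "
            f"does not match the required version ({required}). Shutting down.")
    return True

check_schema(23)                # a migrated 6.1 schema starts normally
try:
    check_schema(16)            # a restored 5.31 schema fails as in the log
except RuntimeError as e:
    print(e)
```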