Orphaned records in VETVG

Hi Guys/Experts,
We have an issue in our environment where there are orphaned documents in VETVG. When the purchase order is viewed in ME23 it cannot be found, but when it is processed using VL04 it still shows up. I assume these entries are all garbage.
I have seen OSS Note 61148, which describes a program that can clean up VETVG. The problem with the program is that it checks EKPV right away. This is the part of the program that checks EKPV:
CHECK i_ebeln NE space.
SELECT * FROM ekpv APPENDING TABLE xekpv WHERE ebeln EQ i_ebeln.
CHECK sy-subrc EQ 0.
SORT xekpv BY ebeln ebelp.
Since the document could not be found in EKPV, the program never reaches the statement that deletes the records from VETVG. Here's the part of the program that deletes the actual record from VETVG:
WRITE: / 'old records of table VETVG'.
ULINE.
SELECT * FROM vetvg WHERE vbeln EQ i_ebeln.
  WRITE: / vetvg-vstel,
         vetvg-ledat,
         vetvg-lprio,
         vetvg-route,
         vetvg-spdnr,
         vetvg-wadat,
         vetvg-kunwe,
         vetvg-vbeln,
         vetvg-vkorg,
         vetvg-vtweg,
         vetvg-spart,
         vetvg-auart,
         vetvg-reswk.
  IF db_upd NE space.
    DELETE vetvg.
  ENDIF.
ENDSELECT.
Please let me know if you've encountered the same problem and how you got rid of the orphaned VETVG records.
I'm looking forward to your responses. Thanks and have a good day!
- Carlo
Edited by: Carlo Carandang on Sep 7, 2010 8:41 PM

There are many notes regarding VETVG in OSS. Before you execute reports to clean up the situation, you should try to find the root cause and an OSS note that fixes that root cause.
It sounds really strange to me to have records in VETVG that do not have a corresponding record in EKPV.
EKPV holds the shipping data of a purchase order; without shipping data there is usually no shipping.
So the situation must be that the purchase order was initially a stock transfer and then somehow was changed to no longer be a stock transfer (e.g. by changing the document type or other fields), which may have caused the removal from EKPV, while a program error left the entry in VETVG behind.
I would start by checking a purchase order that is referenced in VETVG: look up the change history at header and item level, check whether it is a PO for a stock transfer or something different, and check whether the PO item has a shipping tab.
Then I would check OSS notes.
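If, after the root-cause analysis, you still need to remove VETVG entries that have no EKPV counterpart, the idea of the note's report can be turned around so that the check is keyed on VETVG instead of EKPV. The following is only a hypothetical sketch (the report name, selection screen and test-mode flag are my assumptions, not the note's code); as with the note's report, run it in test mode first and only in coordination with SAP support:

```abap
REPORT zvetvg_orphan_cleanup.

PARAMETERS: p_ebeln TYPE vetvg-vbeln OBLIGATORY,
            p_upd   AS CHECKBOX DEFAULT space.  "blank = test mode

DATA: ls_vetvg TYPE vetvg,
      lv_ebeln TYPE ekpv-ebeln.

SELECT * FROM vetvg INTO ls_vetvg WHERE vbeln EQ p_ebeln.
  "Only touch delivery-due index entries that really have
  "no shipping data left in EKPV
  SELECT SINGLE ebeln FROM ekpv INTO lv_ebeln
         WHERE ebeln EQ ls_vetvg-vbeln.
  IF sy-subrc NE 0.
    WRITE: / 'orphaned:', ls_vetvg-vstel, ls_vetvg-ledat,
             ls_vetvg-vbeln.
    IF p_upd NE space.
      DELETE vetvg FROM ls_vetvg.
    ENDIF.
  ENDIF.
ENDSELECT.
```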

Similar Messages

  • How to get orphan records in Wf_notifications table and how to purge records in WF notifications out table

    Hi Team,
    1) I need a query to get the orphan records in the WF_NOTIFICATIONS table so I can purge them.
    2) In the same way, I need to purge unwanted records in the WF_NOTIFICATION_OUT table. Can you please let me know the process to identify unwanted records in WF_NOTIFICATION_OUT and the process to purge them.
    Thanks in Advance!
    Thanks,
    Rakeesh

    Hi Rakeesh,
    Please review notes:
    11i-12 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance [Video] (Doc ID 1369938.1)
    bde_wf_data.sql - Query Workflow Runtime Data That Is Eligible For Purging (Doc ID 165316.1)
    What Tables Does the Workflow Purge Obsolete Data Program (FNDWFPR) Touch? (Doc ID 559996.1)
    Workflow Purge Data Collection Script (Doc ID 750497.1)
    FAQ on Purging Oracle Workflow Data (Doc ID 277124.1)
    Thanks &
    Best Regards,

  • Catalog Orphaned Records

    Hi All,
    We are running SRM 5.5. with CCM 2.0 and have the following issue.
    When we run the program /CCM/CLEANUP_MAPPING with no commit, it finds many orphaned records in our Master Catalog and Procurement Catalog.
    Although these errors do not appear to be causing any problems for our user community, I would nevertheless prefer to remove them.
    When we run the program outside of normal working hours with commit on, to try and remove the errors, we get the error message:
    Exception occured
    ^ Error during catalog update 4400000047; catalog is locked by CATALOG_ENRICHMENT/491035F93F6E67C7E10000000A003214
    Has anyone any ideas on how to resolve the orphaned records problem or what to do to remove the lock?
    Thanks in advance
    Allen Brooks
    SRM BPO
    Sunderland City Council

    Hi Allen,
    I would rather advice to open a new message by SAP to solve this issue!
    As you can see at the top of the report /CCM/CLEANUP_MAPPING:
    Internal Program!!!
    To Be Executed by SAP Support Team Only
    Due to you can damage your whole catalogs with this report!
    However, the problem seems to be that you uploaded a new catalog while the enrichment was still running, or the enrichment got stuck. You can check this in table /CCM/D_CTLG_REQ: when you see an entry in the REQ_SENDER column, that catalog is locked and has to be cleaned up by Support!
    Regards,
    Tamás
    SAP

  • Deleting orphan records.

    I happen to be in a situation where I get orphaned records whenever I do a dimension build. I have tried using a load rule with the "remove unspecified" option, but all the children of the members I want to keep get deleted. As an example:
    Member1
    -Child 1 of member 1
    - Child 1 of child 1
    -Child 2 of member 1
    Member2
    Orphan1
    Orphan2
    If I create a load rule with a data file containing Member1 and Member2 and use the "remove unspecified" option, Orphan1 and Orphan2 get deleted as required, but all the children of Member1 also get deleted. The question is: is there any way I can delete the orphaned records without deleting the children of the members I want to keep? I'm an Essbase newbie, so any help will be appreciated.

    In order to use the "Remove Unspecified" option, you need to be sure you have specified all the records you want to keep.
    Can you somehow generate a file with all the children you need to keep as well?
    Robert

  • Orphan record owb 10gr2

    Dear all, I urgently need to solve a problem in OWB 10gR2; I use this version because the OS is a zSeries mainframe, so only 10gR2 is available.
    My problem is as follows:
    1 - I am using the cube object to look up the surrogate keys (SKs) in the dimension tables and attach them to each incoming record.
    2 - For keys that exist in the dimension, the lookup works perfectly. My problem is keys that do not exist in the dimension: when I load the cube object and it does not find the matching record in the dimension, that record is not loaded into the cube.
    Reading about OWB 11g, I saw that it has orphan-record handling. I would like to know how to do this in 10gR2; I will be immensely grateful to everyone.
    A hug, and I look forward to your help.
    Grimaldo Oliveira

    What version specifically are you using?

  • How to update child record when item from parent record changes

    Hi, I have a master and detail form with two regions on the same page.
    Region one references the parent record; one of the columns in the parent record is also in the child record, and there is a one-to-many relation between the parent and child records. How can I have the column on the child record updated when the column of the parent record is modified?
    For example: the parent record has two columns, ID and Program.
    The child record has Program, Goal# and Status. I have two pages established, page 27 and page 28.
    Page 27 lists all programs from the parent record; clicking Edit on a program branches to page 28, with the program listed as an editable field in region one and multiple child records with the same program listed in region two.
    The problem I am having is that once the program in region one is modified using ApplyMRU, the program in the child records does not get updated with the new value, and those records therefore become orphan records. I need a way to update all current child records with the new program value.
    Thanks in advance to anyone who can help out with this problem.
    Julie
    Edited by: JulieHP on May 24, 2012 4:57 PM

    One idea:
    If possible, create an AFTER UPDATE database trigger on the parent table to update the relevant child records.
    Another one:
    Create a PL/SQL process on the parent page with its process sequence right after ApplyMRU, so that when the ApplyMRU is successful it goes on to your process, where you can have your UPDATE statement.
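    The trigger idea can be sketched as follows; this is only an illustration with hypothetical table and column names (PARENT, CHILD, PROGRAM), not the poster's actual schema:

    ```sql
    -- Hypothetical tables: PARENT(ID, PROGRAM), CHILD(PROGRAM, GOAL_NO, STATUS)
    CREATE OR REPLACE TRIGGER parent_sync_child
    AFTER UPDATE OF program ON parent
    FOR EACH ROW
    BEGIN
      -- Propagate the renamed program to all child rows
      -- that still reference the old value
      UPDATE child
         SET program = :NEW.program
       WHERE program = :OLD.program;
    END;
    /
    ```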

  • Record locks created when user closes the Browser and not the Web Form

    Hi. We sometimes encounter an issue where a user updates a record, locking it on the table, but then unexpectedly closes the browser without saving by clicking the X in the upper-right of the browser window. When another user then attempts to edit that record, they get the message "Unable to Reserve Record". The orphaned record lock does eventually clear itself out, but that can often take 15-20 minutes.
    Is there any way to speed this up? Or to programmatically keep this from occurring? Either on the database side or with some code in a particular application?
    Please let me know your thoughts. Thanks in advance.

    If a user closes the browser window, the forms runtime on the application server holding the locks is in most cases still up and running. The FORMS_TIMEOUT setting controls how long a forms runtime stays up on the server without the client applet sending a heartbeat (see MOS note 549735.1). By default this is 15 minutes, which would explain your locks being held for 15 minutes.
    You could decrease the FORMS_TIMEOUT in default.env, so the forms runtimes get cleaned up earlier and thus the locks get released earlier.
    Note that if you have blocking CLIENT_HOST calls with webutil, this might be a problem: they prevent the forms applet from sending the heartbeat, and once FORMS_TIMEOUT passes while the applet is blocked, the forms runtime on the server gets closed.
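    For example, lowering the timeout to 5 minutes (the value here is purely illustrative) would look like this in default.env:

    ```
    # default.env (Forms environment file) - value in minutes
    FORMS_TIMEOUT=5
    ```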
    cheers

  • Orphans

    Hi,
    A nullable foreign key may result in orphaned records, since a null may have no parent. Why is this allowed?
    What conditions require this?
    Regards.

    There are scenarios which require this kind of setup:
    1. Consider the case of a self-referencing relationship, as with an employee and his manager. Typically this is represented by a single table with fields EmpID and MgrID, where MgrID points to EmpID of the same table via a foreign key. In this case you may choose to make the foreign key column MgrID nullable to represent the root-level employee (the CEO), who has no manager to report to. If you wanted to enforce NOT NULL here, you would have to define a value to represent the absence of a manager, like -1/0, which would require a record with that id to be present in the table just to denote the "no manager" condition.
    2. Consider the case where an entity may belong to more than one group, e.g. a school library. Here we have a Member table for library members. A member can be a student, a teacher, or even a non-teaching staff member of the school. So the table has separate columns for StudentID, StaffID and TeacherID (or, if you want, you can collapse teacher and other staff info into one main table), and each column is linked to its master table by a foreign key constraint (StudentID -> Students table id field, StaffID -> Staff table id, etc.). Any one record will then have a value in only one of those fields, i.e. the member is either a student, a teacher, or non-teaching staff. This is a case where you make them nullable.
    So in scenarios like the above you'll always make the FK column nullable.
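    Scenario 1 above can be sketched like this (a minimal illustration; the table and column names are assumptions):

    ```sql
    -- Self-referencing FK: the root employee (CEO) has MgrID = NULL
    CREATE TABLE Employee (
      EmpID INT PRIMARY KEY,
      Name  VARCHAR(100) NOT NULL,
      MgrID INT NULL REFERENCES Employee(EmpID)
    );

    INSERT INTO Employee VALUES (1, 'CEO', NULL); -- no manager
    INSERT INTO Employee VALUES (2, 'Dev', 1);    -- reports to the CEO
    ```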

  • Deleting invalid records

    Our company has been widely using the HFM application for 9 years, and we have never done this activity in our Live environment. I understand it deletes invalid or orphan records left in the application once metadata is changed.
    My question is: does it impact the actual data in any way?
    And would performing this activity, maybe quarterly, also improve performance?
    Let me know your thoughts, please.
    Thanks,
    Ankush

    The program deletes invalid records, meaning members that are no longer in the hierarchy (outline), so it won't cause any issues with data associated with existing outline members.
    Thanks
    Amith

  • Inconsistency between catalog authoring tool & Search engine

    Hi,
    We are on CCM 2.0 and SRM 5.0, with the catalog deployed in the same client as SRM.
    In our implementation we are using a master catalog and a product catalog. We have very recently deleted around 65 items from our product catalog, which is mapped to the master catalog, and hence those 65 items are no longer found in the master catalog in the CAT (Catalog Authoring Tool).
    When this master catalog is published, we expect these 65 products to no longer be available to end users.
    But end users can still see the deleted items through the CSE (Catalog Search Engine). Thus there is a data inconsistency between CAT and CSE.
    Has anyone faced a similar issue before? If yes, can you please share it with us?
    Edited by: Manoj Surve on Mar 24, 2009 7:05 AM

    Run program /CCM/CLEANUP_MAPPING via transaction SE38 to look for errors/orphaned records in test mode.
    If you want to clear out orphaned records, uncheck Test Mode, but:
    Warning: this program can slow down a production system, as it clears out the memory buffers for user catalog searches, and these have to be built up again by the users doing searches. This can take hours to days depending on the size of your system and the number of users.
    In Dev or UAT, no problem.
    Hope this helps?
    Regards
    Allen

  • Supplier sites open interface question

    Hi all,
    Here is what we have done:
    We have a data migration program that looks at a csv file and then brings the data into staging table.
    Then, it inserts the above data into 3 open interface tables (AP_SUPPLIERS_INT, AP_SUPPLIER_SITES_INT, AP_SUP_SITE_CONTACT_INT)
    We are using the vendor_interface_id to link the data.
    For ex.
    Supplier 'ABC' in AP_SUPPLIERS_INT would have vendor_interface_id => 2345
    Supplier Sites that belong to above supplier will have:
    vendor_interface_id => 2345 vendor_site_code => 'ABC-SITE-1'
    vendor_interface_id => 2345 vendor_site_code => 'ABC-SITE-2'
    When we ran the request set [Supplier Open Interface Request Set(1)], the program considered all the records and imported the supplier record and 'ABC-SITE-1'.
    It rejected 'ABC-SITE-2' because of a setup issue.
    After we fixed the setup issue, we ran the request set again; it rejected the supplier record saying "Vendor already exists" => no problem with this.
    But it doesn't attempt to pick up the second site 'ABC-SITE-2', which is now good to be picked up, because the vendor_id never gets updated: it stays as an orphan record.
    Is there any way to work this around (preferably from the application)?
    Thanks
    Vinod

    Hi user567944,
    While submitting to the open interface, the Import Options parameter should be 'All' or 'Rejected'. Please check this.
    Regards, Prasanth

  • Errors, Errors, and More Errors

    I am having massive problems with our Sql Server (v8 sp4)/Websphere (5) application.
    We rolled out a new version of our (previous stable) J2EE application at the beginning of December 2005. Our app uses container-managed transactions, no entity beans.
    We immediately began experiencing blocking processes. KILLing the blocking process almost always resulted in a few widowed and orphaned records.
    I tried tuning the connection/transaction/context. Previously, the app created new connections for almost every function call to every bean. I changed the configuration to share the same connection for all the calls. The problem was reduced slightly.
    I have tried reducing the number of queries in each transaction. No help, either.
    Today, I checked the logs and we are getting a crazy collection of database errors:
    - "Statement is closed"
    - "ResultSet is closed"
    - "Invalid parameter binding(s)" (lots of these!)
    - "No valid transaction context present"
    - "Transaction is ended due to timeout"
    - "Transaction has already failed, aborting operation."
    As well as Java exceptions like:
    - "NullPointerException"
    - "SQLException: DSRA9002E ResourceException with error code null"
    This makes no sense whatsoever. We didn't make massive changes to the code. Our changes focused on some new beans, new JSPs, etc.
    Any ideas?
    Russell T
    Atlanta, GA (USA)

    I apologize for explaining the connection setup badly. We are using connection pooling (through WebSphere).
    What I intended to say was that we were acquiring connections from the Context willy-nilly throughout the code. I created an abstract bean that every session bean extends in order to eliminate redundant code.
    The abstract session bean gets its connection like this:
    Context envCtx = (Context) new InitialContext();
    String dataSourceName = (String) envCtx.lookup("java:comp/env/dataSourceName");
    DataSource dataSource = (DataSource) envCtx.lookup("jdbc/" + dataSourceName);
    connection = dataSource.getConnection();
    The WebSphere admin has even tried additional tuning of the connection pool settings, such as size, timeout, reap time, etc.

  • Multiple instances of the same calendar events using Outlook and Desktop Manager

    BB Curve 8310
    OS 4.5
    Desktop Manager 5.0
    Theme = Dimension Today
     Outlook 2007 “full version”
    Using two POP3 email accounts. One is a personal Gmail account and the other, most importantly, a business email. I do not use Gmail's calendar.
    12 months or so back I would sometimes get duplicate appointments in my BB calendar after a sync, but Outlook showed a single appointment. I had read that it was intermittent with the "Today" theme, and the advice was "do not accept appointment invites in Outlook".
    I forgot and accepted several in the past week. Now almost all my appointments are doubled up on my BB and, again, my Outlook seems fine. I just noticed that I can now select which calendar to "show" on my BB.
    Is this caused by getting an email invite to an appointment?
    1) Does anyone know what is causing this?
    2) Could you suggest a quick fix?
    3) Could you suggest a permanent fix?
     Thanks
    Patrick
    Solved!
    Go to Solution.

    PatS wrote:
    Thanks Bifocals ,,,,, I'm blushing just a bit. I'd put one of those red smiley faces but I would have to post another question to find out how!
    BB Calendar CICAL = personal email
    BB Messaging CMIME = work email
    This is not correct: the default services values need to both reflect the main associated email account.
    They need to both be your work address. This could very well explain the duplicates.
    Outlook has/uses only the work email address, so it has to be the default one - I have long ago deleted my personal one from Outlook.
    Outlook's Calendar Data File shows recurrence (none) = 339; in Outlook 2007 I had to find them a different way, but I think it's what you're looking for.
    My BB shows a "default" = 217, "personal" = 185, and "work" = 309.
    The records in the device default calendar will not synchronize. They are in effect "orphaned" records that at one time belonged to an email address, usually caused by deleting the email account or losing the service books for the email account. We need to remove them.
    Can you tell what email account they were created from? And where they would logically be moved?
    A couple of notes - now that you have had me flipping through different areas I have noticed a few interesting things that maybe would be beneficial. While using Outlook 2003, I used several email addresses. Don't know if Outlook updates to 2007 or installs a new 2007 program using data from 2003. There was a time that I was sending emails in Outlook from my personal account, there was a slight hiccup and every so often it would revert back, hence I simply deleted it.
    I would need to verify it, but I'm pretty sure that the email addresses would have to be installed again.
    I'll split that one with you. I'll take the Blackberry knowledgebase. You take the Microsoft outlook support site
    I'm now wondering if I delete all the calendars on my BB and do the one way sync? That seems simple enough.
    We are going to do just that, to repair what happened. It will not prevent another recurrence.
     You  asked for three things:
    You wanted a reason this happened; I think we found it in the default services setup.
    You wanted a quick fix; it can be done within an hour's work.
    You also wanted a permanent fix. Want to go for all three?
    If you have multiple calendars you will have problems. It's just the nature of the beast.
    It's not an "if " but just a matter of "when" deal!
    Personally I can't find any logic in the concept of multiple calendars.
    People explain they use one for work appointments and one for personal appointments.
    I always want to ask why they don't use two watches one for personal and one for work?
    I do see the need for separating personal and work email accounts.
    Here is I think the best of both worlds answer.
    Leave your work email configured and use the work calendar as you would normally.
    For your personal email install the Gmail for mobile application. You have access to your
    personal and work emails, they are kept separate due to the nature of the Gmail app.
    Here is the cherry on top. The Gmail app doesn't install a calendar.
    So two email addresses one calendar, ain't life grand!
    So here is the plan Stan, let me know what you think.
    1) On the BlackBerry, go to Options > Advanced Options > Default Services, and match both options to your work email.
    2) Decide what needs to happen with the 217 device-default "orphans", and also the 185 personal records. Are they worth keeping? Can they be sent to the work calendar? Do they need to be in Outlook?
    3) Read this article so you can see your options and make the best decision:
    KB19240 - How to merge multiple calendars into one calendar on the BlackBerry smartphone
    Read about the Gmail for mobile app see if you think it will work for you: Gmail mobile app
    If you don't like it or don't think it's a good fit just let me know and we can go a different way.
    Let me know what you want to do,
    Thanks,
    Bifocals

  • Permanently change default error configuration in Analysis Services 2005

    Hi,
    Currently I am working on a BPC 5.1 application. The data for this application is loaded (inserted via SQL statement) directly into the FACT table, and then a full process is run for that cube via an SSIS package using the Analysis Services Processing Task. Records are often loaded this way where a dimension member for some of the records has not yet been added to the Account dimension. After loading, these records are considered 'orphan records' until the accounts are added to the Account dimension.
    This loading process is used because of the volume of records loaded(over 2 million at a time) and the timing of the company's business process.  They will receive data sometimes weeks before the account dimension is updated in BPC with the new dimension members.
    If I try and process the application from the BPC Administration area with these orphan records in the FACT table, the processing stops and an error displays.  Then when I process the cube from Analysis services, an error is displayed telling me that orphan data was found.
    A temporary work-around is to go into the cube properties in Analysis Services 2005, click on Error Configuration, uncheck 'Use default error configuration' and select 'Ignore errors'. Then you can process the application from BPC's Administration page successfully.  But, the problem is that after processing the application successfully, the Analysis Services Error Configuration automatically switches back from 'Ignore errors' to 'Use default error configuration'.
    Does anyone have any suggestions on how to permanently keep the 'Ignore errors' configuration selected so it does not automatically switch back to 'Use default error configuration'?  Prior to BPC 5.0 this was not occurring.
    Also, does anyone know why this was changed in BPC 5.0/5.1?
    Thanks,
    Glenn

    Hi Glenn,
    I understand the problem, but I have to say it was caused by a bad migration of the appset from 4.2 to 5.0.
    Anyway, they are using a DTS package to import data into the fact table. That means they have to add another step to that package to verify the records before inserting them into the fact table. The verifications can be done using the same mechanism as our standard import: just edit that package and add similar steps to the customer's package.
    Attention: you need somebody with experience developing DTS packages for BPC, to avoid other problems.
    One of the big benefits of 5.x compared with 4.2 is the ability to use an optimized schema and aggregations for cubes.
    With those orphan records it is not possible to use the optimized schema, and you cannot create good aggregations in your cube.
    So my idea is to provide all this information to the customer and try to modify that package, instead of enabling that option, which can cause many other issues.
    Sorin

  • Getting DRG-50858: OCI error: OCI_NO_DATA with user_datastore on 9iR2

    Hi there,
    Created a procedure and mapped it to a user_datastore... getting the following error in the ctx error table:
    DRG-12604: execution of user datastore procedure has failed
    DRG-50857: oracle error in drsinopen
    DRG-50858: OCI error: OCI_NO_DATA
    Created the following preferences:
    begin
    -- set the user datastore procedure
    ctx_ddl.create_preference('ccnews_ud', 'user_datastore');
    ctx_ddl.set_attribute('ccnews_ud', 'procedure', 'ctxsys_ccnews_ustore_idx_prc');
    ctx_ddl.set_attribute('ccnews_ud', 'output_type', 'CLOB');
    ctx_ddl.create_preference('english_lexer','basic_lexer');
    ctx_ddl.create_preference('french_lexer','basic_lexer');
    ctx_ddl.create_preference('global_lexer', 'multi_lexer');
    ctx_ddl.add_sub_lexer('global_lexer','default','english_lexer');
    ctx_ddl.add_sub_lexer('global_lexer','ENGLISH','english_lexer','1');
    ctx_ddl.add_sub_lexer('global_lexer','FRENCH','french_lexer','2');
    end;
    Is it possible that the procedure returns an empty CLOB for a row, and that this causes the problem?
    The index is built on a table with 28,000 rows. I have to admit that the data integrity is not up to par (i.e., there could be orphaned records). Could this cause the problem?

    An empty clob or even an orphaned record should not cause the problem. It is usually due to a bug in the procedure. Data corruption could be a cause. Please post a copy and paste of a run of create or replace for your procedure, followed by show errors and a copy and paste of a run of a query that produces the error stack.
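    For reference, an Oracle Text user datastore procedure must have the signature (rid IN ROWID, tlob IN OUT NOCOPY CLOB). A minimal skeleton along those lines is shown below; the source table CCNEWS and its TITLE/BODY columns are hypothetical placeholders for the poster's actual data:

    ```sql
    CREATE OR REPLACE PROCEDURE ctxsys_ccnews_ustore_idx_prc (
      rid  IN ROWID,
      tlob IN OUT NOCOPY CLOB
    ) IS
    BEGIN
      -- Concatenate the columns to be indexed for the given row;
      -- NVL guards against NULL columns breaking WRITEAPPEND.
      FOR r IN (SELECT NVL(title, ' ') AS title, NVL(body, ' ') AS body
                  FROM ccnews
                 WHERE rowid = rid) LOOP
        DBMS_LOB.TRIM(tlob, 0);
        DBMS_LOB.WRITEAPPEND(tlob, LENGTH(r.title), r.title);
        DBMS_LOB.WRITEAPPEND(tlob, LENGTH(r.body), r.body);
      END LOOP;
    END;
    /
    ```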
