Long runtimes while performing CCR

Hello All,
After running the delta report job we found some inconsistencies for stocks. When we try to delete the entries or push them to APO (after performing the iteration), they are neither deleted nor pushed, and the run takes a very long time. We do not see this issue for any elements other than stocks. Could you let me know why this might be happening, and whether there is any way to rectify this stock inconsistency between ECC and APO?
Thanks
Uday

Uday,
I had one experience several years back with long CCR runtimes for Stock elements that might apply to you.
For CCR, you have 6 categories of stocks to check.  If any of these stock category elements is not actually contained in any of your integration models, the CCR search can take a long time searching through ALL integration models trying to find a 'hit'.
There are two possible solutions.  Ensure that you ONLY select CCR stock types that are contained in your CFM1 integration models.  If possible, deselect the CCR stock types that have no actual stocks within the integration models (where such stocks do not actually exist in ECC).  If this does not meet your business requirement, then try performing your CCR ONLY on the integration model(s) that contain the stock entries.  Do not leave the CCR field "Model Name" blank.
With respect to the stock inconsistencies, 'how bad is it'?  It is common to have one or two Stock inconsistencies every day if you have hundreds of thousands of stock elements to keep in sync.  The most common reason I see for excessive stock entries in CCR is improperly coded enhancements.
Best Regards,
DB49

Similar Messages

  • System is giving an ABAP runtime error while performing LT06 or creating a TO with r

    Hi SAP WM Gurus,
    The system gives an ABAP runtime error when we perform LT06 or create a TO with respect to a posting change notice; the runtime analysis details are below.
    Short dump has not been completely stored. It is too big.
    P_MENGA = P_MENGE.
    007940       P_UMREZ = 1.
    007950       P_UMREN = 1.
    Can you give me any idea about this issue?
    Thanks and Regards,
    SHARAN.

    Hi,
    Go to transaction ST22, look up the dump there, and check it.
    Also take the help of your ABAPers to fix it.
    Thanks
    Utsav

  • ABAP runtime error while performing LT06 or creating a TO with respect to posting

    Hi SAP WM Gurus,
    The system gives an ABAP runtime error when we perform LT06 or create a TO with respect to a posting change notice; the runtime analysis details are below.
    >> Short dump has not been completely stored. It is too big.
    >       P_MENGA = P_MENGE.
    007940       P_UMREZ = 1.
    007950       P_UMREN = 1.
    Can you give me any idea about this issue?
    Thanks and Regards,
    SHARAN.

    This part is just the place in the program where the error occurred; why the error occurred is explained earlier in the dump.
    Maybe you have a number that is too large for the field and hence a field overflow, or maybe you have a character instead of a number in the field.
    Read the dump from the beginning. If you don't know how to read a dump, try to get help from a local ABAPer.
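    For illustration, here is a minimal ABAP sketch (hypothetical variable names, not taken from the program in the dump) of the two failure modes described above. Without the TRY/CATCH blocks, the same statements would end in the kinds of short dumps ST22 shows for an overflow (COMPUTE_BCD_OVERFLOW) or a failed character-to-number conversion (CONVT_NO_NUMBER):
    REPORT zst22_overflow_sketch.
    DATA: lv_qty   TYPE p LENGTH 5 DECIMALS 3,
          lv_input TYPE c LENGTH 20.
    TRY.
        " Value too large for the packed target field -> field overflow
        lv_qty = lv_qty + 9999999999.
      CATCH cx_sy_arithmetic_overflow cx_sy_conversion_overflow.
        WRITE: / 'Overflow: the value does not fit into the target field.'.
    ENDTRY.
    TRY.
        " Character content assigned to a numeric field -> conversion error
        lv_input = 'ABC'.
        lv_qty = lv_input.
      CATCH cx_sy_conversion_no_number.
        WRITE: / 'Conversion error: the field does not contain a number.'.
    ENDTRY.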

  • Web Application Designer 7 - Long Runtime

    Hi,
    I'm working in a BI 7 environment and, to fulfil the users' requirements, we have developed a web template containing almost 30 queries.
    We are facing very long runtimes for that report on the web. After analysing with BI Statistics, we found that the DB and OLAP parts are not taking very long to run; it is the front end (the web template) that is causing the delay. Another observation is that most of the time is consumed while the web template is being loaded/initialized; once it is loaded, flipping between the different tabs (reports) does not take much time.
    My questions are:
    What can I do to reduce the web template initialization/loading time?
    Is there any way I can get the time taken by the front end in the statistics? (Currently we can get the DB and OLAP times through the BI statistics cube and treat the remaining time as front-end time, because the standard BI statistics cube cannot capture front-end time when the report runs in a browser.)
    What are the technical processes involved when information moves back from the DB to the browser?
    Your earliest help would be highly appreciated. Please let me know if you require any further information.
    Regards,
    Shabbar
    0044 (0) 7856 048 843

    Hi,
    It asks you to log in to the Portal because the output of the web templates can be viewed only through the Enterprise Portal. This is perfectly normal. The BI-EP configuration should be set up properly, and you need a login ID and password for the Portal.
    To use WAD and design the front end, go through the link below. It should help you.
    http://help.sap.com/saphelp_nw70/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm

  • Long runtimes due to P to BP integration

    Hi all,
    The folks on my project are wondering if any of the experts out there have faced the following issue before. We have raised an OSS message for it but have yet to receive any concrete solution from SAP. As such, we are exploring other avenues for resolving this matter.
    Currently, we are facing an issue where a standard infotype BAdI is causing extremely long runtimes for programs that update certain affected infotypes. The BAdI is HR_INTEGRATION_TO_BP, and SAP recommends that it be activated when E-Recruitment is implemented. A fairly detailed technical description follows.
    1. Within the IN_UPDATE method of the BAdI, the function module HCM_P_BP_INTEGRATION is called to create linkages between a person object and a business partner object.
    2. Function module RH_ALEOX_BUPA_WRITE_CP will be called within HCM_P_BP_INTEGRATION to perform the database updates.
    3. Inside RH_ALEOX_BUPA_WRITE_CP, there are several subroutines of interest, such as CP_BP_UPDATE_SMTP_BPS and CP_BP_UPDATE_FAX_BPS. These subroutines are structured similarly and will call function module BUPA_CENTRAL_EXPL_SAVE_HR to create database entries.
    4. In BUPA_CENTRAL_EXPL_SAVE_HR, the subroutine ADDRESS_DATA_SAVE_ES_NOUPDTASK calls the function module BUP_MEMORY_PREPARE_FOR_UPD_ADR, which is where the problem begins.
    5. BUP_MEMORY_PREPARE_FOR_UPD_ADR contains two subroutines, PREPARE_BUT020 and PREPARE_BUT021. Both contain similar code in which a LOOP is performed on a global internal table (GT_BUT020_MEM_SORT/GT_BUT021_MEM_SORT) and entries are appended to another global internal table (GT_BUT020_MEM/GT_BUT021_MEM). These tables (GT_BUT020_MEM/GT_BUT021_MEM) are used later on for updates to the database tables BUT020 and BUT021_FS. However, we noticed that these two tables are not cleared after the database update, which results in an ever-increasing number of entries being written to the database, even though many of them may already have been updated (a simplified sketch of this pattern follows below).
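    For readers who want to visualise it, the following is a minimal ABAP sketch of the pattern described in point 5. The names are borrowed from the description above, but the code is simplified and hypothetical (it is not the actual SAP implementation), and only the BUT020 side is shown:
    REPORT zbp_buffer_sketch.
    DATA: gt_but020_mem_sort TYPE SORTED TABLE OF but020 WITH NON-UNIQUE KEY partner,
          gt_but020_mem      TYPE STANDARD TABLE OF but020,
          ls_but020          TYPE but020.
    START-OF-SELECTION.
      DO 2 TIMES.  " two infotype updates are enough to see the buffer grow
        PERFORM prepare_but020.
        PERFORM update_but020_db.
      ENDDO.
    FORM prepare_but020.
      " Rows are appended to the global buffer on every call ...
      LOOP AT gt_but020_mem_sort INTO ls_but020.
        APPEND ls_but020 TO gt_but020_mem.
      ENDLOOP.
    ENDFORM.
    FORM update_but020_db.
      " ... and written to the database, but the buffer is never emptied,
      " so each later call updates all previously processed rows again.
      MODIFY but020 FROM TABLE gt_but020_mem.
      " The missing cleanup that would stop the ever-growing updates:
      " CLEAR gt_but020_mem.
    ENDFORM.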
    If any of you are interested in seeing whether this issue affects you, simply run a program that updates infotype 0000, 0001, 0002, 0006 subtype 1, 0009 subtype 0 or 0105 subtype (0001, 0005, 0010 or 0020) to replicate this scenario, provided E-Recruitment is implemented in your system. Not many infotype updates are required to see the issue; two are enough to tell whether the tables in point 5 are being cleared. (We have observed that this issue occurs during the creation of a new personnel number, and hence a new business partner. For existing personnel numbers, the same code is executed but the internal tables in point 5 are not populated.)
    System details: SAP ECC 6.0 (Support package: SAPKA70021) with E-Recruitment (Support package: SAPK-60017INERECRUIT) implemented.
    Thanks for reading.

    Hi Annabelle,
    We have a similar setup, but are on SAPK-60406INERECRUIT.  Although the issue does not always occur, we do have a case where the error ADDRESS_DATA_SAVE_ES is thrown.
    Did you ever resolve your issue?  Hoping that solution can help guide me.
    Thanks
    Shane

  • BPS0 - very long runtime

    Hi gurus,
    During manual planning in BPS0, long runtimes occur.
    FOX formulas are used.
    A lot of data is selected, but that is what the business needs.
    Memory is OK as far as I can see in ST02 (10-15% of resources are usually used) and there are no dumps, but the runtime is still very long.
    I have examined the hardware, system, and database with different methods and found nothing unusual.
    Could you please give me more advice on how I can do extra checks on the system (preferably from a Basis point of view)?
    BW 3.1 - patch 22
    SEM-BW 3.5 - patch 18
    Thanks in advance
    Elena

    Hello Elena,
    You need to take a structured approach. "Examining" things is fine but usually does not lead to results quickly.
    Performance tuning works best as follows:
    1) Check statistics or run a trace
    2) Find the slowest part
    3) Make this part run faster (or, better, eliminate it)
    4) Go back to step 1 until it is fast enough
    For the first round, use the BPS statistics. They will tell you if BW data selection or BPS functions are the slowest part.
    If BW is the problem, use aggregates and do all the things to speed up BW (see course BW360).
    If BPS is the problem, check the webinar I did earlier this year: https://www.sdn.sap.com/irj/sdn/webinar?rid=/webcontent/uuid/2ad07de4-0601-0010-a58c-96b6685298f9 [original link is broken]
    Also the BPS performance guide is a must read: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7c85d590-0201-0010-20b5-f9d0aa10c53f
    Next would be an SQL trace and an ABAP runtime trace (ST05, SE30). Check the traces for any custom coding or custom tables at the top of the runtime measurements.
    Finally, you can often see from the program names in the ABAP runtime trace which components in BPS are the slowest. See if you can match this to the configuration that is used in BPS (variables, characteristic relationships, data slices, etc.).
    Regards
    Marc
    SAP NetWeaver RIG

  • Error "This object is corrupt or is no longer available" while trying to open embedded files in Word 2007 or 2013

    Hi All
    Word 2007 and 2013 always show this error:
    "This object is corrupt or is no longer available" when clicking embedded files (such as PDF, DOC and XLS). I tried reinstalling Office 2007 and Office 2013, but I still have the same problem.
    I searched the internet and found suggestions such as turning off the add-ons, but they did not work for me, and I do not have Norton AntiVirus installed.
    Does anyone know what the solution to this issue is? I do not have any ideas right now.
    Thanks in advance!!!!!

    I notice there is a cross post on the Answers forum:
    http://answers.microsoft.com/en-us/office/forum/office_2013_release-word/error-the-object-is-corrupt-or-is-no-longer/fde2160e-fc19-4f90-81db-4f569fac7b95
    Is Dinel's suggestion helpful?
    Tony Chen
    TechNet Community Support

  • How to bring in the Open Sales Orders while performing Sales Order Conv

    Hi,
    Could you give some suggestions on the below:
    How do we bring in the open sales orders while performing a sales order conversion in R12?
    Thanks
    Pravin

    See http://ramugvs.wordpress.com/2011/08/27/how-to-stage-and-import-an-order-using-order-import-api-apps-wms-ebiz-11i-r12/
    Sandeep Gandhi

  • Connection error while performing basic settings SOLMAN 4.0 SP12

    Hi All,
    We are currently facing a problem while performing the Solution Manager 4.0 (SP12) basic configuration through the IMG Wizard. Within the "Initial Configuration Part I" section, an error occurs when connecting to SAP (the full error message is "Error creating communication to SAP Service and Support"). After checking in transaction SM59, we noticed that the RFC connections SAP-OSS and SAP-OSS-LIST-O01 (which are supposed to be created in previous steps by copying the SAPOSS connection) were not created. We created them both manually (and tested them) and everything was OK. We started the wizard again, and the same error occurred. However, we noticed that the SAP-OSS RFC connection had been deleted. The system log shows the following:
    00789 - Dialog work process  No. 000 - XXXXXX - EQUIPO - 1 - SM_CONFIG_WZ - M_CONFIG_WZ_START - T - Transaction Problem - STSK
    The same scenario occurs every time the above wizard is used.
    It is worth mentioning that the SAPOSS connection works fine; also, just as a test, I was able to download a note correction through the SNOTE transaction.
    Where should I check to see what is causing this connection error? Am I missing a previous configuration step?
    Also, it would be very helpful for us to get a step-by-step reference guide for Solution Manager configuration. We want to be sure we are not missing or mixing up anything.
    Thanks so much for your help.
    Regards

    >
    diego77 wrote:
    > We are currently facing a problem while performing Solution Manager 4.0 (SP12) basic configurations through the IMG Wizard.
    Why are you starting the configuration for SP12, when the most current version is SolMan 7.0 EhP1 SPS19?
    What scenarios / functions are you planning to use?
    Best regards,
    Ruediger

  • "An unspecified error occurred while performing a conform action on the following file:"

    Has anyone else gotten this error in any of their projects?  I've searched around Adobe support pages and elsewhere on the internet, but have found nothing.
    It comes up in the events window and I can't even read what file it's talking about because it's on the next line, but the error message only shows one line and I haven't figured out a way to expand it.
    I'm not even sure whether it's causing any problems (the project seems to work OK).  This message shows up twice, as a red circle with an 'X' in it, but I also get the following message numerous times, referring to a large number of my .avi files:
    "File importer detected an inconsistency in the file structure of filename.avi.  Reading and writing this file's metadata (XMP) has been disabled."
    That message shows up as a yellow triangle with an '!' in it.  Since I haven't been able to turn anything up about these messages, I figured I'd ask here and see if anyone knew what was going on, how to fix the problem or if I should even worry about them.
    Thanks.

    I am having this problem now when I use Dynamic Link from Premiere to After Effects. I copy the footage from Premiere and paste it into After Effects, add a little text and maybe some light rays, save it, and go back to Premiere. Then I get this message: "An unspecified error occurred while performing a conform action on the following file C:\Caches\Media Cache Files\text comp 1 48000.cfa." Does anyone have any idea what this is? I have had a few problems using Dynamic Link from Premiere to After Effects, such as Premiere freezing, not playing the After Effects comps back in real time, crashes, etc. It is as if the two programs don't like the Dynamic Link.
    I'm running Master Collection CS5 & CS5.5 on a PC with 24 GB RAM, an Intel Core i7 950 @ 3.07 GHz, and an NVIDIA GeForce GTX470 graphics card.

  • PRE10 "Unspecified error occurred while performing a conform action on the following file:..."

    I really need some help!
    I am making a short video in Premiere Elements 10, and I seem to get this conforming error every time I open the project; I cannot save it either.
    "Unspecified error occured while performing a conform action on the following file:
    File0002.AVI 48000.CFA"
    This happens every time I open Premiere Elements 10, for lots (or all) of my video clips, and I can't find any help in the Adobe forums or anywhere else online.
    Also, I want to save this video as an AVI file, but when I try to export it, it either says it has finished quite quickly without adding anything to click on, or, when it does add something, the result does not play. Sometimes it says it has finished and then comes up with "Error compiling movie. Unknown error". One of these happens every time I try to save it.
    Lots (or all) of my AVI files come up with this same conforming error when I load the project, and I have no clue what to do.
    It also crashes nearly every time after these compiling errors!
    My computer is running the Windows 7 Ultimate OS
    Please help!
    Please...!

    Operating System: Windows 7 Ultimate (build 7100)
    System Model: Gigabyte Technology Co., Ltd. M61PME-S2P (Enclosure Type: Desktop)
    Processor: 2.80 gigahertz AMD Athlon X2 240; 256 kilobyte primary memory cache; 1024 kilobyte secondary memory cache; 64-bit ready; multi-core (2 total); not hyper-threaded
    Main Circuit Board: Gigabyte Technology Co., Ltd. M61PME-S2P; Bus Clock: 200 megahertz; BIOS: Award Software International, Inc. F2 12/30/2008
    Drives: 320.07 Gigabytes usable hard drive capacity; 125.82 Gigabytes hard drive free space; TSSTcorp CDDVDW SE-S084C USB Device [optical drive]; SAMSUNG HD321HJ SCSI Disk Device (320.07 GB) -- drive 0, s/n S13RJ90SB04688, SMART Status: Healthy
    Memory Modules: 1984 Megabytes usable installed memory; Slot 'A0' has 2048 MB; Slot 'A1' is empty
    Local Drive Volumes: c: (NTFS on drive 0) 319.97 GB, 125.76 GB free (the operating system is installed on c:); d: (NTFS on drive 0) 105 MB, 65 MB free
    The media files are AVI and are in my Videos folder (Videos - Windows 7).

  • JCAActivationAgent::load - Error while performing endpoint activation:java.

    Hi,
    I am getting the following error while deploying a BPEL process. This is very surprising because the process ran fine until yesterday. Also, for some time I was getting an odd error: JDeveloper was not able to "read" a WSDL file on the local machine. I restarted the server and the machine many times, but it did not help. I do not have any proxies set, and the file resides on my local hard drive.
    <2006-04-11 11:30:20,090> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> JCAActivationAgent::load - Error while performing endpoint activation:java.lang.NullPointerException
    <2006-04-11 11:30:20,090> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    java.lang.NullPointerException
    at oracle.tip.adapter.fw.agent.jca.JCAActivationAgent.load(JCAActivationAgent.java:208)
    at com.collaxa.cube.engine.core.BaseCubeProcess.loadActivationAgents(BaseCubeProcess.java:931)
    at com.collaxa.cube.engine.core.BaseCubeProcess.load(BaseCubeProcess.java:302)
    at com.collaxa.cube.engine.deployment.CubeProcessFactory.create(CubeProcessFactory.java:66)
    at com.collaxa.cube.engine.deployment.CubeProcessLoader.create(CubeProcessLoader.java:391)
    at com.collaxa.cube.engine.deployment.CubeProcessLoader.load(CubeProcessLoader.java:302)
    at com.collaxa.cube.engine.deployment.CubeProcessHolder.loadAndBind(CubeProcessHolder.java:881)
    at com.collaxa.cube.engine.deployment.CubeProcessHolder.getProcess(CubeProcessHolder.java:789)
    at com.collaxa.cube.engine.deployment.CubeProcessHolder.loadAll(CubeProcessHolder.java:361)
    at com.collaxa.cube.engine.CubeEngine.loadAllProcesses(CubeEngine.java:960)
    at com.collaxa.cube.admin.ServerManager.loadProcesses(ServerManager.java:284)
    at com.collaxa.cube.admin.ServerManager.loadProcesses(ServerManager.java:250)
    at com.collaxa.cube.ejb.impl.ServerBean.loadProcesses(ServerBean.java:219)
    at IServerBean_StatelessSessionBeanWrapper14.loadProcesses(IServerBean_StatelessSessionBeanWrapper14.java:2399)
    at com.collaxa.cube.admin.agents.ProcessLoaderAgent$ProcessJob.execute(ProcessLoaderAgent.java:395)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:141)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:281)
    <2006-04-11 11:30:20,152> <ERROR> <default.collaxa.cube.engine.deployment> <CubeProcessLoader::create>
    <2006-04-11 11:30:20,152> <ERROR> <default.collaxa.cube.engine.deployment> Process "CallHomeBPEL" (revision "1.0") load FAILED!!
    <2006-04-11 11:30:20,230> <ERROR> <default.collaxa.cube.engine.deployment> <CubeProcessHolder::loadAll> Error while loading process 'CallHomeBPEL', rev '1.0': Error while loading process.
    The process domain encountered the following errors while loading the process "CallHomeBPEL" (revision "1.0"): null.
    If you have installed a patch to the server, please check that the bpelcClasspath domain property includes the patch classes.
    Please help.

    Finally, I was able to redeploy the process. After comparing the new files with the old files, I observed an <activationAgents> entry in bpel.xml that was not present previously.
    <activationAgents>
      <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="CallHomeFileUtility">
        <property name="portType">Read_ptt</property>
      </activationAgent>
    </activationAgents>
    When I removed this from bpel.xml, the process deployed successfully. I am not sure when the <activationAgents> element was added to bpel.xml.
    Thanks for your inputs.

  • Error while performing Risk Analysis at user level for a cross system user

    Dear All,
    I am getting the error below while performing risk analysis at the user level for a cross-system (Oracle) user.
    The error is as follows:
    "ResourceException in method ConnectionFactoryImpl.getConnection(): com.sap.engine.services.connector.exceptions.BaseResourceException: Cannot get connection for 120 seconds. Possible reasons: 1) Connections are cached within SystemThread(can be any server service or any code invoked within SystemThread in the SAP J2EE Engine), 2) The pool size of adapter "SAPJ2EDB" is not enough according to the current load of the system or 3) The specified time to wait for connection is not enough according to the pool size and current load of the system. In case 1) the solution is to check for cached connections using the Connector Service list-conns command, in case 2) to increase the size of the pool and in case 3) to increase the time to wait for connection property. In case of application thread, there is an automatic mechanism which detects unclosed connections and unfinished transactions.RC:1
    Can anyone please help.
    Regards,
    Gurugobinda

    Hi..
    Check SAP Note 1121978 - Recommended settings to improve performance of risk analysis.
    Check for the following:
    Config Tool > server > managers > ThreadManager:
    ChangeThreadCountStep = 50
    InitialThreadCount = 100
    MaxThreadCount = 200
    MinThreadCount = 50
    Regards
    Gangadhar

  • Ipod touch can no longer tilt while playing Temple Run or Back Breaker Football

    The iPod touch can no longer tilt while playing Temple Run or Back Breaker Football, or in any game where you need to control the tilt or turn.

    Managed to finally factory reset the unit (see my other post).
    Should be OK now.

  • Windows shuts down while performing iPod sync!

    Help! Just in the last couple of days my computer has started shutting down on its own but only while performing a sync between iTunes and my 60GB iPod. I upgraded my software to iTunes 7.5 and even installed the update for the iPod as well. I am still getting a shutdown during the first few minutes of my sync no matter what. Help! (FYI - my music is housed on my external hard drive and my iTunes library file is housed within My Music on my C drive if that matters.) Thank you soooo much! Any help would be appreciated!

    Yours isn't a dual-processor (or simulated dual-processor, i.e. hyper-threaded) machine, is it? I and many others have found this problem to occur on such machines (when is Apple going to fix this problem?). Some have found that forcing iTunes to run on only one processor is a successful workaround. There are at least a couple of freeware utilities out there (I don't recall their names offhand) that will force a PC app to run on only one processor on multi-processor machines. Alternatively, after starting iTunes, press Ctrl-Alt-Delete and select Task Manager; in the list of running applications, right-click on iTunes and choose Go To Process, then right-click on the selected process, select Set Affinity, and uncheck one of the boxes (or all but one, if you happen to have more than two processors). This didn't work for me: I had to physically remove one of my two processors, and I haven't had the problem since, so I'm confident that was (and is) the problem in my case. (But I'm still annoyed, because I very deliberately bought a dual-processor machine; it just so happens that right now I'm doing more CD importing than dual-core work, so for now this is the best solution for me, but I eagerly await the day I hear that Apple has FIXED THIS PROBLEM!)
