SPAM_UI_QUEUE causes a dump

Hi
I have installed the SAP WAS successfully but am having problems importing the transports, the SPAM update and patches. Basically, the RDDIMPDP job does not get kicked off automatically; I have to start it manually. I have updated the kernel, tp and sapevt, and scheduled the job RDDIMPDP with user DDIC via RDDNEWPP.
After forcing the SPAM update in via manual execution of RDDIMPDP, the program now dumps when I try to define a queue on the initial screen.
Any ideas, anyone?
Cheers
Ian

Hi Craig
The dump is as follows. I have checked the FM and it contains no code! I presume this is not normal. I can define the queue from within the Package Directory, but I still have to execute RDDIMPDP manually.
Runtime Errors    CALL_FUNCTION_NOT_ACTIVE
Exception         CX_SY_DYN_CALL_ILLEGAL_FUNC
Occurred on       09.02.2005 at 22:10:24
This function module is not active or contains no code.
What happened?
The function module "SPAM_UI_QUEUE" is called, but cannot be found in its function group.
Error in ABAP application program.
The current ABAP program "SAPMSPAM" had to be terminated because one of the statements could not be executed.
This is probably due to an error in the ABAP program.
What can you do?
Print out the error message (using the "Print" function) and make a note of the actions and input that caused the error.
To resolve the problem, contact your SAP system administrator.
You can use transaction ST22 (ABAP Dump Analysis) to view and administer termination messages, especially those beyond their normal deletion date. This is especially useful if you want to keep a particular message.
Error analysis
An exception occurred. This exception is dealt with in more detail below. The exception, which is assigned to the class 'CX_SY_DYN_CALL_ILLEGAL_FUNC', was neither caught nor passed along using a RAISING clause, in the procedure "SET_DISPLAY_QUEUE" "(FORM)".
Since the caller of the procedure could not have expected this exception to occur, the running program was terminated.
The reason for the exception is:
The program "SAPMSPAM" contains the CALL FUNCTION statement. The name of the function module to be called is "SPAM_UI_QUEUE", but "SPAM_UI_QUEUE" cannot be found in its function group.
Possible reasons:
a) The function module "SPAM_UI_QUEUE" has not been activated. Therefore, it cannot be found at runtime.
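For anyone hitting the same thing: the exception class named in the dump is catchable, so a quick way to confirm whether the function module really is missing or inactive (rather than the SPAM screen itself being broken) is a throwaway report that calls it dynamically. This is only a minimal check sketch, assuming nothing beyond what the dump shows; the report name is made up:

    REPORT z_check_spam_ui_queue.

    " Call the FM dynamically and catch the exception instead of dumping,
    " to confirm whether SPAM_UI_QUEUE exists and is active after the update.
    DATA lv_funcname TYPE funcname VALUE 'SPAM_UI_QUEUE'.

    TRY.
        CALL FUNCTION lv_funcname.
        WRITE: / lv_funcname, 'is active.'.
      CATCH cx_sy_dyn_call_illegal_func.
        WRITE: / lv_funcname, 'is missing or inactive - re-import or activate the SPAM update.'.
      CATCH cx_sy_dyn_call_error.
        " The FM exists but the parameter-less test call does not match its
        " interface - that is good enough for this existence check.
        WRITE: / lv_funcname, 'exists and is active.'.
    ENDTRY.

If the first CATCH branch fires, the SPAM update did not generate or activate the function group, which would match the empty source seen in SE37.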

Similar Messages

  • HP OV Operations integration is causing short dumps RFC_CONVERSION_FIELD

    The application integration that monitors CCMS 4.x using the r3monal process from HP OpenView Operations is causing short dumps on the R/3 database and application server.
    DUMP INFO:
    Conversion error "ab_rfcImplode" from character set 4102 to character set 1100.
    When executing a remote function call a conversion error occurred. This
    occurred when receiving or sending the data. The conversion error can
    only appear, when the data is transferred from a Unicode system to a
    non-Unicode system.
    Internal SAP notes
    The termination occurred in the function "ab_rfcImplode" of the SAP
    Basis System, specifically in line 7788 of the module
    "//bas/640_REL/src/krn/rfc/abrfcimp.c#25".
    The internal operation just processed is "FUNC".
    The internal session was started at 20061229000713.
      INCLUDE INCL_INSTALLATION_ERROR

    Dear Pallavi,
    Very useful post!
    I am looking for similar accelerators for
    Software Inventory Accelerator
    Hardware Inventory Accelerator
    Interfaces Inventory
    Customization Assessment Accelerator
    Sizing Tool
    Which help us to come up with the relevant Bill of Materials for every area mentioned above, and the ones which I don't know about...
    Request help on such accelerators... Any clues?
    Any reply, help is highly appreciated.
    Regards
    Manish Madhav

  • How to find which Badi Implementation is causing Short Dump

    Hi Experts,
    We are getting a short dump in transaction IW32 when the users hit the SAVE button.
    The Dump Details:
    Runtime Errors SYNTAX_ERROR
    Short text
    Syntax error in program "WORKORDER_UPDATE==============CP ".
    What happened?
    Error in the ABAP Application Program
    The current ABAP program "CL_EX_WORKORDER_UPDATE========CP" had
    terminated because it has
    come across a statement that unfortunately cannot be executed.
    The following syntax error occurred in program
    "WORKORDER_UPDATE==============CP " in include
    "WORKORDER_UPDATE==============CM000 " in
    line 12:
    "The specified type cannot be converted into the target variable
    The current ABAP program "CL_EX_WORKORDER_UPDATE========CP" had to be
    terminated because it has
    come across a statement that unfortunately cannot be executed.
    Error analysis
    The following syntax error was found in the program
    WORKORDER_UPDATE==============CP :
    "The specified type cannot be converted into the target variables."
    Information on where terminated
    Termination occurred in the ABAP program "CL_EX_WORKORDER_UPDATE========CP" -
    in "IF_EX_WORKORDER_UPDATE~AT_SAVE".
    The main program was "SAPLCOIH ".
    After going through the dump analysis, I found that around 25 BADI implementations have been created on enhancement spot "WORKORDER_UPDATE".
    How can I find out which BADI implementation is causing this dump?
    Please let me know.
    Thanks.
    Edited by: ravi kumar on Feb 22, 2011 1:12 PM

    Yes, I tried the Active Calls stack also. The short dump is occurring exactly at the line:
    method IF_EX_WORKORDER_UPDATE~AT_SAVE.
    in Class:-
    CL_EX_WORKORDER_UPDATE
    I was informed by the user that this is working perfectly fine in the DEV system. The short dump is occurring in the SBX system.
    When I executed the class "CL_EX_WORKORDER_UPDATE" and method "IF_EX_WORKORDER_UPDATE~AT_SAVE" via SE24 in SBX, I got the same dump. However, it worked fine in the DEV system.
    I am wondering if the issue is with the Standard Program itself?
    Any ideas?
    Thanks a lot again.
    Another thing: it is short dumping even before calling any of the BADI implementations.
    I wanted to give some more info on this: I put a breakpoint in function CO_ZV_ORDER_POST, after the CALL FUNCTION 'CO_BADI_GET_BUSINESS_ADD_IN'.
    The program dumps right after that, when it calls CALL METHOD lp_badi_if->at_save.
    Thanks.
    Edited by: ravi kumar on Feb 22, 2011 1:51 PM
    Edited by: ravi kumar on Feb 22, 2011 2:11 PM
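    Since the short dump already names the generated include ("WORKORDER_UPDATE==============CM000"), you can decode which class that include belongs to straight from the class-pool naming convention instead of opening all 25 implementations one by one. A hedged throwaway sketch (it assumes nothing beyond that naming convention; the include literal and report name are illustrative, the include is copied from the dump):

        REPORT z_find_badi_class.

        " Include name copied verbatim from the ST22 short dump.
        DATA lv_include TYPE programm VALUE 'WORKORDER_UPDATE==============CM000'.
        DATA lv_class   TYPE seoclsname.

        " Class-pool method includes are named <class name padded to 30 with '='>
        " followed by 'CM' and a number, so stripping the padding reveals the
        " class whose method no longer compiles.
        lv_class = lv_include(30).
        TRANSLATE lv_class USING '= '.
        CONDENSE lv_class NO-GAPS.

        WRITE: / 'The syntax error sits in class', lv_class.

    Here the owning class turns out to be WORKORDER_UPDATE itself; checking that class in SE24/SE19 in SBX and comparing its versions against DEV is usually the fastest next step, since a type that converts in one system but not the other typically means a referenced DDIC type or interface was transported out of step.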

  • Initial customizing loads (DNL_*) causing short dumps in R/3

    Hi,
    I'm trying to run the initial loads for the DNL_* customizing objects. When I start a load in R3AS and check the status in R3AM1, it shows "Running". In the CRM outbound queue (SMQ1), there is an entry for the load with status SYSFAIL. The detailed status message is "The current application triggered a termination with a short dump".
    The short dump in R/3 shows an error "No external system (such as CRM) connected" which occurs in function module CRM_FIRST_CALL_OPERATIONS.  Looking at the code, there seems to be a problem with the CRMRFCPAR table.  I had only one record in this table for object CUSTOMER_MAIN, so I tried adding a record with Object Name "*" and Load Type "I", but this had no effect. 
    What is the proper configuration of table CRMRFCPAR for customizing loads?  Is it possible that this error is related to some other config or a problem with the RFC connection?  These loads work fine in our quality system which is configured the same as production (as far as I can tell).
    Thanks.
    Martin

    hi,
    Maintaining table CRMRFCPAR (SAP R/3):
    The parameters in this table indicate the RFC destinations which receive data. The required parameters include, for example: consumer, client, object name, and download type. By making the appropriate specifications for the data exchange, you can send data to a certain consumer only in an initial download and not in a delta download.
    Consumer - the user of the OLTP plug-in that acts as the data receiver
    Object name - object name
    Destination - specifies the destination of the CRM server
    Load Type - restricts CRMRFCPAR entries to the initial (I) or delta (D) download
    Out Queue - name of the RFC outbound queue
    In Queue - name of the RFC inbound queue
    BAPI name
    INFO - information/comments
    InQueue flag - controls whether qRFC inbound queues are used on the CRM server
    Send XML - should data be sent in XML format?
    Stop data - causes the OLTP system to place data into the outbound queue
    Regards
    sri
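    To double-check the suspicion that the table entry is the problem, a throwaway report on the R/3 side can at least confirm whether CRMRFCPAR is populated at all. Only the table name is taken from the thread; field names vary by release, so this minimal sketch deliberately avoids them:

        REPORT z_check_crmrfcpar.

        " Count the registered data-exchange entries; an empty table means no
        " consumer/destination is registered and every outbound load will fail.
        DATA lv_entries TYPE i.

        SELECT COUNT( * ) INTO lv_entries FROM crmrfcpar.

        IF lv_entries = 0.
          WRITE: / 'CRMRFCPAR is empty - no CRM destination is registered for the downloads.'.
        ELSE.
          WRITE: / lv_entries, 'entries found in CRMRFCPAR - compare them with the working quality system.'.
        ENDIF.

    In practice the quicker route is simply SE16 on CRMRFCPAR in both production and quality and a side-by-side compare, since the thread says quality works with supposedly identical configuration.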

  • Value type exception causes core dump

    Hi,
    Here is the situation: a session EJB (SynchronousAdapterBean) running on WebLogic 7.0 is being called by a C++ client using Tuxedo 8.1 on Solaris 8. The SynchronousAdapterBean has a find method which throws a user-defined exception (SynchronousAdapterException), resulting in a core dump.
    I have read through much documentation on the newsgroups, edocs.bea.com, and
    the examples coming with WebLogic.
    Here some of the things I did:
    1.
    When using the 'idl' command to generate the C++ code, I did not forget to use
    a '-i' to generate the implementation files: SynchronousAdapterException_i.cpp
    and SynchronousAdapterException_i.h.
    In my "play with it to make it work" phase, I also did this for: SynchronousAdapterEx_i.cpp
    and SynchronousAdapterEx_i.h.
    2.
    In my C++ client, I did not forget to register the factory as such:
    orb->register_value_factory
    ((char* const)com::trs::cv::comm::cnja::synchadapter::_tc_SynchronousAdapterException->id(),
    (CORBA::ValueFactory)
    new com_trs_cv_comm_cnja_synchadapter_SynchronousAdapterException_factory());
    This seemed to work because I actually put debug cout statements in the 'com_trs_cv_comm_cnja_synchadapter_SynchronousAdapterException_factory'
    in the file SynchronousAdapterEx_i.cpp and it confirmed that it was constructed
    and the reference count was incremented.
    3.
    I know for sure that it is a 'SynchronousAdapterException' being thrown on the
    server side because all the find method does at this point is:
    throw new SynchronousAdapterException("test1");
    Questions:
    1. I read at a few places that one only has to register the "most derived" class
    being thrown (in my case 'SynchronousAdapterException'). What does one do with
    all the other exceptions generated:
    EJBException_c.h
    EJBException_c.cpp
    EJBEx_c.h
    EJBEx_c.cpp
    CreateEx_c.h
    CreateEx_c.cpp
    CreateException_c.h
    CreateException_c.cpp
    RemoveEx_c.h
    RemoveEx_c.cpp
    RemoveException_c.h
    RemoveException_c.cpp
    RuntimeEx_c.h
    RuntimeEx_c.cpp
    RuntimeException_c.cpp
    RuntimeException_c.h
    SynchronousAdapterEx_c.cpp
    SynchronousAdapterEx_c.h
    SynchronousAdapterEx_i.cpp
    SynchronousAdapterEx_i.h
    Exc.cpp
    Exc.h
    Exceptionc.cpp
    Exceptionc.h
    2. What step could I have missed?
    Alex

    Hi Andy,
    1.
    Well, no new member classes in SynchronousAdapterException; it just looks like this:
    public class SynchronousAdapterException extends Exception {
        public SynchronousAdapterException() {
            super();
        }
        // Not available in 1.3.1, but is available in 1.4:
        // public SynchronousAdapterException(String message, Throwable ex) {
        //     super(message, ex);
        // }
        public SynchronousAdapterException(String message) {
            super(message);
        }
        // Not available in 1.3.1, but is available in 1.4:
        // public SynchronousAdapterException(Throwable ex) {
        //     super(ex);
        // }
    }
    2.
    We are using:
    java version "1.3.1_03"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1_03-b03)
    Java HotSpot(TM) Client VM (build 1.3.1_03-b03, mixed mode)
    3.
    dbx produces the following, anything ring a bell?
    Reading libdl.so.1
    Reading libaio.so.1
    Reading libmp.so.2
    Reading libc_psr.so.1
    Reading liborbiiop.so.71
    Reading liborbtcp.so.71
    detected a multithreaded program
    t@2 (l@2) terminated by signal ABRT (Abort)
    0xfec9bbd4: lwpkill+0x0008: bgeu,a lwpkill+0x1c
    Current function is operator>>
    218 mb.UnMarshalValue(::com::trs::cv::comm::cnja::synchadapter::_tc_SynchronousAdapterException,
    obj);
    (/opt/SUNWspro/bin/../WS6U2/bin/sparcv9/dbx) where
    current thread: t@2
    [1] lwpkill(0x0, 0x2, 0x0, 0x6, 0x197f0, 0x4c6e0), at 0xfec9bbd4
    [2] raise(0x6, 0x0, 0xffffffff, 0xc00cc, 0x8, 0xfee355c0), at 0x4c6e8
    [3] abort(0xd2040, 0xbeaf8, 0xff3b2aac, 0xfee34684, 0x15398, 0xfee34874), at
    0x4c6a8
    [4] __Cimpl::ex_terminate(0x0, 0x0, 0xfee4a500, 0xff1edf90, 0xff09ecd0, 0x1),
    at 0xfee34684
    ---- hidden frames, use 'where -h' to see them all ----
    [6] OBB::MarshalBuf::UnMarshalValue(0xfeaf9a28, 0xc9dd8, 0xfeaf99b0, 0xff1edf90,
    0xff1edfd0, 0xfed2ebb8), at 0xff0a3980
    =>[7] operator>>(mb = CLASS, obj = CLASS), line 218 in "SynchronousAdapterExc.cpp"
    [8] _tcr_com_trs_cv_comm_cnja_synchadapter_SynchronousAdapterEx_unmarsh(mh =
    0xce520, obj = 0xfeaf9b7c, flags = 130U), line 259 in "SynchronousAdapterEx_c.cpp"
    [9] TCInterpreter::DecodeByTypeCode(0x16, 0xff1d674c, 0xfeaf9b7c, 0xce520, 0x82,
    0xfeaf9b07), at 0xff0f3aec
    [10] TCInterpreter::DecodeById(0xcbaa0, 0xcf230, 0xfeaf9b7c, 0xce520, 0x82,
    0xfeaf9f98), at 0xff0f3c94
    [11] ReplyMessage::DecodeMessageBody(0x18000, 0xcd820, 0xfeaf9f98, 0xbf358,
    0xfebcffb0, 0xfed2ebb8), at 0xff0e1938
    [12] GIOPMessage::DecodeBody(0xce240, 0xfeaf9f98, 0xff0dd494, 0xff0dd494, 0xffffffd0,
    0xfeb93e24), at 0xff0c23a0
    [13] IiopProtocol::Reply(0xd0958, 0xce240, 0xce148, 0xfeaf9f98, 0x100, 0xfebd0010),
    at 0xfeb94064
    [14] IiopMsgReceiver::MsgReceived(0xca5a0, 0xce240, 0xce148, 0xfeaf9f98, 0x0,
    0xfeaf9f98), at 0xfeb9bc74
    [15] GiopMessageProtocol::MsgReceived(0xcb358, 0xfeaf9dd8, 0xd1160, 0xfeaf9f98,
    0x0, 0xfeaf9f98), at 0xfebac6fc
    [16] MessageManager::RecvMsgFirstTime(0xcf8c0, 0xce240, 0xd1160, 0xcb358, 0xd11d0,
    0xfeaf9f98), at 0xfebab1b0
    [17] MessageManager::AvailableForRecvMsg(0xcf8c0, 0xd1160, 0xcb358, 0xfeaf9f98,
    0x3e8, 0xd1170), at 0xfebaa030
    [18] ChannelManager::DoReadWork(0xff1d674c, 0xfebd0038, 0xd1160, 0x0, 0x0, 0xcfd58),
    at 0xfeba54dc
    [19] ChannelManager::DoIt(0xcb3b0, 0xfeaf9f98, 0x0, 0xfebcffd0, 0xcfda0, 0xfebcffd0),
    at 0xfeba0750
    [20] WorkerManager::DoWorkerThread(0x0, 0xca648, 0xfeaf9f98, 0xcfda0, 0xfeba9924,
    0x0), at 0xfeba7894
    [21] WorkerThread(0xcfd98, 0xfeafa000, 0x0, 0x0, 0x0, 0x0), at 0xfeba9974
    (/opt/SUNWspro/bin/../WS6U2/bin/sparcv9/dbx)
    Thanks,
    Alex
    Andy Piper <[email protected]> wrote:
    "Alex" <[email protected]> writes:
    It sounds like you have done all the right steps. More comments in-line:
    1.
    When using the 'idl' command to generate the C++ code, I did not forget to use a '-i' to generate the implementation files: SynchronousAdapterException_i.cpp and SynchronousAdapterException_i.h.
    Ok.
    In my "play with it to make it work" phase, I also did this for: SynchronousAdapterEx_i.cpp and SynchronousAdapterEx_i.h.
    This should not be necessary - XXXEx is an IDL exception.
    2.
    In my C++ client, I did not forget to register the factory as such:
    orb->register_value_factory
    ((char* const)com::trs::cv::comm::cnja::synchadapter::_tc_SynchronousAdapterException->id(),
    (CORBA::ValueFactory)
    new com_trs_cv_comm_cnja_synchadapter_SynchronousAdapterException_factory());
    Ok. What does the Exception actually look like in Java? (Incidentally, the _i file creates a useful helper function called _register() which will do this step for you.) Did you register all member classes? You are correct about most derived types being the important ones, but you also need to register member classes if they are not standard.
    This seemed to work because I actually put debug cout statements in the 'com_trs_cv_comm_cnja_synchadapter_SynchronousAdapterException_factory' in the file SynchronousAdapterEx_i.cpp and it confirmed that it was constructed and the reference count was incremented.
    3.
    I know for sure that it is a 'SynchronousAdapterException' being thrown on the server side because all the find method does at this point is:
    throw new SynchronousAdapterException("test1");
    Ok (incidentally, you are not running the server on JDK 1.4, are you? That would cause problems).
    Questions:
    1. I read at a few places that one only has to register the "most derived" class being thrown (in my case 'SynchronousAdapterException').
    Yes.
    What does one do with all the other exceptions generated?
    These are just needed at compile time so that you don't get undefined symbols.
    2. What step could I have missed?
    It's not clear. You should try running in a debugger and seeing where it's falling over.
    andy

  • Pro*c program causing core dump in 11g

    hello every one,
    I am trying to debug a Pro*C program which is resulting in a core dump. It used to work fine with the Oracle 10g precompiler but is causing a core dump with 11g. When I run dbx, here is what I get.
    dbx wreg309
    Type 'help' for help.
    [using memory image in core]
    reading symbolic information ...
    Segmentation fault in u_fsetcodepage_3_8 at 0x9000000014f4f70 ($t1)
    0x9000000014f4f70 (u_fsetcodepage_3_8+0x68) f87f0010 std r3,0x10(r31)
    Any ideas what this means ? Thanks.

    Hi,
    I came across your problem on the Oracle Discussions Forum from back in June of 2009.
    I am working with a Pro*C program in Banner 8 and I am getting the same message from a core dump that you got. I was hoping you might have written down what you did to resolve it.
    My Pro*C program is key to running all the SQR code we have, so it's very important. The version of SQR that gets linked into it is 32-bit and our environment is 64-bit. Our contract with Oracle for SQR has lapsed (it's a long and expensive story, and this is probably not the place). My whole migration to Oracle 11 is being held up by this.
    I realize it's been a while since you worked on it, but if you could tell me how you resolved your problem, I might be able to do the same.
    Thanks,
    Tom Mayne
    North Shore Community College
    Danvers MA
    [email protected]
    cell 978 423 6867

  • OCI connect error reported to cause core dump

    I use OCI in a library I developed. Other developers use my library for their development purposes. Last night the Oracle server went down for patching, and a user reported a core dump that I recognized as coming from the OCI section of my library.
    I do OCI initialization, select, execute a query, and clean up.
    Have there been any reports of the case where a connection is initialized and the server goes down after successful initialization, but before the calls that execute a query or commit?
    Any comment???
    I have the error message below, and have replaced proprietary info with xxxxxxx.
    thanks,
    --M
    Fatal NI connect error 12541, connecting to:
    (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxxxxx)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=xxxxxx)(CID=(PROGRAM=xxxxxx@fashing)(HOST=xxxxxx)(USER=xxxxx))))
    VERSION INFORMATION:
    TNS for Solaris: Version 10.1.0.4.0 - Production
    TCP/IP NT Protocol Adapter for Solaris: Version 10.1.0.4.0 - Production
    Time: 15-FEB-2007 15:49:09
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12541
    TNS-12541: TNS:no listener
    ns secondary err code: 12560
    nt main err code: 511
    TNS-00511: No listener

    We have seen core dumps in system calls earlier and have tried to debug them. We did not have exactly the same stack as you, but the core used to show a stack in _malloc.
    As it turned out, we had out-of-bounds array reads (we used Purify to detect this), which resulted in this dump. So the stack shown in this case could be totally misleading, and the real cause could be somewhere else.
    Hope this helps.

  • What is the maximum number of recursive call as i think it is causing short dump

    Hi All,
    This may not have a specific answer, I suppose, but an approximation is also fine. There was a short dump in production (the background job failed, and has been failing each day for a week now). While analysing it, I found that the cause was numerous calls to the same routine within itself (recursion). I don't have the exact number, but there were at least 200.
    The error message which I get is below. Is there any solution for this?
    Category: Resource Shortage
    Runtime Errors: TSV_TNEW_PAGE_ALLOC_FAILED
    Short text
        No more storage space available for extending an internal table.
    What happened?
        You attempted to extend an internal table, but the required space was
        not available.
    It would be great to know the maximum number of recursive calls so that I can make the necessary changes, or, if someone has an alternative solution, please help.
    Regards,
    Dhruvin
    P.S.: it's a production environment issue.

    The dump is not caused by the depth of recursion but by your process exceeding its memory limits. These are controlled by profile parameters (system administrator territory) and "normally" (unless changed to other values) allow up to 4 GB of memory usage (mostly by internal tables). You can quickly check the current memory consumption of your active process via SM50.
    Make sure that the routine that is called recursively (or other relevant parts of your process) frees up memory by using CLEAR and FREE statements for internal tables that are no longer needed.
    If data is being selected from the database, also look into block processing via the PACKAGE SIZE option; see the sketch below.
    Thomas
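    To make the last two suggestions concrete, here is a minimal sketch of the pattern Thomas describes: fetch and process the data in fixed-size blocks and hand the internal-table memory back explicitly. The table AUFK, the block size and the report name are just placeholders for illustration:

        REPORT z_block_processing_sketch.

        DATA lt_orders TYPE STANDARD TABLE OF aufk.

        " PACKAGE SIZE turns the SELECT into a loop that fills the table with
        " at most 1000 rows per pass, so memory stays bounded no matter how
        " large the result set is.
        SELECT * FROM aufk
                 INTO TABLE lt_orders
                 PACKAGE SIZE 1000.

          " ... process the current block of lt_orders here ...

        ENDSELECT.

        " Release the table's memory once processing is finished (CLEAR only
        " empties it; FREE also returns the allocated pages to the system).
        FREE lt_orders.

    The same idea applies inside the recursive routine: CLEAR or FREE any large local tables before descending further, so each level of the recursion does not keep its own full copy alive.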

  • _smalloc cause core dump

    Hi:
    I got a core dump in _smalloc. Is it due to memory corruption in my application? The stack frames from gdb are attached:
    #0 0xff0456c8 in _smalloc () from /usr/lib/libc.so.1
    (gdb) bt
    #0 0xff0456c8 in _smalloc () from /usr/lib/libc.so.1
    #1 0xff04570c in malloc () from /usr/lib/libc.so.1
    #2 0x1fde4 in debug_malloc (nsize=11, ntype=1 '\001') at comm_utility.cpp:312
    #3 0x1c144 in LNXSocket_OnReceive (sockfd=12) at comm_connect.cpp:418
    #4 0x1cbc8 in Select_socket () at comm_connect.cpp:612
    #5 0x1e028 in main () at comm.cpp:177
    (gdb) quit
    Thanks and Regards

    We have seen core dumps in system calls earlier and have tried to debug them. We did not have exactly the same stack as you, but the core used to show a stack in _malloc.
    As it turned out, we had out-of-bounds array reads (we used Purify to detect this), which resulted in this dump. So the stack shown in this case could be totally misleading, and the real cause could be somewhere else.
    Hope this helps.

  • Xcelsius Causes core dump when importing a workbook

    When I try to import a certain workbook, a message comes up saying that the server is busy and to switch to the application. Switching to the application does nothing, and the only way to get out of this is to kill Xcelsius, which then causes a blue screen and core dump. Does anyone know how to address this problem?

    Hi there,
    I have posted a suggestion via the link [Server Busy issue with Xcelsius|Re: Server Busy], or you can refer to my following steps:
    1. Open the Windows Task Manager.
    2. Go to tab Applications and look for task Microsoft Excel - Compatibility Check and double click on it. This action will bring you to the small windows of Excel which alerts you about the compatibility check of the Excel file.
    3. Click button Continue to process the import.
    4. Go back to Xcelsius and click Retry button to import the Excel sheet.
    Well, the issue is solved.
    Hopefully these tips will be useful for you.
    Cheers,
    Danny Pham

  • Log Reader Agent error "could not execute sp_replcmds' and causes stack dump

    Publisher/Subscriber db:  SQL 2008 R2, 2000 compatibility mode
    Distributor database is on separate server.
    (Note: there is another database on this instance that is running replication without error; it is not in compatibility mode.)
    After snapshot agent finishes, the log reader agent starts and fails immediately with this error in the Agent Job.
    Then I get a SEV20 error and stack dump in the error logs.
    Date  6/12/2014 3:12:26 PM
    Log  Job History (SERVER\INSTANCE-DBNAME-43)
    Step ID  2
    Server  ######RT02
    Job Name  SERVER\INSTANCE-DBNAME-43
    Step Name  Run agent.
    Duration  00:00:01
    Sql Severity  0
    Sql Message ID  0
    Operator Emailed  
    Operator Net sent  
    Operator Paged  
    Retries Attempted  0
    Message
    2014-06-12 20:12:26.302 Copyright (c) 2008 Microsoft Corporation
    2014-06-12 20:12:26.302 Microsoft SQL Server Replication Agent: logread
    2014-06-12 20:12:26.302
    2014-06-12 20:12:26.302 The timestamps prepended to the output lines are expressed in terms of UTC time.
    2014-06-12 20:12:26.302 User-specified agent parameter values:
       -Publisher SERVER\INSTANCE
       -PublisherDB DBNAME
       -Distributor ######RT02
       -DistributorSecurityMode 1
       -Continuous
       -XJOBID 0x8958DF32810C6849B28A037A8FF8DD92
       -XJOBNAME SERVER\INSTANCE-DBNAME-43
       -XSTEPID 2
       -XSUBSYSTEM LogReader
       -XSERVER SERVER\INSTANCE
       -XCMDLINE 0
       -XCancelEventHandle 0000000000000F98
       -XParentProcessHandle 0000000000000F34
    2014-06-12 20:12:26.459 Parameter values obtained from agent profile:
       -pollinginterval 5000
       -historyverboselevel 1
       -logintimeout 15
       -querytimeout 1800
       -readbatchsize 500
       -readbatchsize 500000
    2014-06-12 20:12:26.493 Status: 4096, code: 20024, text: 'Initializing'.
    2014-06-12 20:12:26.493 The agent is running. Use Replication Monitor to view the details of this agent session.
    2014-06-12 20:12:27.885 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.'.
    2014-06-12 20:12:27.886 The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.
    2014-06-12 20:12:27.886 Status: 0, code: 21, text: 'Warning: Fatal error 3624 occurred at Jun 12 2014  3:12PM. Note the error and time, and contact your system administrator.'.
    2014-06-12 20:12:27.886 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.'.
    I've tried removing replication and setting it back up again, restarting SQL, and restarting the server itself.
    Let me know if you need any more information to help troubleshoot.  Thanks.
    Please help, thanks. 

    Hi,
    Enable verbose logging and check the results.
    Add the following parameters to the Log Reader Agent: -Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 2
    Please refer following KB article for your reference -
    http://support.microsoft.com/kb/q312292/
    Thanks.
    Tracy Cai
    TechNet Community Support

  • /AdminMain causes core dump

    I've installed WebLogic on a Solaris (SPARC) 2.7, JDK 1.2.2.
    I'm just going through WebLogic and testing the various functionalities.
    When I load the /AdminMain page the first time, it works beautifully.
    However, if I click on any of the links or reload the page, the web server goes down in flames (core dump).
    Any Ideas?
    I have attached two files: one listing the web server's output at startup, and the second being the web server's output when it dies.
    Regards
    Johan Locke
    http://www.JohanLocke.co.za
    Certified Oracle 8 & 8i DBA
    Certified Oracle Developer
    Dimension Data i-Commerce Internet Services
    Direct Line: +27 11 283 5789
    mailto:[email protected]
    http://www.is.co.za
    [webLogic_startup.txt]
    [web_logic_dump.txt]

    Yes, there is a problem with JDK 1.2.2_05a. It's documented on our platform support page.
    Check out: http://www.weblogic.com/platforms/index.html#solaris
    Kumar
    Cliff Hafen wrote:
    I have the same problem and can't go back to an earlier JDK. Running Solaris 2.7, WL5.1sp4. I can provide more info if needed.
    Cliff
    First Last wrote:
    I got the same problem with 1.2.2_05a, but the problem goes away once I change it back to 1.2.1_04. Is there a problem with running WebLogic 5.1 under JDK 1.2.2_05a?
    Michael Girdley wrote:
    Are you on JDK version 1.2.2_05a?
    Thanks,
    Michael
    Michael Girdley
    BEA Systems Inc
    "Johan Locke" <[email protected]> wrote in message
    news:[email protected]..
    I've installed WebLogic on a Solaris (SPARC) 2.7, JDK 1.2.2.
    I'm just going through WebLogic and testing the various functionalities.
    When I load the /AdminMain page the first time, it works beautifully.
    However, if I click on any of the links or reload the page, the web server goes down in flames (core dump).
    Any ideas?
    I have attached two files: one listing the web server's output at startup, and the second being the web server's output when it dies.
    Regards
    Johan Locke
    http://www.JohanLocke.co.za
    Certified Oracle 8 & 8i DBA
    Certified Oracle Developer
    Dimension Data i-Commerce Internet Services
    Direct Line: +27 11 283 5789
    mailto:[email protected]
    http://www.is.co.za

  • Extended Classic Scenario causing a dump "Buffer Table not up to date'

    Hi All,
    I could create a Shopping Cart in the Classic Scenario.
    After activating the global settings together with the back-end groups responsible for the ECS, I also have a BADI in place for a certain category ID. I expected that for this category ID, if I create a shopping cart through 'Describe Requirement', it would create a local PO and a replica would be generated in the back-end system.
    But as soon as I add the item to the shopping cart, I get an ABAP dump indicating the buffer table is not up to date and a program error "Uncaught exception" in the program SALBBP_PDH.
    I also tried searching SAP Notes, but in vain. Working on SRM 5.0 release level 12 with ECC 6.0 as the back end.
    Has anybody faced this before? Please give me some clues.
    Thanks & Regards,
    Nagarajan

    Hi
    This is a Basis consultant's responsibility; you should take help from Basis.
    Note 26171 - Possible entry values for command field ("OK-code")
    Buffer commands:
    WARNING: Resetting buffers can significantly change the performance of the entire system for a long time. It should therefore only be used when there is a good reason to do so. As of Release 3.0B, system administrator authorization is required (authorization object S_ADMI_FCD). The action is noted in the system log.
    /$SYNC- This resets all buffers of the application server
    /$CUA - This resets the CUA buffers of the application server
    /$TAB - This resets the table buffers of the application server
    /$NAM - This resets the nametab buffers of the application server
    /$DYNP - This resets the screen buffers of the application server
    regards
    Muthu

  • WBS Conversion FM using RFC Causing Short Dumps

    Hello,
    I have been trying to validate WBS element entries by using an RFC call to an FM on the back end. However, I keep getting short dumps on the back-end system with the message 'Statement "MESSAGE E" is not allowed in this form.' The custom back-end FM calls one of the CPJN FMs like CONVERSION_EXIT* that deal with WBS elements.
    So I tried changing the custom FM to use the data element that does the conversion instead of the FM. Now I get a short dump on the front end: CALL_FUNCTION_REMOTE_ERROR with the message 'Key does not correspond to mask: B/0000-XXXXXXX.0.0.0'.
    Has anyone been successful without duplicating the CONVERSION_EXIT* FMs in order to validate a WBS element entry remotely?
    Thank you in advance for any assistance.
    Regards, Dean.
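    The 'MESSAGE E ... not allowed in this form' dump usually means the conversion exit raised an error message in a context (RFC/background) where it cannot be displayed. One way around it, without duplicating the conversion logic, is to wrap the call inside your RFC-enabled FM and use the generic ERROR_MESSAGE exception, which turns any MESSAGE E/A raised inside the called FM into sy-subrc and sy-msg* fields you can hand back to the caller. A hedged sketch follows; the conversion-exit name is the usual one for WBS elements (verify it on your release), and the sample external value is just the masked example from the post:

        DATA: lv_extern(24) TYPE c VALUE 'B/0000-XXXXXXX',
              lv_intern     TYPE ps_posnr.

        CALL FUNCTION 'CONVERSION_EXIT_ABPSP_INPUT'
          EXPORTING
            input         = lv_extern
          IMPORTING
            output        = lv_intern
          EXCEPTIONS
            error_message = 1        " catches MESSAGE E/A raised inside the FM
            OTHERS        = 2.

        IF sy-subrc <> 0.
          " sy-msgid/sy-msgno/sy-msgv1 now hold the original message -
          " return them through the RFC interface instead of dumping.
          WRITE: / 'Invalid WBS element:', sy-msgid, sy-msgno, sy-msgv1.
        ENDIF.

    This keeps the validation on the back end while the front end only ever sees a clean return code and message.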

    Hello Dean
    On ERP 6.0 (IDES) there is the RFC-enabled fm /SPE/WBS_ELEM_CONV_INT_EXT available.
    If you feed the fm with an external ID it tries to convert it into the corresponding internal ID and vice versa. Perhaps this fm might be useful for you.
    Probably the BAPI BAPI_PROJECT_EXISTENCECHECK is more useful (it can be fed either an internal or an external ID).
    Regards
      Uwe

  • SRW.APPLY_DEFINITION cause (core dumped) Segmentation fault

    When I run the report under the Concurrent Manager, i.e. within Oracle Applications, I get this error:
    (core dumped) Segmentation fault.
    I use SRW.APPLY_DEFINITION in the After Parameter Form trigger. When I remove SRW.APPLY_DEFINITION, the report runs fine.
    I tried following in the command line:
    $ ar60runb report=/PATH_TO_RDF_FILE/NAME.rdf batch=yes destype=file desname=/PATH_TO_OUTPUT_FILE/NAME.out desformat=XML
    and I get this error:
    ar60runb: uisfz.c:173: uisfzfn_Fns: Assertion `0' failed.
    Aborted (core dumped)
    When I remove "batch=yes" or use "batch=no" everything passed fine.
    I get the same result when I use .RDF file without SRW.APPLY_DEFINITION in the code.
    I have applied the solution from Note:368457.1 but it didn't help.
    I want to set "batch=no" when I run report under Oracle Applications or just resolve this problem somehow.
    Platform     SUSE \ UnitedLinux x86-64
    Product Version      11.5.10.2

    Come on, guys. Help! :(
