Problem Analysis

Hi, as a PI administrator, what needs to be considered when detecting a problem and analyzing it? Is there any document available for this?

Hi,
Refer to these links:
http://help.sap.com/saphelp_nw04/helpdata/en/6a/e6194119d8f323e10000000a155106/content.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/108ba391-e826-2a10-608f-c1769c51dc29
Refer to these blogs:
Shortest Path Problem: A Solution from XI (Part I)
XI CCMS Alert Monitoring : Overview and Features
Regards,
Surya

Similar Messages

  • Problem: analysis with essbase data

    Hi experts, I have an Essbase multidimensional database,
    so in practice I don't use my Oracle administration.
    I see something weird on the Answers page when I try to do an analysis.
    If I select my dimensions and a measure I can see some data, and it works fine.
    For example:
    YEAR GEN1 - PERIOD GEN1 - PERIOD GEN2- Measure
    2010- PERIOD- MONTHLY- 250
    2010- PERIOD- CUMULATIVE- 250
    My problem appears if I filter my measure by a dimension member.
    YEAR GEN1 - PERIOD GEN1 - PERIOD GEN2- Measure FILTERED by GEN2='MONTHLY' - Measure FILTERED by GEN2='CUMULATIVE'
    I expected the following result:
    2010- PERIOD- MONTHLY- 250 - (blank)
    2010- PERIOD- CUMULATIVE- (blank) - 250
    But I see this result:
    2010- PERIOD- MONTHLY- (blank) - (blank)
    So I can't filter my measures.
    Any help?
    Thanks!!

    Hi Dhar!
    This is the XML code of my analysis:
    <saw:report xmlns:saw="com.siebel.analytics.web/report/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlVersion="201008230" xmlns:sawx="com.siebel.analytics.web/expression/v1.1">
    <saw:criteria xsi:type="saw:simpleCriteria" subjectArea="&quot;OBI_ACTI&quot;">
    <saw:columns>
    <saw:column xsi:type="saw:regularColumn" columnID="c4bf3a8497e63f8ab">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Año"."Gen2,Año"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c2c5abbacfab55a66">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Período"."Gen2,Período"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c1ab901b03bc40bf6">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Período"."Gen3,Período"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c2b4e4457d2772c76">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">FILTER("Measures"."value" USING ("Escenario"."Gen2,Escenario" = 'Real'))</sawx:expr></saw:columnFormula>
    <saw:tableHeading>
    <saw:caption fmt="text">
    <saw:text>Measures</saw:text></saw:caption></saw:tableHeading>
    <saw:columnHeading>
    <saw:caption fmt="text">
    <saw:text>Real</saw:text></saw:caption></saw:columnHeading></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c22300e3c1d0f87c7">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">FILTER("Measures"."value" USING ("Escenario"."Gen2,Escenario" = 'Ppto_Def'))</sawx:expr></saw:columnFormula>
    <saw:tableHeading>
    <saw:caption fmt="text">
    <saw:text>Measures</saw:text></saw:caption></saw:tableHeading>
    <saw:columnHeading>
    <saw:caption fmt="text">
    <saw:text>Ppto</saw:text></saw:caption></saw:columnHeading></saw:column></saw:columns></saw:criteria>
    <saw:views currentView="0">
    <saw:view xsi:type="saw:compoundView" name="compoundView!1">
    <saw:cvTable>
    <saw:cvRow>
    <saw:cvCell viewName="titleView!1">
    <saw:displayFormat>
    <saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow>
    <saw:cvRow>
    <saw:cvCell viewName="tableView!1">
    <saw:displayFormat>
    <saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow></saw:cvTable></saw:view>
    <saw:view xsi:type="saw:titleView" name="titleView!1"/>
    <saw:view xsi:type="saw:tableView" name="tableView!1">
    <saw:edges>
    <saw:edge axis="page" showColumnHeader="true"/>
    <saw:edge axis="section"/>
    <saw:edge axis="row" showColumnHeader="true">
    <saw:edgeLayers>
    <saw:edgeLayer type="column" columnID="c4bf3a8497e63f8ab"/>
    <saw:edgeLayer type="column" columnID="c2c5abbacfab55a66"/>
    <saw:edgeLayer type="column" columnID="c1ab901b03bc40bf6"/>
    <saw:edgeLayer type="column" columnID="c2b4e4457d2772c76"/>
    <saw:edgeLayer type="column" columnID="c22300e3c1d0f87c7"/></saw:edgeLayers></saw:edge>
    <saw:edge axis="column"/></saw:edges></saw:view></saw:views></saw:report>
    The problem: if the VALUE column is filtered by one dimension member it works, but if I add another value column filtered by a different member of the dimension, all the values disappear.
    This is the result:
    http://imageshack.us/photo/my-images/855/resultmd.jpg/
    And it's wrong, because data exists in these scenarios.
    What is the problem?
    Thanks!

  • "cor" file for problem analysis

    Hello,
    I have 2 "sections" in "diagnosis files" that appeared after a database check, one on 29th 01 and the other yesterday.
    What can i do with these files ?
    ****For the "first" core file, i have done 2 checks :
    First :
    2009-01-29 18:02:18 10874 ERR 51080 SYSERROR -9005 BD Illegal key
    2009-01-29 18:03:10 10874 ERR 53000 B*TREE   07010000000000019823000000000000
    2009-01-29 18:03:10 10874 ERR 53000 B*TREE   Index Root  480166
    2009-01-29 18:03:10 10874 ERR 53348 B*TREE   bd402SearchIndexForQuali: 481217
    2009-01-29 18:03:10 10874 ERR 53250 INDEX    Bad Index 480166 (Root)
    2009-01-29 18:03:10 10874 ERR 53250 INDEX    Reason "System error: BD Invalid invli"
    2009-01-29 18:03:11 10874 ERR 51080 SYSERROR -9041 BD Index not accessible
    2009-01-29 18:04:28 10876 ERR 53019 CHECK    Base error: index_not_accessib
    2009-01-29 18:04:28 10876 ERR 53019 CHECK    Root pageNo: 480166
    2009-01-29 18:04:30 10876 ERR 53000 CHECK    Check data finished unsuccessfully
    2009-01-29 18:14:58 10873 ERR 53019 CHECK    Base error: index_not_accessib
    2009-01-29 18:14:58 10873 ERR 53019 CHECK    Root pageNo: 480166
    2009-01-29 18:14:59 10873 ERR 53000 CHECK    Check data finished unsuccessfully
    Second :
    2009-01-29 18:17:21  9746 ERR 53000 CHECK    Check data finished unsuccessfully
    2009-01-29 18:29:31  9744 ERR 53000 B*TREE   07010000000000019823000000000000
    2009-01-29 18:29:31  9744 ERR 53000 B*TREE   Index Root  480166
    2009-01-29 18:29:31  9744 ERR 53367 B*TREE   bd400_DeleteSubTrees: 481217
    For the "Second" core file, i have done 1 check :
    2009-02-05 21:07:10  4672 ERR 53370 B*TREE   Illegal record length: 7823
    2009-02-05 21:07:10  4672 ERR 53370 B*TREE   Corrupted data page: 206842
    2009-02-05 21:07:10  4672 ERR 53000 B*TREE   0701000000000001CBDE000000000000
    2009-02-05 21:07:10  4672 ERR 53000 B*TREE   Index Root  662212
    2009-02-05 21:07:10  4672 ERR 53250 INDEX    Bad Index 662212 (Root)
    2009-02-05 21:07:10  4672 ERR 53250 INDEX    Reason "System error: BD Illegal entry"
    2009-02-05 21:07:10  4671 ERR 53019 CHECK    Base error: index_not_accessib
    2009-02-05 21:07:10  4671 ERR 53019 CHECK    Root pageNo: 662212
    2009-02-05 21:20:02  4671 ERR 53000 CHECK    Check data finished unsuccessfully
    I suspect a disk problem. The disk subsystem is RAID 10 (6 disks: 3 striped * 2 disks mirrored).
    I haven't yet installed specific software to read the status of the SAS card and the disks, but
    I will search for one.
    One big question: when a "corrupted data page" occurs, how does MaxDB handle this problem?
    Is the data in the page lost after the repair?

    > I have 2 "sections" in "diagnosis files" that appeared after a database check, one on 29th 01 and the other yesterday.
    >
    > What can i do with these files ?
    Since you've to ask the answer is: nothing.
    Developers can use the COR files (these are the dumped corrupt pages) and check, why they were found corrupt.
    > I big question : when a "corrupted date  page" arrived, how do maxDB handle these problem ?
    > Does le data in the page be lost after repair ?
    MaxDB does not handle corruptions different than any other DBMS for SAP.
    It reports that there is something wrong and gives up trying to read the data.
    (Unfortunately MaxDB has yet to learn that not all corruptions are a reason to crash - but that's a different topic...).
    Corrupted data can never be repaired - by no DBMS available. It may be possible to recreate the data (e.g. Index rebuild, reloading of BW data etc.) but the database software cannot know what was supposed to be in the damaged data page.
    That's one of the reasons, why taking and checking database backups is crucial.
    Anyhow, if you are a SAP customer, don't miss to open a support call for this.
    regards,
    Lars
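    To illustrate the "recreate rather than repair" point: a secondary index can simply be rebuilt from the (intact) table data. Below is a minimal, purely hypothetical Java/JDBC sketch of doing that; the driver class, connection URL, credentials, table and index names are assumptions for illustration and not taken from this thread - in a real SAP system you would use the standard SAP/MaxDB tools and open the support call mentioned above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Hypothetical sketch only: drop and recreate a damaged secondary index so it
    // is rebuilt from the table data (the data itself cannot be "repaired").
    public class RebuildIndexSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("com.sap.dbtech.jdbc.DriverSapDB"); // MaxDB JDBC driver, assumed on the classpath
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sapdb://dbhost/MYDB", "dbuser", "secret"); // hypothetical host, database and credentials
                 Statement stmt = con.createStatement()) {
                stmt.executeUpdate("DROP INDEX \"MYTABLE_IDX1\" ON \"MYTABLE\"");               // hypothetical names
                stmt.executeUpdate("CREATE INDEX \"MYTABLE_IDX1\" ON \"MYTABLE\" (\"FIELD1\")");
            }
        }
    }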

  • Integration engine problem

    Dear All ,
    I am facing a problem related to file generation and pickup. In the file-to-IDoc and IDoc-to-file scenarios, processing is not being done by the Integration Engine.
    When I check the status of the message in SXMB_MONI,
    it shows "Recorded"; it does not show "Processed successfully". It also shows that the message is waiting in the queue "to be delivered".
    Can somebody suggest how to remove this problem? IDoc-to-file and file-to-IDoc were working fine before this problem occurred.
    Thanks in advance
    Regards
    Prabhat

    Hi Prabhat
    Follow the steps mentioned in this link; it is a Problem Analysis Guide:
    http://help.sap.com/saphelp_nw04/helpdata/en/6a/e6194119d8f323e10000000a155106/content.htm
    cheers
    Sameer

  • Material ledger (ML) price analysis,  LACCS & CKMLLA return err. CKMLLA001

    Dear colleagues,
    We're experiencing problems analysing actual cost component split on material level after Material Ledger closing and need some help.
    OSS note 872421 "Cost component split display for activity types" comes with two transactions for the purpose of analysing prices. But neither transaction (LACCS or CKMLLA) works in our system; both return the error message "Header not found: message no. CKMLLA001" when trying to view ML prices (F5) or get the price report from LACCS, as well as on the price report (F8) in CKMLLA.
    There is no further information available on message CKMLLA001. Did anyone here happen to get this message? What could it mean, and what should we check?
    We need to analyse ML prices because the values in the cost component split (according to CKM3, Itemization) differ greatly from what was expected on the basis of the price analysis in Cost Center Accounting (e.g. S_ALR_87013611).
    OSS note 880217 "Cost component split and price do not match" is implemented in our system, and the recommendations from the corresponding note 1090144 "Preventing rounding errors in prices" are taken into account. For activity prices the number of significant digits is set to 10, and "no optimization" is not selected. Although we don't get any warning 204 "Split and price of cost elem./activity type &1/&2 not consistent" during multi-level price determination, the totals are close to, but not exactly, equal in ML costing compared to CC costing.

  • Problem with SNC configuration

    We have GUI SNC configured for almost 10 systems and it works. Now we want to configure it for a new one and unfortunately it does not work.
    We have checked the config:
    - parameters in RZ10
    - existence of Kerberos dll in /windows/system32
    - active directory settings for a SAPServiceSID user
    - advance setting in saplogon
    Everything looks fine, yet when we try to connect we get:
    SAP System message: Secure Network Layer (SNC) error
    Please help

    Hi,
    SAP Note 95810 - Problem analysis when using SNC with Secude - says the following about your problem. Please check.
    2.1 Error in the Security Network Layer
    During the logon a dialog box appears:
    SAP system message:
    'Error in the Security Network Layer'.
    In this case, a problem was recognized in the SNC layer in the application server.
    The cause must be checked in the trace files of the work processes (dev_w*).
    Since it cannot be said in which work process the logon was attempted, you must scan the trace files of all work processes if necessary.
    Transaction ST11 displays the corresponding files sorted by last access time, so you should begin with the first file shown there.
    Error scenarios which can be identified in dev_w*:
    2.1.1 The signature of a certificate cannot be checked.
    ERROR => SncPEstablishContext()==SNCERR_GSSAPI  [sncxxall.c ....]
          GSS-API(maj): A token had an invalid signature
          GSS-API(min): Certification path incomplete
        Unable to establish the security context
    <<- SncProcessInput()==SNCERR_GSSAPI
    ERROR => ThSncIn: SncProcessInput (SNCERR_GSSAPI) [thxxsnc. ....]
    2.1.2 Invalid PIN
    ERROR => SncPEstablishContext()==SNCERR_GSSAPI  [sncxxall.c ....]
          GSS-API(maj): Miscellaneous failure
          GSS-API(min): Invalid PIN
        Unable to establish the security context
    <<- SncProcessInput()==SNCERR_GSSAPI
    ERROR => ThSncIn: SncProcessInput (SNCERR_GSSAPI) [thxxsnc. ....]
    Regards

  • 2008 R2 Server Time problems - Gaining/Losing 6s/min

    I'm having problems with a server on our network. I can go for days without a problem, but then without warning the server will either gain or lose time at anywhere up to around 6 seconds per minute. The server is running Server 2008 R2, is a member server, and runs IIS hosting internal sites with SiteCore CMS v6.4.1.
    The w32time service is configured as follows:
    the PDC retrieves its time from uk.pool.ntp.org; all other DCs and member servers have the w32tm service configured with /syncfromflags:domhier.
    The server in question is a virtual machine running on a Server 2012 Hyper-V cluster. All of the virtual servers on there have the Hyper-V time sync service enabled; however, we have edited the registry to set HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider\Enabled to 0.
    w32tm /query /source responds with a DC local to the server.
    I've been monitoring the server for a number of days by outputting the following command to a file on the server: w32tm /stripchart /computer:localdc /dataonly. On the whole the server has been within +/-5 seconds of the reference computer.
    At approximately 13:06 this afternoon the server started to lose approximately 0.1s every second and is still doing so. I've tried running w32tm /resync /computer:localdc /nowait, but to no avail. I've had to manually change the time in order to get services available again.
    I've searched through the event logs on the server at or around 13:06 and can't find anything of any significance happening at that time. I'm unable to restart the server during work hours as it's got key services running on it.
    Any help on how to keep the time in sync on the server would be appreciated.
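    For reference, a minimal sketch of automating the monitoring step described above, simply wrapping the exact w32tm stripchart command from the post and appending its output to a log file; the log path and the use of a small Java wrapper are illustrative assumptions, not part of the original setup:

    import java.io.File;

    // Illustrative only: runs the same command the poster used and appends its
    // output to a hypothetical log file for later review of the drift.
    public class TimeDriftLogger {
        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "w32tm", "/stripchart", "/computer:localdc", "/dataonly");
            pb.redirectErrorStream(true);
            pb.redirectOutput(ProcessBuilder.Redirect.appendTo(new File("C:\\temp\\timedrift.log")));
            Process p = pb.start();   // keeps running until stopped, like the manual command
            p.waitFor();
        }
    }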

    I would recommend that you read the Wiki I started here: http://social.technet.microsoft.com/wiki/contents/articles/18573.time-synchronization-in-active-directory-forests.aspx
    As I mentioned:
    Microsoft does not guarantee or support the accuracy of the Windows Time Service between nodes on a network, as this service is not a full-featured NTP solution that can meet time-sensitive application needs. The Windows Time Service was not designed to maintain time synchronization to within one (1) or two (2) seconds.
    If you have an application that needs a high-accuracy NTP solution, the Windows Time Service should not be used. Instead, third-party software is available to satisfy this need.
    High Accuracy W32time Requirements: http://blogs.technet.com/b/askds/archive/2007/10/23/high-accuracy-w32time-requirements.aspx
    Support boundary to configure the Windows Time service for high accuracy environments: http://support.microsoft.com/kb/939322/
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK
    Thanks for that Mr X, I have already been on that page and implemented the Windows Time Service in the manner described in the article and in the other articles relating to configuring the Windows Time Service in a Hyper-V environment.
    The issue does not relate to the configuration of the service as a whole, as my other 72 server instances are working fine; it is just this one that is giving me a problem. I have no need for a high-accuracy environment, just one that keeps general
    time and doesn't lose or gain 6 seconds/minute at random intervals throughout the day.
    I have been doing some further problem analysis since my original post with regard to the Hyper-V time sync service, and my results are as follows.
    When the time is a long way out on the server, restarting the Hyper-V time sync service resets the time to the correct time instantly.
    If you live-migrate the VM to another host in the Hyper-V cluster when the time is incorrect, again the time resets to the correct time instantly.
    I have reinstalled the Hyper-V integration services, but this has had no effect on the problem.
    As a last-ditch attempt to resolve the issue I have re-enabled the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\w32time\TimeProviders\VMICTimeProvider
    registry key, so that when you run a w32tm /query /source command the result is "VM IC Time Synchronization Provider".
    I'll report back my findings.

  • Incoming Survey/Mail Problem

    Hi Gurus,
    I have run into an incoming survey/mail problem and need your help:
    A "Survey" can be created and sent to a vendor's external mail address successfully via the portal (Portal --> Supplier Evaluation --> Create Survey), but the problem is that the system does not receive any incoming mails or survey results. Kindly please help.
    Thanks and best regards
    Alan

    Hi Alan,
    Please check these SAP Notes:
    779972     SURVEY: Configuration required to receive emails
    607108     Problem analysis when sending or receiving e-mails
    552616     FAQ: SAPoffice - Sending to external recipients
    455140     Configuration of e-mail, fax, paging or SMS using
    Summer Wang

  • ITS repeat logon Problem ?

    Hi,
    Many of the users are facing repeated logon prompts while accessing ITS in our system. Could you please help me?
    ITS details:
    ITS version 6200.1035.9507.7, build 1121051
    Installation type: single host
    Thanks
    Navin

    Hello,
    I think you might be having a problem with Single Sign-On. Have you set the parameter ~mysapcomusesso2cookie = 1?
    Also, check the following SAP Note:
    Note 356691 - Problem analysis: SAP logon ticket with Workplace SSO
    Regards,
    Siddhesh

  • Scenario IDoc-XI-FlatFile

    Hi,
    I am trying to push an IDoc from SAP R/3 4.7 to a flat file through XI. The message type is HRMD_ABA. The IDoc is forwarded successfully from SAP R/3, but I cannot see the details or the IDoc in XI at all, neither in SXMB_MONI nor in the Message Display Tool. Earlier I forwarded many IDocs (MATMAS01-04, CREMAS, customized IDocs, etc.) to flat files successfully and monitored them in XI. This time I cannot monitor anything in XI; no messages are displayed.
    Please tell me how to resolve this issue.
    Regards
    Sridhar Raju Mahali

    Hi
    Go through this Problem Analysis Guide on "Sending an IDoc through XI Failed": http://help.sap.com/saphelp_nw04/helpdata/en/6a/e6194119d8f323e10000000a155106/content.htm
    Hope it helps
    Regards
    Arpit

  • Error Logs in the Default Trace

    Hi All,
    I am getting the following error every second in the default trace log, but no users are affected by it. Can anybody tell me what this error is about?
    Date : 12/20/2006
    Time : 11:18:43:702
    Message : Exception in method: com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
    [EXCEPTION]
    com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.getLastContainer(ServerContextImpl.java:261)
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.lookup(ServerContextImpl.java:624)
         at com.sap.engine.services.jndi.implclient.ClientContext.lookup(ClientContext.java:344)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:254)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:271)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.startRunning(RTMFMessaging.java:1093)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.run(RTMFMessaging.java:1037)
         at java.lang.Thread.run(Thread.java:534)
    Severity : Error
    Category :
    Location : com.sap.ip.collaboration.rtc.class com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging.JMSPolling.startRunning()
    Application : sap.com/irj
    Thread : Thread[Thread-37,5,SAPEngine_Application_Thread[impl:3]_Group]
    Datasource : 34368150:/usr/sap/PP1/JC03/j2ee/cluster/server0/log/defaultTrace.trc
    Message ID : 0003BA4DBF04001100003473000012530004250B918DE14E
    Source Name : com.sap.ip.collaboration.rtc
    Argument Objs : com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.getLastContainer(ServerContextImpl.java:261)
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.lookup(ServerContextImpl.java:624)
         at com.sap.engine.services.jndi.implclient.ClientContext.lookup(ClientContext.java:344)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:254)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:271)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.startRunning(RTMFMessaging.java:1093)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.run(RTMFMessaging.java:1037)
         at java.lang.Thread.run(Thread.java:534)
    Arguments : com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.getLastContainer(ServerContextImpl.java:261)
         at com.sap.engine.services.jndi.implserver.ServerContextImpl.lookup(ServerContextImpl.java:624)
         at com.sap.engine.services.jndi.implclient.ClientContext.lookup(ClientContext.java:344)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:254)
         at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:271)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at javax.naming.InitialContext.lookup(InitialContext.java:347)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.startRunning(RTMFMessaging.java:1093)
         at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.run(RTMFMessaging.java:1037)
         at java.lang.Thread.run(Thread.java:534)
    Dsr Component :
    Dsr Transaction :
    Dsr User :
    Indent : 0
    Level : 0
    Message Code :
    Message Type : 1
    Relatives :
    Resource Bundlename :
    Session : 0
    Source : com.sap.ip.collaboration.rtc
    ThreadObject : Thread[Thread-37,5,SAPEngine_Application_Thread[impl:3]_Group]
    Transaction :
    User : j2ee_guest
    Thanks,
    Master

    I am having this same issue as well. I upgraded our BI and BI Portal to SP12 and everything worked fine. Then I started the BI integration as outlined in OSS note 917950 ("Setting up BEx Web - Problem analysis.doc") and restarted the J2EE engine, and now I have this same error in the default trace and I can't log into the Visual Administrator as the j2ee_admin user; I get "error while connecting". But I can connect to the Visual Administrator as myself. If I log in to the BI Portal as myself and click on User Administration, I get the error "A required service for the identity management user interface is not available. Contact your system administrator".
    Here is the error as seen in the default trace file that keeps repeating:
    #1.#0003BA21A6F5005C000000590000595B000439DE957600B4#1189528059902#com.sap.ip.collaboration.rtc#sap.com/irj#com.sap.ip.collaboration.rtc.class com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging.JMSPolling.startRunning()#J2EE_GUEST#0####d71a1250608111dcbcf70003ba21a6f5#Thread[Thread-57,5,SAPEngine_Application_Thread[impl:3]_Group]##0#0#Error##Java###Exception in method: com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
    [EXCEPTION]
    #1#com.sap.engine.services.jndi.persistent.exceptions.NameNotFoundException: Path to object does not exist at jmsfactory, the whole lookup name is jmsfactory/default/QueueConnectionFactory.
            at com.sap.engine.services.jndi.implserver.ServerContextImpl.getLastContainer(ServerContextImpl.java:261)
            at com.sap.engine.services.jndi.implserver.ServerContextImpl.lookup(ServerContextImpl.java:624)
            at com.sap.engine.services.jndi.implclient.ClientContext.lookup(ClientContext.java:344)
            at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:254)
            at com.sap.engine.services.jndi.implclient.OffsetClientContext.lookup(OffsetClientContext.java:271)
            at javax.naming.InitialContext.lookup(InitialContext.java:347)
            at javax.naming.InitialContext.lookup(InitialContext.java:347)
            at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.startRunning(RTMFMessaging.java:1238)
            at com.sap.ip.collaboration.core.api.rtmf.core.RTMFMessaging$JMSPolling.run(RTMFMessaging.java:1182)
            at java.lang.Thread.run(Thread.java:534)
    Also, at the very beginning of the default trace file I see this error:
    ##0#0#Error##Java###A SecurityException was caught while attempting to retrieve user with Administrator rights in the fallback attempt. The performed attempt was userCtx.getUserInfo(role.getRunAsIdentity(true)) but it ended with the following security exception :
    [EXCEPTION]
    #1#com.sap.security.core.server.userstore.UserstoreException: Could not get user null
    ##0#0#Error##Java###A SecurityException was caught while attempting to retrieve user with Administrator rights in the fallback attempt. The performed attempt was userCtx.getUserInfo(role.getRunAsIdentity(true)) but it ended with the following security exception :
    [EXCEPTION]
    #1#com.sap.security.core.server.userstore.UserstoreException: Could not get user null
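    As a quick way to confirm the missing binding described in both posts, here is a minimal, illustrative JNDI probe (plain javax.naming; the lookup name is taken from the trace, while the class name and the idea of running it as a standalone check against the engine's naming service are assumptions):

    import javax.naming.InitialContext;
    import javax.naming.NameNotFoundException;

    // Illustrative probe: checks whether the JMS connection factory that the
    // RTMF/RTC polling thread looks up is actually bound in JNDI.
    public class JmsFactoryProbe {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            try {
                Object factory = ctx.lookup("jmsfactory/default/QueueConnectionFactory");
                System.out.println("Bound: " + factory.getClass().getName());
            } catch (NameNotFoundException e) {
                // The same exception as in the default trace: the binding (or the JMS
                // provider behind it) is missing or not started, not a client-side bug.
                System.out.println("Not bound: " + e.getMessage());
            } finally {
                ctx.close();
            }
        }
    }

    If such a probe also fails, the JMS provider service and connection factory configuration on the J2EE engine is the more likely place to look, rather than the portal application that logs the exception.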

  • IDOC not received in XI; error in SM58 of sending system

    Hello everybody,
    we have an SAP CRM system sending a "generated" IDoc
    (IDoc type: CRMXIF_PARTNER_SAVE01).
    The IDoc seems to be rejected by XI because
    SM58 shows:
    - No Service for System Binding_Error Client ...
    - function: IDOC_INBOUND_ASYNCHRONOUS
    All settings (IDX1, IDX2, ID, IR) are OK!
    All re-imports (delete & import of the IDoc) have been done!
    (By the way, we have several other IDoc types that work fine!)
    Could it be a problem that the IDoc type is generated?
    Regards, Mario

    Hi Mario,
    The solution for this error is provided in the link below,
    from the Problem Analysis Guide "Sending an IDoc through XI Failed":
    http://help.sap.com/saphelp_nw04/helpdata/en/6a/e6194119d8f323e10000000a155106/content.htm
    Also, have you set the adapter-specific identifiers?
    For that, refer to question 3 (Integration Engine section) of
    /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions
    Regards,
    Abhy

  • Umount not possible on Solaris 10

    Hello,
    This afternoon I asked the Sun Support Center for help with a problem which occurs on a new Solaris 10 installation.
    Problems on Solaris 10 are not supported by the Sun Support Center... :-(
    (I think a umount problem is not a Solaris 10 problem, but...)
    It's not a critical problem, but I'm frustrated at having to reboot for it.
    We are not running a Microsoft OS...
    Sun Support Center asked me to try on Sun Developer site.
    Can you help me ?
    Thanks
    Jean Berthold
    Below, my problem:
    Update/View Service Request Details
    Service Request Number:      37128194
    Status:      Unresolved, Closed
    Sun Engineer Summary:      How to umount this volume correctly (Notes)
    Updated By     Date Updated
    Sun Engineer     May 10, 2004 5:19:04 PM, Central European Summer Time (CEST GMT+02:00)
    EMEA-CES Closure template (solution note in case, public)
    TYPE OF CALL: Problem
    PROBLEM SUMMARY :
    unable to umount /cdrom under solaris 10 beta version
    SOLUTION :
    The beta version of Solaris 10 is not supported.
    Consult the FAQs or discuss in the forum on the developers' site:
    http://developers.sun.com/prodtech/solaris/
    Sun Engineer     May 10, 2004 4:37:42 PM, Central European Summer Time (CEST GMT+02:00)
    EMEA-CES Information_Knowledge Gathering (problem note in Initial Investigation task, public)
    PROBLEM DESCRIPTION: unable to umount /cdrom/disk1
    PROBLEM SUMMARY: unable to umount /cdrom/disk1
    WHEN WAS THE PROBLEM FIRST SEEN? this afternoon
    WHAT HAS CHANGED ON THE SYSTEM? nothing to declare
    ERROR MESSAGES:
    |><|
    WHAT IS THE IMPACT ON YOUR BUSINESS
    |>Low<| |>Medium<| |>High<|
    PRODUCT: Solaris 10
    SYSTEM TYPE: Sun-Blade-100
    SPEC: |>CPU Speed, Memory, Disk size + qty, Storage<|
    OS VERSION: SunOS zanzibar 5.10 s10_51 sun4u sparc SUNW
    HAS THE CUSTOMER TRIED USING SUNSOLVE.SUN.COM TO SOLVE HIS PROBLEM
    No
    If NO explain the benefits of SunSolve, if YES ask :
    WHAT STOPPED HIM FROM SOLVING HIS PROBLEM:
    n/a
    ARE CONTACT DETAILS CORRECT:
    yes
    PROBLEM ANALYSIS :
    The customer has mounted the CD-ROM on the machine zanzibar
    and shared it via NFS. See the customer's note for more details.
    After that, he launched the Oracle installation on the machine udaipur,
    using the CD-ROM mounted via NFS from zanzibar.
    After the CD-ROM was used from the NFS client udaipur, it is
    impossible to umount it.
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    The customer has stopped all services of NFS :
    # ps -ef | grep nfs
    daemon 12459 1 0 12:20:54 ? 0:00 /usr/lib/nfs/statd
    daemon 12461 1 0 12:20:54 ? 0:00 /usr/lib/nfs/lockd
    root 12463 1 0 12:20:54 ? 0:00 /usr/lib/nfs/mountd
    daemon 12465 1 0 12:20:54 ? 0:22 /usr/lib/nfs/nfsd
    # unshareall
    # /etc/init.d/nfs.server stop
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    # ls /etc/init.d/volmg*
    /etc/init.d/volmgt
    # /etc/init.d/volmgt stop
    # ps -ef | grep vol
    bej 12313 12309 0 11:37:15 pts/2 0:00 /usr/bin/gnome-volcheck -i 30 -z 3 -m cdrom,floppy,zip,jaz,dvdrom
    But the problem still persists: it is impossible to umount the cdrom.
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    The customer is using GNOME for the interface.
    The fuser command shows 3 processes: vold, the terminal and lockd.
    We stopped all processes and retried:
    # umount /cdrom/disk1
    umount: Operation not supported
    umount: cannot unmount /cdrom/disk1
    # umount /cdrom/cdrom0
    umount: Operation not supported
    umount: cannot unmount /cdrom/cdrom0
    #umount -f /cdrom/disk1
    umount: Operation not supported
    umount: cannot unmount /cdrom/disk1
    #umount -F hsfs /cdrom/disk1
    umount: Operation not supported
    umount: cannot unmount /cdrom/disk1
    #umount /vol/dev/dsk/c0t1d0/disk1
    umount: Operation not supported
    umount: cannot unmount /vol/dev/dsk/c0t1d0/disk1
    The problem is not really urgent, but the customer does not
    want to reboot this machine just to umount the CD-ROM!
    It is really impossible to umount the CD-ROM.
    I've found a bug ID for swap, and this problem needs more investigation.
    So I'm escalating this case!
    Customer     May 10, 2004 4:00:40 PM, Central European Summer Time (CEST GMT+02:00)
    Hardware Platform: Sun Blade 100
    Product Affected: OS File System
    OS Version: bej@zanzibar # uname -a
    SunOS zanzibar 5.10 s10_51 sun4u sparc SUNW,Sun-Blade-100
    bej@zanzibar #
    State the problem Cut/Paste error messages
    (30kb maximum):
    # df -h | grep cdrom
    /vol/dev/dsk/c0t1d0/disk1 0K 0K 0K 0% /cdrom/disk1
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    # cat /etc/dfs/dfstab
    # Place share(1M) commands here for automatic execution
    # on entering init state 3.
    # Issue the command '/etc/init.d/nfs.server start' to run the NFS
    # daemon processes and the share commands, after adding the very
    # first entry to this file.
    # share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
    # .e.g,
    # share -F nfs -o rw=engineering -d "home dirs" /export/home2
    share -o anon=0 /cdrom/cdrom0
    # ps -ef | grep nfs
    daemon 12459 1 0 12:20:54 ? 0:00 /usr/lib/nfs/statd
    daemon 12461 1 0 12:20:54 ? 0:00 /usr/lib/nfs/lockd
    root 12463 1 0 12:20:54 ? 0:00 /usr/lib/nfs/mountd
    daemon 12465 1 0 12:20:54 ? 0:22 /usr/lib/nfs/nfsd
    # unshareall
    # /etc/init.d/nfs.server stop
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    # ls /etc/init.d/volmg*
    /etc/init.d/volmgt
    # /etc/init.d/volmgt stop
    # ps -ef | grep vol
    bej 12313 12309 0 11:37:15 pts/2 0:00 /usr/bin/gnome-volcheck -i 30 -z 3 -m cdrom,floppy,zip,jaz,dvdrom
    List steps to reproduce problem (if applicable):
    What software is having the problem?:
    From the CD ROM server:
    I shared a CD-ROM with NFS via the /etc/dfs/dfstab file:
    share -o anon=0 /cdrom/cdrom0
    # shareall
    # dfshares
    RESOURCE SERVER ACCESS TRANSPORT
    zanzibar:/cdrom/disk1 zanzibar - -
    From the client:
    root@udaipur # dfshares zanzibar
    RESOURCE SERVER ACCESS TRANSPORT
    zanzibar:/cdrom/disk1 zanzibar - -
    root@udaipur #
    root@udaipur # mount zanzibar:/cdrom/disk1 /zanzibar_CDROM
    root@udaipur # df -h | grep cdrom
    zanzibar:/cdrom/disk1 0K 0K 0K 0% /zanzibar_CDROM
    root@udaipur #
    After that, I launched the Oracle installation on machine udaipur using the CD-ROM mounted via NFS from the zanzibar machine.
    When the second CD-ROM was needed:
    Client:
    root@udaipur # umount /zanzibar_CDROM
    CD ROM Server:
    # hostname
    zanzibar
    # df -h | grep cdrom
    /vol/dev/dsk/c0t1d0/disk1 0K 0K 0K 0% /cdrom/disk1
    # dfshares
    RESOURCE SERVER ACCESS TRANSPORT
    zanzibar:/cdrom/disk1 zanzibar - -
    # unshareall
    # dfshares
    # pwd
    # umount /cdrom/disk1
    umount: /cdrom/disk1 busy
    Question:
    1. How do we umount this volume correctly?
    2. Which process is blocking the CD-ROM?
    When was the problem first noticed? # date
    Mon May 10 15:55:39 CEST 2004
    Is the problem getting: staying the same
    Any Changes Recently? None

    Hi there,
    This will depend on how your VM is configured, please have a look at the following documentation and make sure you are using the appropriate networking mode:
    https://www.virtualbox.org/manual/ch06.html#networkingmodes
    Then check your # ifconfig -a output and verify that your interface is marked UP and RUNNING, please refer to the following document for more details on how to set up your network interfaces:
    http://docs.oracle.com/cd/E23823_01/html/816-4554/ipconfig-12.html#scrolltoc
    Hope that helps

  • Help me with my survey for my project

    I am doing this survey for my professor; I am at Bethedsa community college. I am already behind on this survey, so I would appreciate it if you could quickly look at the survey file. One lucky winner will get a $50 gift coupon at amazon.com.
    We are building a better version of the JVM which will allow profiling in production. This technology, if feasible and successful, may be ported to commercial application servers such as WebLogic, WebSphere, JBoss and others, with the cooperation of J2EE vendors.
    In this document, we will call this system "DiagnoseNow".
    The DiagnoseNow project is in the research stage, and we want to adjust the goals of the project to better meet the requirements of application administrators, designers and IT managers.
    Your input will be tremendously appreciated.
    I would appreciate your response; I need to get 20-30 responses
    before my professor will release my stipend... I am sure you were a
    student once, so I will really, really appreciate your input. You do
    not have to be complimentary or negative about the idea; just be
    unbiased and honest...
    PROFILING AND TRANSACTION TRACKING IN PRODUCTION
    We are giving away one $50 gift certificate at Amazon.com to one
    respondent. We expect only about 25-50 responses, so your odds of
    winning are high.
    The features of this proposed JVM are best explained using an
    example of an ecommerce site.
    Ecommerce site scenario
    Description of system
    Consider a typical ecommerce site with the following features: User
    Validation, Product Search, Shopping Cart Management and Checkout.
    Each user that actually purchases or attempts to purchase a product
    on the website is considered a "Tracking Unit" or a "Session". This
    new research JVM will allow the time spent in a
    particular "Session" to be tracked at the method level.
    The system administrator can, if he/she so wishes, get a report with
    the following columns:
    Session ID, MethodName, Time spent in the method (inclusive of
    called methods), Time spent in the method (excluding called methods)
    Put simply, the goal is to support session level tracking and
    profiling at the method level in the JVM itself.
    This will allow faster debugging and application turnaround,
    reducing application maintenance cost.
    A production ecommerce site could have millions of users and many
    more transactions per day. We are optimistic about supporting real
    life production sites.
    User Validation
    1. login: This validates the user using user name and password.
    2. logout: This logs the user off.
    3. passwordVerification: Using a database, the password is
    verified.
    4. retrievalOfLostPassword: If the user has lost his/her
    password the password is emailed to the user.
    Product search
    1. byname: Search product catalog by name.
    2. byDescription: Search product catalog by Description
    3. bySku: Search product catalog by SKU
    4. byBrand: Search product catalog by brand
    5. Shopping Cart
    6. add: Adds items to the cart
    7. remove: Removes Items from the cart.
    8. update: Updates Items from the cart.
    9. checkout: Check out items from the cart by asking for a credit
    card.
    10. retrievePastData: Retrieve user information from the past.
    11. validateCreditCard: Validate credit Card by checking with
    the bank.
    12. sendConfirmationEmail: Send Email confirming the purchase.
    13. sendConfirmationEmailShipping: Send Email confirming that
    the product has shipped.
    14. sendSurveyEmail: Send Email checking customer satisfaction.
    What is a "Transaction" or a "Tracking Unit" or a "Session"?
    Each user who searches the product catalog is considered a "Tracking
    Unit" or a "Session". A session ends when the user leaves the website.
    Transaction tracking/ reporting data at method level
    Time spent by each user in each and every method above is tracked
    in two ways: time spent in the method itself, and time spent in the
    method including called methods. This will be done for each and every
    invocation of the method.
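    To make this data model concrete, here is a tiny, purely illustrative Java sketch of the kind of per-session, per-method record described above (the class names, the ThreadLocal session id and the console output are assumptions; the proposed DiagnoseNow JVM would collect this inside the JVM itself rather than via wrapper code):

    import java.util.concurrent.Callable;

    // Toy sketch: records Session ID, method name and time spent in the method
    // (inclusive of called methods) for every wrapped invocation.
    public class SessionProfiler {
        private static final ThreadLocal<String> SESSION = new ThreadLocal<String>();

        public static void setSession(String sessionId) { SESSION.set(sessionId); }

        public static <T> T timed(String methodName, Callable<T> body) throws Exception {
            long start = System.nanoTime();
            try {
                return body.call();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1000000L;
                // Columns: Session ID, MethodName, time spent (inclusive of called methods)
                System.out.println(SESSION.get() + "\t" + methodName + "\t" + elapsedMs + " ms");
            }
        }
    }

    A call site would then wrap, for example, the catalog search as SessionProfiler.timed("byName", () -> catalog.byName("java")), where catalog stands for whatever object implements the search.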
    Email alerts
    Emails can be generated whenever a particular method (e.g.
    sendSurveyEmail) takes longer than a specified threshold.
    Analysis Reports
    Analysis can be done on a variety of topics: Reasons for abandoning
    shopping carts, Slow or underperforming parts of the application.
    Thread dumps
    If a particular method runs slower than a threshold value, then
    optionally a thread dump may be taken and stored. In this way, even
    if the slowdown occurs at midnight, the thread dump will still be
    available.
    Triggered heap profile
    Similarly, if the system is running out of heap memory a heap
    profiler may become active. Heap profile may also be taken
    periodically, to allow analysis of heap growth.
    The system is best explained using a few examples:
    Slowdown in search-books
    Problem: Analysis of sessions shows that a large number of
    customers are browsing for books, but conversion to actual sales is
    slow.
    Analysis: The book database has grown in size, and the server
    running the database has many more apps on it. Moving the database
    to another machine solved the problem.
    How and Why did DiagnoseNow help?
    DiagnoseNow maintains history of all sessions. The history was
    analyzed to show customers who were not converting to purchases- a
    large proportion of them were searching for books, and abandoning
    the site afterwards.
    Slowdown in search-general
    Problem: Search has slowed down for all products.
    Analysis and Resolution: Analysis showed that all search methods
    are taking 250% longer than normal. The slowdown happened after a
    specific date- it was the date on which the OS was upgraded. Rolling
    back the upgrade and reinstalling it properly resolved the issue.
    How and Why did DiagnoseNow help?
    DiagnoseNow maintains a baseline of past search method response
    times. So when a slowdown happens, there is no time
    wasted "apportioning blame" or debating whether a problem exists.
    Credit card authorization- intermittent failure
    Problem: Credit card authorization was failing intermittently after
    hours.
    Analysis/Resolution: Credit card authorization was failing
    intermittently. The automatically generated thread dump showed a
    faulty connection. The problem was traced to the credit card
    authorization end, and was resolved.
    How and Why did DiagnoseNow help?
    Without the thread dump this problem could not have been solved, and
    without DiagnoseNow the problem would not have been detected unless
    the system administrator was able to take thread dumps; for that to
    happen the problem would have had to occur during normal business
    hours. (The off-hours system administrators are Unix technicians
    with no app server or Java knowledge.)
    Slowdown of overall system at midnight
    Problem: The entire system slows down at midnight.
    Analysis/Resolution: All methods show a slowdown around midnight,
    degrading system performance.
    How and Why did DiagnoseNow help?
    DiagnoseNow detected the slowdown, and it was discovered that a
    large admin program ran at midnight. Splitting up the work in the
    admin program into 5 chunks made the performance impact a lot
    smaller.
    Slowdown at 9.05 pm Saturday/Sunday evening
    Problem: The entire system performance degrades by 50% around 9.05
    pm on Saturday/Sunday evening
    Analysis/Resolution: The weekend cleaning crew was unplugging one
    of the appservers to plug in the vacuum cleaner.
    How and Why did DiagnoseNow help?
    DiagnoseNow allows method level historical tracking and system level
    analysis. This allowed quick detection of an overall slowdown.
    Without system level analysis, a variety of alerts may get
    triggered, but the root cause may not be identified.
    SURVEY
    Your Name(Required to get the prize):
    Your role: Developer/IT Administrator/Manager/ App server
    administrator/Software QA
    Email address(Required to get the prize):
    Phone number(optional):
    Place a tick mark against the question:
    1. Reducing application maintenance cost is important to me.
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    2. Triggering a thread dump based on specific conditions is an
    important feature
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    3. Tracking each and every session/transaction down to the
    method level is an important feature
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    4. Rapidly localizing and diagnosing a problem is important to
    me
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    5. Creating alerts based on overall transaction performance is
    important to me
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    6. If DiagnoseNow is proven to be a stable system, then I will
    be willing to pay 1500 dollars per CPU license fee
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    7. I can accept a CPU overhead of 8% for extensive monitoring
    leading to reduced costs.
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    8. If DiagnoseNow is proven to be a stable system, then I will
    be willing to pay 1500 dollars per CPU license fee(inclusive of 2
    business day email support. )
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree
    9. If DiagnoseNow is proven to be a stable system, then I will
    be willing to pay 7500 dollars per CPU license fee(inclusive of
    support with 1 hour response time)
    Disagree Somewhat disagree Neutral Somewhat Agree
    Strongly Agree

    Profiling in production is a pipe dream.
    With modern JVMs and processors, hundreds of thousands of methods will be invoked on multi-CPU machines in just one second. Tracking and analyzing this data is beyond the power of the fastest database.
    To analyze one hour of data, 360 million method invocations would have to be handled for a 4-CPU Pentium/Xeon machine.
    This is clearly impossible to do in real life with current hardware. Maybe 5 to 10 years from now things will be better; however, by then CPUs will be processing many more methods per second...
    I think it is a nice ACADEMIC project!
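    The 360 million figure follows directly from the reply's own assumptions; a quick check in Java (the per-CPU rate of 25,000 tracked invocations per second is inferred so that the totals match the post, not stated in it):

    // Sanity check of the data-volume estimate in the reply above.
    public class ProfilingVolumeEstimate {
        public static void main(String[] args) {
            long invocationsPerSecondPerCpu = 25000L; // inferred assumption
            int cpus = 4;
            long secondsPerHour = 3600L;
            long recordsPerHour = invocationsPerSecondPerCpu * cpus * secondsPerHour;
            System.out.println("Method records per hour: " + recordsPerHour); // prints 360000000
        }
    }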

  • LiveCache - LC10 message - Index issue

    hi,
    In LiveCache (LC10) -> Problem Analysis -> Performance -> Database Analyzer -> bottleneck report,
    the message reads as follows:
    LiveCache- Bottle-neck messages:
    2 tables contain > 1.000.000 records but only 20.000 rows will be sampled for statistics.
    Table SAPR3./SAPAPO/ORDKEY contains 8247892 rows(205921 pages), sample rows 20000
    Table SAPR3./SAPAPO/STOCKANC contains 1319385 rows(25413 pages), sample rows 20000
    It looks as though this would affect processing, and the delay may be an issue for CIF queue processing.
    I am wondering whether the sampling could be increased; is there any OSS note available for this? This index is implemented using a BAdI.
    Any input on this issue is appreciated.
    Thanks,
    RajS

    Please check the following notes regarding changing the sample size of the statistics run:
    Note 808060 - Changing the estimated values for Update Statistics
    Note 927882 - FAQ: SAP MaxDB Update Statistics
    Kind regards,
    Mark
