Performance bottleneck in Service.poll

While running a performance evaluation of the Coherence software, I'm seeing almost all threads blocked waiting in Service.poll, as illustrated in the thread dump below. This particular thread dump comes from a call to NamedCache.put(org.apache.commons.collections.keyvalue.MultiKey, Object). My app (deployed in WLS) is chugging at 150% CPU (on a 24-CPU machine) and is giving me about 2.5 TPS.
     "ExecuteThread: '5' for queue: 'oms.xml'" daemon prio=5 tid=0x013bb138 nid=0x11 in Object.wait() [70c7e000..70c7
     fc28]
     at java.lang.Object.wait(Native Method)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:26)
     - locked <0x82145d10> (a com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKey
     Request$Poll)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:1)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.put(Di
     stributedCache.CDB:33)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.put(Di
     stributedCache.CDB:1)
     at com.tangosol.util.ConverterCollections$ConverterObservableMap.put(ConverterCollections.java:1878)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.put(Dist
     ributedCache.CDB:1)
     at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
     at com.tangosol.net.cache.CachingMap.put(CachingMap.java:882)
     at com.tangosol.net.cache.CachingMap.put(CachingMap.java:805)
     at com.tangosol.net.cache.CachingMap.put(CachingMap.java:742)
     I have a near scheme configured with a local scheme fronting a distributed scheme.
     Any suggestions on how to alleviate this bottleneck?
     Regards, San

Ahh, it definitely looks like everything is blocking on a single daemon thread, then:
     One of these appears before each of the following:
     "DistributedCache:EventDispatcher" daemon prio=5 tid=0x0152b068 nid=0x130 in Object.wait() [6347f000..6347fc28]
     at java.lang.Object.wait(Native Method)
     at com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:9)
     - locked <0xa34960f0> (a com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher$Queue)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:31)
     at java.lang.Thread.run(Thread.java:534)
     "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227e000..6227fc28]
     at java.lang.Class.forName0(Native Method)
     at java.lang.Class.forName(Class.java:219)
     at com.tangosol.net.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:57)
     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1513)
     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1521)
     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
     at java.util.HashSet.readObject(HashSet.java:276)
     at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:838)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1746)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1845)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1769)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
     at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:1626)
     at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:1743)
     at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
     at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
     at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage.query(DistributedCache.CDB:107)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onQueryRequest(DistributedCache.CDB:25)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$QueryRequest.run(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:103)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
     at java.lang.Thread.run(Thread.java:534)
     "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227f000..6227fc28]
     at java.io.ObjectStreamClass.getReflector(ObjectStreamClass.java:1923)
     - waiting to lock <0xa02a04e0> (a sun.misc.SoftCache)
     at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:501)
     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1521)
     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
     at java.util.HashSet.readObject(HashSet.java:276)
     at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:324)
     at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:838)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1746)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1845)
     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1769)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
     at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:1626)
     at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:1743)
     at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
     at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
     at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage.query(DistributedCache.CDB:107)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onQueryRequest(DistributedCache.CDB:25)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$QueryRequest.run(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:103)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
     at java.lang.Thread.run(Thread.java:534)
     "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227f000..6227fc28]
     at java.lang.String.intern(Native Method)
     at java.io.ObjectStreamField.<init>(ObjectStreamField.java:84)
     at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:543)
     at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:762)
     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1503)
     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
     at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:1626)
     at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:1743)
     at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
     at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
     at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage.query(DistributedCache.CDB:107)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onQueryRequest(DistributedCache.CDB:25)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$QueryRequest.run(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:103)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
     at java.lang.Thread.run(Thread.java:534)
     We'll try incrementing the thread-count.
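     For reference, the worker pool is sized in the cache configuration. Below is a minimal sketch, assuming a Coherence 3.x distributed-scheme; the scheme name shown is hypothetical:
          <distributed-scheme>
            <scheme-name>example-distributed</scheme-name>
            <service-name>DistributedCache</service-name>
            <!-- 0 (the default) runs every request on the single service thread;
                 a positive value adds a pool of daemon worker threads -->
            <thread-count>8</thread-count>
            <backing-map-scheme>
              <local-scheme/>
            </backing-map-scheme>
            <autostart>true</autostart>
          </distributed-scheme>
     With a worker pool in place, deserialization-heavy work such as the filter evaluation in the dumps above no longer serializes behind the one service daemon.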

Similar Messages

  • Performance bottleneck with subreports

    I have an SSRS performance bottleneck on my production server that we have diagnosed as being related to the use of subreports.
    Background facts:
    * Our Production and Development servers are identically configured
    * We've tried the basic restart/reboot activities; they didn't change anything about the performance.
    * The Development server was "cloned" from the Production server about a month ago, so all application settings (memory usage, logging, etc.) are identical between the two
    * For the bottlenecked report the underlying stored procedure executes in 3 seconds, returning 901 rows, in both environments with the same parameters.  The execution plan is identical between the two servers, and the underlying tables and indexing
    is identical.  Stats run regularly on both servers.
    * In the development environment the report runs in 12 seconds. But on Production the report takes well over a minute to return, ranging from 1:10 up to 1:40.
    * If I point the Development SSRS report to the PROD datasource I get a return time of 14 seconds (the additional two seconds due to the transfer of data over the network).
    * If I point the Production SSRS report to the DEV datasource I get a return time of well over a minute.
    * I have tried deleting the Production report definition and uploading it as new to see if there was a corruption issue, this didn't change the runtimes.
    * Out of the hundreds of Production SSRS reports that we have, the only two that exhibit dramatically different performance between Dev and Prod are the ones that contain subreports.
    * Queries against the ReportServerTempDB also confirm that these two reports are the major contributors to TempDB utilization.
    * We have verified that the ReportServerTempDB is being backed up and shrunk on a regular basis.
    These factors tell me that the issue is not with the database or the SQL.  The tests on the Development server also prove that the reports and subreports are not an issue in themselves - it is possible to get acceptable performance from them in the
    Development environment, or when they are pointed from the Dev reportserver over to the Prod database.
    Based on these details, what should we check on our Prod server to resolve the performance issue with subreports on this particular server?

    Hi GottaLoveSQL,
    According to your description, you want to improve the performance of a report with subreports. Right?
    In Reporting Services, the use of subreports impacts report performance because the report server processes each instance of a subreport as a separate report. So the best way is to avoid subreports by using Lookup, MultiLookup, or LookupSet, which can bridge different data sources (a sketch follows at the end of this message). In this scenario, we suggest you cache the report with the subreport. We can create a cache refresh plan for the report in Report Manager. Please refer to the link below:
    http://technet.microsoft.com/en-us/library/ms155927.aspx
    Reference:
    Report Performance Optimization Tips (Subreports, Drilldown)
    Performance, Snapshots, Caching (Reporting Services)
    Performance Issue in SSRS 2008
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
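
    For illustration, here is the kind of Lookup expression that can replace a one-value subreport; the dataset and field names are hypothetical:
         =Lookup(Fields!CustomerID.Value, Fields!CustomerID.Value, Fields!PhoneNumber.Value, "CustomerContacts")
    Lookup evaluates the first argument in the current dataset, matches it against the second argument in the named dataset ("CustomerContacts"), and returns the third expression from the first matching row, all without spawning a separate report execution.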

  • I have files that may have been created in various versions of Illustrator. We need to be able to open them, but all we have is CS6 suite. I have called and been told that "technicians at adobe" could perform a paid service to find out what the files were

    I have files that may have been created in various versions of Illustrator. We need to be able to open them, but all we have is the CS6 suite. I have called and been told that "technicians at Adobe" could perform a paid service to find out what the files were created in and get them to be usable in InDesign 6, but I'd need an email address that was registered to our software to give the help desk person. After finding the correct email address, different Adobe help line people told me to come here to ask, as there is no phone support. Can anyone get me to an Adobe technician who can provide a price quote for finding out what created these files and for getting them converted? Thanks.

    Sorry, I understood you to say you had the whole CS6 suite, which includes Illustrator.
    As far as finding out what the files are, sometimes you can look at them in a text editor to figure this out: check the Creator Tool line. Not every software package makes it this easy, though.

  • Import can only be performed in a Service Desk Transaction

    Hi CHARM gurus,
    I'm testing different scenarios in CHARM based on a document posted by Dolores Correa [http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=%28J2EE3414900%29ID0826428450DB10670017877649482582End?blog=/pub/wlg/15165]
    Here's the scenario: I am in the "Development with Release" phase and I created a Normal Correction (SDMJ), created a transport request, then the developer released the task under that transport and set the normal correction transaction to "Complete Development". The last action created a transport of copies (TOC) and exported it. The TOC is now in the QA system buffer waiting to be imported. Now, as an "IT operator" I want to import the TOC into the QA system. I go to the Normal Correction transaction and select the action "Go To Task Plan", which brings me to the task list where I have the option to "Import Transport Request" in the Quality Assurance System section. When I execute the "Import Transport Request" task I get an informational pop-up saying "This action can only be performed in a Service Desk Transaction".
    Am I missing something? What am I doing wrong?
    The only way I can import this transport is from STMS. I would think that I should be able to do this from the task plan, or even from an action in the Normal Correction transaction itself!? From what I can see, I can only change the maintenance cycle from the service desk transaction (SDMN).
    I need some guidance, please. What is the best practice in this scenario?
    Thanks

    Hi Stephane,
    I think you are on the right track. While in the task list you should choose the task Import Transport Request (Background), in the section for the QAS system. There should be two almost identical tasks. Choose the one with the icon that looks like a blue square. Right-click on this task and choose "Schedule". This is the standard instruction from SAP and works fine in our SolMan.
    Good luck!
    /Christer

  • Performance Optimization Self Service- SAP help requirement

    Hi,
    I want to know whether SAP's help is required for performing the Performance Optimization self service.
    If we collect an ST12 trace and use it to perform the self service, is the report generated from the self service sufficient to take further action, or will I need some SAP expertise to implement / take corrective actions?
    In short, can I do the Performance Optimization by myself, or do I need help from SAP?
    Regards,
    Vishal

    hi,
    1) Is this service available to all the customers? (by all the customers I mean "Max Attention", "Enterprise Support" etc.)
    I answered this above; from my reply above, have you checked?
    Enterprise Support customers can get five EGI sessions free per year. Please check
    http://service.sap.com/esacademy
    - click Browse EGIs
    For your second question I also answered above:
    Does the report itself give suggestions, or do we need to provide the report to SAP?
    Here is my reply from above:
    Because the guided procedure itself is a proven methodology from SAP, the report provides lots of suggestions against SAP best practices.
    You can use it yourself most of the time. If you still need expert guidance from SAP, book EGI sessions; they are called Expert Guided Implementations (remote support). The duration might vary based on the session.
    Again, the service report is the source; you have to review it yourself. If you are in an EGI, SAP uses that report for guiding. Please review.
    Thanks
    Jansi

  • OWB Performance Bottleneck

    Is there any session log produced by the OWB mapping execution, other than seeing the results in the OWB Runtime Audit Browser?
    Suppose the mapping is doing some hash join which is consuming too much time, and I would like to see which tables are being joined at that instant. This would help me identify the exact area of the problem in the mapping. Does OWB provide a session log which can help me get that information, or is there any other place where I can get some information about the operation that is causing a performance bottleneck?
    regards
    -AP

    Thanks for all your suggestions. The mapping was using a join between some 4-5 tables, and I think this was where the mapping was getting stuck during execution in Set Based mode. Moreover, the mapping loads some 70 million records into the target table. Perhaps loading such a huge volume of data in set-based mode, with a massive join at the beginning, is why the mapping got stuck somewhere.
    The solution that came up was to create a table with the join condition and use that table as input to the mapping (a sketch follows below). This lets us get rid of the joiner at the very beginning and also lets the mapping run in Row Based Target Only mode. The data (70 million records) got loaded in some 4 hours.
    regards
    -AP
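
    For illustration, the pre-joined staging table described above amounts to materializing the join once before the load; a minimal SQL sketch with hypothetical table and column names:
         CREATE TABLE stg_orders_joined AS
         SELECT o.order_id,
                o.order_date,
                c.customer_name,
                i.quantity
           FROM orders o
           JOIN order_items i ON i.order_id = o.order_id
           JOIN customers c ON c.customer_id = o.customer_id;
    The mapping then reads stg_orders_joined directly, so the joiner operator disappears and the mapping can run in Row Based Target Only mode.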

  • Performance of web services with XMLBeans on WLS 9

    We are planning to use XMLBeans extensively for web services development on WLS. Recently another group within our company did a performance study indicating horrible performance with XMLBeans and WLS 9.2, especially when processing long requests (> 1000 XML elements). This calls into question whether XMLBeans with WLS 9.x is actually the right platform for us to build/host a large number of services.
    XMLBeans is a highly regarded XML<->Java binding tool as explained in numerous past articles both on Dev2Dev and other technology websites. We also understand that it is an integral technology used by other BEA products such as WLI and Liquid Data.
    We are wondering if BEA has any resources/reports on performance of web services (which uses XMLBeans) on WebLogic. Thanks.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule (a small unmarshalling sketch follows below).
    Regards,
    LG
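
    For anyone comparing bindings, a minimal JAXB 2.0 unmarshalling sketch; the Order type and its fields are hypothetical. Note that JAXBContext creation is expensive and should happen once, while Unmarshaller instances are cheap but not thread-safe:
         import java.io.StringReader;
         import javax.xml.bind.JAXBContext;
         import javax.xml.bind.Unmarshaller;
         import javax.xml.bind.annotation.XmlRootElement;

         public class JaxbSketch {
             @XmlRootElement(name = "order")
             public static class Order {
                 public String id;     // bound from <id>
                 public int quantity;  // bound from <quantity>
             }

             public static void main(String[] args) throws Exception {
                 JAXBContext ctx = JAXBContext.newInstance(Order.class); // build once, reuse
                 Unmarshaller um = ctx.createUnmarshaller();             // one per thread
                 Order o = (Order) um.unmarshal(new StringReader(
                         "<order><id>A-42</id><quantity>7</quantity></order>"));
                 System.out.println(o.id + " x " + o.quantity);
             }
         }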

  • Will RAC's performance bottleneck be the shared disk storage ?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they all use the same storage disks to hold the database, then they will do I/O to those disks simultaneously.
    Maybe we get more latency...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    I see FC can solve the problem with bandwidth (throughput).
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide that instead (over the Interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architecture (such as Exadata) uses.
    Traditionally, you pop an HBA card into the server that provides you with 2 fibre channel pipes to the storage switch. These usually run at 2Gb/s, and the I/O driver can load balance and fail over. So in theory it can scale to 4Gb/s and provide redundancy should one pipe fail.
    Exadata and more "modern" RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40Gb/s. Also dual port, which means that you have 2 cables running into the storage switch.
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be "shared" across the IB fabric layer - and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port-to-port latency for a properly configured IB layer running QDR (quad data rate) can be lower than 70ns.
    And it does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512GB RAM.
    Exadata takes it even further, as special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only relevant data. This reduces the data volume pushed from the storage node to the database node.
    So fibre channel in this sense is a bit dated. As is GigE.
    But what about the hard drive's read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and will do the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.

  • Major performance bottleneck in JSF RI 1.0

    We've been doing some load testing this week, and have come up with what I believe is a major performance bottleneck in the reference implementation.
    Our test suite was run on two different application servers (JBoss and Oracle), and we found that in both cases response time degraded dramatically when hitting about 25-30 concurrent users.
    On analyzing a thread dump when the application server was in this state we noticed that close to twenty threads were waiting on the same locked resource.
    The resource is the 'descriptors' static field in the javax.faces.component.UIComponentBase class. It is a WeakHashMap. The contention occurs in the getPropertyDescriptors method, which has a large synchronized block (the pattern is sketched at the end of this message).

    Well not the answer I was hoping for. But at least that's clear.
    Jayashri, I'm using the JSF RI for an application that will be delivered to testing in August. Can you advise whether I can expect an update for this bottleneck problem within that timeframe?
    Sincerely,
    Joost de Vries
    ps hi netbug. Saw you at theserverside! :-)
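
    For readers following along, a minimal sketch of the pattern described above (the class below is hypothetical, not the actual RI source): one static map guarded by one monitor, so every property lookup in the JVM serializes on the same lock under load:
         import java.beans.BeanInfo;
         import java.beans.IntrospectionException;
         import java.beans.Introspector;
         import java.beans.PropertyDescriptor;
         import java.util.Map;
         import java.util.WeakHashMap;

         public class DescriptorCacheSketch {
             // one JVM-wide cache, one JVM-wide lock
             private static final Map<Class<?>, PropertyDescriptor[]> descriptors =
                     new WeakHashMap<Class<?>, PropertyDescriptor[]>();

             public static PropertyDescriptor[] getPropertyDescriptors(Class<?> clazz)
                     throws IntrospectionException {
                 synchronized (descriptors) {                 // every caller contends here
                     PropertyDescriptor[] pds = descriptors.get(clazz);
                     if (pds == null) {
                         BeanInfo info = Introspector.getBeanInfo(clazz);
                         pds = info.getPropertyDescriptors(); // slow work held inside the lock
                         descriptors.put(clazz, pds);
                     }
                     return pds;
                 }
             }
         }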

  • Performance monitoring by service operation

    Hi,
    We are using PI 7.1. I am drafting naming conventions for our company's PI development, using PI Best Practices: Naming Conventions as a starting point.
    Now one issue I have, and I believe it is quite a big one, is that the best practice recommends that service interfaces group service operations on the same object.
    The problem I have is: where do we actually get performance statistics by service operation? EarlyWatch does not have these statistics, only response time by service interface. Neither does SXMB_MONI.
    Do you know if we can get statistics by service operation?
    Is this coming in a future enhancement pack for PI or Solution Manager?
    Thanks.

    Thanks for the document.
    However, I don't see where we can monitor web service performance by "service operation"... it seems that PI 7.1 added the service operation concept but no way to actually monitor performance at that level...
    By monitoring, I mean getting a performance report. For example, I would like the average response time of a specific service operation.
    All I see now are statistics for services. These statistics are too "vague", or rather smoothed out, if you have many service operations in one service interface.
    Is the strategy to use the service call statistics from the caller system? This is unfortunately not always possible, since the calling system could be at an outside company... (which is often the case).
    Ideas or recommendations, anyone? I would expect SAP could respond to this. Perhaps I will also open a customer message.
    Thanks.

  • J2EE application performance bottlenecks

    For anyone interested in learning how to resolve J2EE application performance bottlenecks, I found a great resource:
    http://www.cyanea.com/email/throttle_form2.html
    registering with them can have you win 1 of 3 iPod minis

    I agree with yawmark's response #1 in one of your evil spams http://forum.java.sun.com/thread.jsp?thread=514026&forum=54&message=2446641

  • Array as Shared Memory - performance bottleneck

    Hello,
    Currently I work on a multi-threaded application where many threads work on shared memory.
    I'm wondering why the application doesn't become faster when using many threads (I have an i7 machine).
    Here is an example of the initialization in a single thread:
          final int arrayLength = (int) 1e7;
          final int threadNumber = Runtime.getRuntime().availableProcessors();
          final int offset = arrayLength / threadNumber; // size of each thread's slice (declaration restored; it was missing from the post)
          long startTime;

          // init array in single thread
          Integer[] a1 = new Integer[arrayLength];
          startTime = System.currentTimeMillis();
          for (int i = 0; i < arrayLength; i++) {
              a1[i] = i;
          }
          System.out.println("single thread=" + (System.currentTimeMillis() - startTime));
    and here the initialization with many threads:
          // init array in many threads
          final Integer[] a3 = new Integer[arrayLength];
          List<Thread> threadList = new ArrayList<Thread>();
          for (int i = 0; i < threadNumber; i++) {
              final int iF = i;
              Thread t = new Thread(new Runnable() {
                  @Override
                  public void run() {
                      int end = (iF + 1) * offset;
                      if (iF == (threadNumber - 1))
                          end = a3.length; // last thread takes the remainder
                      for (int i = iF * offset; i < end; i++) {
                          a3[i] = i;
                      }
                  }
              });
              threadList.add(t);
          }
          startTime = System.currentTimeMillis();
          for (Thread t : threadList)
              t.start();
          for (Thread t : threadList)
              t.join();
          System.out.println("many threads List=" + (System.currentTimeMillis() - startTime));
    After execution it looks like this:
    single thread=2372
    many threads List=3760
    I have an i7 with 4GB RAM.
    System + Parameters:
    JVM-64bit JDK1.6.0_14
    -Xmx3g
    Why is execution with one thread faster than execution with many threads?
    As you can see, I didn't use any synchronization.
    Maybe I have to configure the JVM in some way to gain the desired performance (I expected a performance gain of about 8x on the i7)?

    Hello,
    I'm from happy-guys (http://www.happy-guys.com), and we developed a new sorting algorithm to sort an array on a multi-core machine.
    But after the algorithm was implemented it was a little bit slower than the standard sorting algorithm from the JDK (Arrays.sort(...)). After searching for the reason, I created performance tests which show that arrays in Java don't allow many threads to access them at the same time.
    The bad news is: different threads slow each other down even if they use different array objects.
    I believe all array objects are natively managed by a global manager in the JVM, and this manager imposes a global lock for all threads.
    Only one thread can access any array at the same time!
    I used:
    Software:
    1)Windows Vista 64bit,
    2) java version "1.6.0_14"
    Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
    Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
    Hardware:
    Intel(R) Core(TM) i7 CPU 920 @ 2,67GHz 2,79 GHz, 6G RAM
    Test1: initialization of array in a single thread
    Test2: the array initialization in many threads on the single array
    Test3: array initialization in many threads on many arrays
    Results in ms:
    Test1 = 5588
    Test2 = 4976
    Test3 = 5429
    Test1:
    package org.happy.concurrent.sort.forum;

    /**
     * Simulates the initialization of an array in a single thread.
     * @author Andreas Hollmann
     */
    public class ArraySingleThread {
        public static void main(String[] args) throws InterruptedException {
            final int arrayLength = (int) 2e7;
            long startTime;

            // init array in single thread
            Integer[] a1 = new Integer[arrayLength];
            startTime = System.currentTimeMillis();
            for (int i = 0; i < arrayLength; i++) {
                a1[i] = i;
            }
            System.out.println("single thread=" + (System.currentTimeMillis() - startTime));
        }
    }
    Test2:
    package org.happy.concurrent.sort.forum;

    import java.util.ArrayList;
    import java.util.List;

    /**
     * Simulates the array initialization in many threads on a single array.
     * @author Andreas Hollmann
     */
    public class ArrayManyThreads {
        public static void main(String[] args) throws InterruptedException {
            final int arrayLength = (int) 2e7;
            final int threadNumber = Runtime.getRuntime().availableProcessors();
            long startTime;
            final int offset = arrayLength / threadNumber;

            // init array in many threads
            final Integer[] a = new Integer[arrayLength];
            List<Thread> threadList = new ArrayList<Thread>();
            for (int i = 0; i < threadNumber; i++) {
                final int iF = i;
                Thread t = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        int end = (iF + 1) * offset;
                        if (iF == (threadNumber - 1))
                            end = a.length; // last thread takes the remainder
                        for (int i = iF * offset; i < end; i++) {
                            a[i] = i;
                        }
                    }
                });
                threadList.add(t);
            }
            startTime = System.currentTimeMillis();
            for (Thread t : threadList)
                t.start();
            for (Thread t : threadList)
                t.join();
            System.out.println("many threads List=" + (System.currentTimeMillis() - startTime));
        }
    }
    Test3:
    package org.happy.concurrent.sort.forum;

    import java.util.ArrayList;
    import java.util.List;

    /**
     * Simulates the array initialization in many threads on many arrays.
     * @author Andreas Hollmann
     */
    public class ArrayManyThreadsManyArrays {
        public static void main(String[] args) throws InterruptedException {
            final int arrayLength = (int) 2e7;
            final int threadNumber = Runtime.getRuntime().availableProcessors();
            long startTime;
            final int offset = arrayLength / threadNumber;

            // init many arrays in many threads
            final ArrayList<Integer[]> list = new ArrayList<Integer[]>();
            for (int i = 0; i < threadNumber; i++) {
                int size = offset;
                if (i == (threadNumber - 1)) // last array takes the remainder (fixed from i < threadNumber - 1)
                    size = offset + arrayLength % threadNumber;
                list.add(new Integer[size]);
            }
            List<Thread> threadList = new ArrayList<Thread>();
            for (int i = 0; i < threadNumber; i++) {
                final int index = i;
                Thread t = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        Integer[] a = list.get(index);
                        int value = index * offset;
                        for (int i = 0; i < a.length; i++) {
                            value++;
                            a[i] = value;
                        }
                    }
                });
                threadList.add(t);
            }
            startTime = System.currentTimeMillis();
            for (Thread t : threadList)
                t.start();
            for (Thread t : threadList)
                t.join();
            System.out.println("many threads - many List=" + (System.currentTimeMillis() - startTime));
        }
    }

  • Threadlock at daemon.queueProcessor.Service.poll

    Hi,
    We have a situation in a production environment where the Tangosol queue processor thread is waiting (locked) during a routine get call. This is happening with version 3.2.2. There is also a write-through backing store configured for the distributed caches. Any insights would be helpful. The thread dump is shown below.
    Sincerely,
    Pranab
    TIBCO Software Inc.
    State: WAITING on com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$GetRequest$Poll@43d589
    Total blocked: 4,874 Total waited: 3,220,986
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:27)
    com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:1)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.get(DistributedCache.CDB:27)
    com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1300)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    com.tangosol.net.cache.CachingMap.get(CachingMap.java:484)
    com.xxx.ObjectCache.getObject(ObjectCache.java:64)

    Hi Pranab,
    The thread dump you posted is perfectly normal - that is how it always looks when a client thread awaits a response from a cache server.
    If you believe that you encountered an abnormal Coherence behavior, please send the entire thread dump (preferably from both client and server tier) to Coherence support at Oracle Metalink (http://metalink.oracle.com)
    Regards,
    Gene
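
    For context, the wait in that dump corresponds to an ordinary blocking client call; a minimal sketch (cache and key names hypothetical):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;

         public class GetSketch {
             public static void main(String[] args) {
                 NamedCache cache = CacheFactory.getCache("objects"); // hypothetical cache name
                 // The calling thread parks in Service.poll until the cache server responds.
                 Object value = cache.get("some-key");
                 System.out.println(value);
                 CacheFactory.shutdown();
             }
         }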

  • Performance bottleneck of hard drive: assets vs. cache vs. render-to drive?

    So I'm beefing up my old Mac Pro tower (5,1) and was wondering which combination of hard drives is fastest, if anyone has any firsthand or theoretical suggestions...
    If someone has all three of these hard drives:
    A) PCIe SSD (OWC Mercury Accelsior_E2 PCI Express SSD)
    B) internal drive bay SSD
    C) external SSD connected via the 600MB/s eSATA port of the above linked card
    … which is best to use in combination for the following in After Effects CC/CC 2014?
    1) storage of asset files used in the AE project (i.e. 1080/4K/RAW/etc. footage, PSD files)
    2) AE disk cache
    3) the drive that AE is rendering to
    … for example, is 1A + 2C + 3B the fastest use for rendering? And is 1AC + 2B the fastest while working in AE?
    Between assets, disk cache, and render location, which is more of a performance bottleneck?
    And does the optimal combination vary if someone has 16GB vs 64GB vs 128GB of RAM?
    Thanks in advance for any insight!

    Well, the long and short answer is: it won't matter. All your system buses only have so much overall transfer bandwidth, and ultimately they all end up being piped in some way through your PCI bus, which in addition is shared by your graphics card, audio devices and what have you. There are going to be wait states and data collisions, and whether or not you can make your machine fly to Mars is ultimately not relevant. There may perhaps be some tiny advantage in using a native PCI card SSD for the cache, but otherwise the overall combined data transfer rates will be way above and beyond what your system can handle, so it will throttle one way or the other.
    Mylenium

  • Not able to perform j1iex for service tax capturing

    The service PO is similar to that of any other PO in terms of taxing.
    In service tax we will have
    Service tax          - 12 %
    Education cess          - 2 %
    Surcharge          - 1%
    So for service tax code we will be maintaining three rates in the Tax code.
    Follow the below process for the Service Cycle:
    1. AC01 - Create the Service Master in AC01 and assign the Valuation Class for the Service Masters, and do Account Determination in OBYC for A/c Key GBB and Valuation Modifier VBR (to this, assign the Service Expense A/c) and for A/c Key WRX, assign the GR/IR Clearing A/c
    2. ME21N - In the PO (ME21N), use Item Category "D" (Services) along with Account Assignment Category "K" (Cost Center), and in the "Services" tab, specify the Service Master and pricing details
    3. ML81N - Create Service Entry Sheet w.r.t. Service PO
    Service Expense A/c - Dr
    GR/IR Clearing A/c - Cr
    4. MIRO - LIV for Service Entry Sheet or Service PO
    Service Vendor A/c - Cr
    GR/IR Clearing A/c - Dr
    Service Tax A/c - Dr
    Ed Cess on Service tax - Dr
    Sec Ed Cess on Service tax - Dr
    In a normal purchase order with excise duty, we would run transaction J1IEX once for capturing excise and again for posting excise duty.
    We want the same step between step 3 and step 4 for service tax also. Can anyone tell us how to achieve that?

    Hi
    We don't have the provision to maintain registers for service tax,
    so there is no question of capturing service tax through J1IEX.
    Service tax is posted to a G/L account at the time of MIRO.
    We will utilize this service tax input credit at the time of the monthly utilization (J2IUN), where there is an option for giving this G/L account.
    Then the amounts of input service tax will be populated on the screen.
    regards
    prasad
