Performance impact of going via Proxy seems high?

Just wondered if anyone had tested the impact of a client app that joins the cluster directly versus one that goes via a proxy?
I've been testing retrievals of Position objects (roughly 300 bytes each) on a 4-server cluster (2 quad-core Xeons in each). The Positions cache is a distributed cache and the objects are stored in POF format. For queries that return few Positions (tens or hundreds), the performance is pretty comparable whether you go via the proxy or not. However, as I query more Positions, the proxy figures get much slower (by 50-90%) and seem much more volatile.
For example, here are the timings for queries without the proxy:
Total Positions    Time (ms)
24598              68
38579              294
97781              385
106151             449
107317             433
107518             436
And here are the same queries via the proxy (the proxy runs on the same servers and the same LAN, as does the client):
Total Positions    Time (ms)
24598              290
38579              421
97781              754
106151             591
107317             787
107518             604
As you can see, the figures are much slower in places, and also much more volatile (look at the last two rows, for example.) CPU stats are low, and I've configured plenty of RAM in the JVMs for both the cache servers and proxies. I've also got plenty of service threads configured on both the Cache Service and the Proxy service (although I'm only running one test at a time, so this shouldn't be a factor anyhow.)
I've looked through the "Best Practice" section of the Docs for proxies, and I'm following all the hints given in it. Still, I'm very surprised by how much of a hit the Proxy seems to be introducing. Is this normal? Or is there something I can try to improve the situation?
Cheers,
Steve

Hi Wei,
Thanks for the update.
I've re-run the tests, and the results are pretty much the same. If I go via the proxy, the time increases by about a factor of 2. For smaller result sets (but still 25k entries), that can be as high as 4. Here's the data from a re-run:
With Proxy:
Total Books    Total Positions    Indexed    Time (ms)
5              24598              true       179
10             38579              true       422
50             97781              true       880
100            106151             true       770
200            107317             true       802
300            107518             true       841
Without:
Total Books    Total Positions    Indexed    Time (ms)
5              24598              true       67
10             38579              true       296
50             97781              true       391
100            106151             true       436
200            107317             true       415
300            107518             true       439
I actually ran a whole variety of tests; these results are just a subset. I have indexes and queries (using auto-generated ReflectionExtractors, as I use CohQL/QueryHelper) and select Positions using "where" clauses against Book, AggregationUnit and Instrument - all attributes of this one object type. I also include a key().type = 'XXX' condition in the where clause.
The general format of all the code is as follows ("book"-type selections shown here):
// Create a Filter to select Positions for the supplied Books.
Filter filter = QueryHelper.createFilter("key().type = 'TRADING' and bookId in " + bookList);
// Create an aggregator to extract the required data from each Position.
ReducerAggregator aggregator = new ReducerAggregator(new MultiExtractor("getBookId,getPositionType"));
// Get the Positions and iterate through them.
Map positionsMap = (Map) cache.aggregate(filter, aggregator);
Set<Map.Entry<TradingPositionKey, List>> positions = (Set<Map.Entry<TradingPositionKey, List>>) positionsMap.entrySet();
for (Map.Entry<TradingPositionKey, List> e : positions) {
    // ... work with e.getKey() and the extracted values in e.getValue() ...
}
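For reference, here is a minimal, self-contained sketch of the kind of wall-clock timing harness behind the figures above (the class name, the "Positions" cache lookup and the book list are illustrative assumptions, not the actual test code):
import java.util.Map;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.QueryHelper;
import com.tangosol.util.aggregator.ReducerAggregator;
import com.tangosol.util.extractor.MultiExtractor;
public class PositionQueryTimer {
    public static void main(String[] args) {
        // Hypothetical book list; the real test presumably builds this from the test data.
        String bookList = "('BOOK1', 'BOOK2', 'BOOK3')";
        // Same cache, filter and aggregator as in the snippet above.
        NamedCache cache = CacheFactory.getCache("Positions");
        Filter filter = QueryHelper.createFilter("key().type = 'TRADING' and bookId in " + bookList);
        ReducerAggregator aggregator = new ReducerAggregator(new MultiExtractor("getBookId,getPositionType"));
        // Time the aggregate call itself, which is what the tables above report.
        long start = System.nanoTime();
        Map positionsMap = (Map) cache.aggregate(filter, aggregator);
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("Entries returned: " + positionsMap.size() + ", time (ms): " + elapsedMs);
    }
}
Whether this runs as an Extend client (via the proxy) or as a cluster member is decided by the client's Coherence configuration, so the same harness covers both cases.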
If, as you suggest, these timings match Oracle's own internal testing, then I guess I'll just have to accept it. Still, it was a little higher than I thought it would be. I expected the proxy to simply act as a "pipe" between my client and the grid, not really doing much and adding no more than a few milliseconds of overhead at most.
The proxy, client and cache servers are all using POF, BTW, all reading the same pof-config file.
Cheers,
Steve

Similar Messages

  • Index creation online - performance impact on database

    Hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table with more than 255 columns, about 400 GB in size, which is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database if I create an index online on a single column of table 'TBL' while applications are active against the same table? So basically: will index creation on an object during DML operations on the same object have a performance impact on the database, and is there a major difference in impact between creating the index online and offline?
    2. I tried to build an index on a column that contains NULL values on this same table 'TBL' (more than 255 columns, about 400 GB, highly fragmented, about 140 million rows).
    I asked for the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which holds an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column with NULLs there, it only took 15 minutes to complete.
    Not sure why on the highly fragmented copy in Prod it took more than 6 hours, compared to the highly defragmented copy in Pre-Prod where the index creation took only 15 minutes.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table?
    Is the pre-prod database running single instance or RAC?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis

  • Regarding performance impact if I do DB access coding in the component controller

    Hi,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. This tool generates the Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these class files use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server.
    The issue:
    Performance: the first time it is OK, but on consecutive calls the application slows down very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue because I have written the data access code in the component controller rather than writing it in some beans?
    My questions regarding this:
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code should be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled in the COM+ components in VB.
    Since I am using the COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects which retrieve the data are closed and nullified.
    My question is:
    if I write DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran

  • Performance Impact for the Application when using ADFLogger

    Hi All,
    I am very new to ADFLogger and I am going to implement it in my application. I went through Duncan Mills' articles and got a basic understanding.
    I have some questions to clarify:
    Is there any performance impact when using ADFLogger that slows the application down?
    Are there any best practices to follow with ADFLogger to minimize the negative impact (if it exists)?
    Thanks
    Dk

    Well, a call to a logger is a method call, so if you add a log message for every line of code, you'll see an impact.
    You can implement it in a way that you only write the log messages if the log level is set to a level which your logger writes (or lower). In this case the impact is like having an if statement, plus a method call if the if statement returns true.
    After this theory, here is my personal finding, as I use ADFLogger quite a lot. In production systems you turn the log level to WARNING or higher, so you will not see many log messages in the log. Only when a problem is reported do you set the log level to a lower value to get more output.
    I normally use the 'check the log level before logging a message' and the 'just print the message' approaches combined. When I know that a message is printed very often, I first check the level. If I assume or know that a message is only logged seldom, I just log it.
    I personally have not seen a negative impact this way.
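    To make the guarded-logging pattern above concrete, here is a minimal sketch; it uses plain java.util.logging to stay self-contained (ADFLogger wraps the same API), and the class and messages are purely illustrative:
    import java.util.logging.Level;
    import java.util.logging.Logger;
    public class GuardedLoggingExample {
        private static final Logger LOGGER = Logger.getLogger(GuardedLoggingExample.class.getName());
        public void process(Object row) {
            // Frequently hit message: guard it, so the string concatenation
            // only happens when FINE is actually enabled.
            if (LOGGER.isLoggable(Level.FINE)) {
                LOGGER.fine("Processing row: " + row);
            }
            // Seldom hit message: just log it directly.
            if (row == null) {
                LOGGER.warning("process() called with a null row");
            }
        }
    }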
    Timo

  • Unable to check large model into SQL Server repository via Proxy

    I have a model that has over 1000 changes (probably) and I am unable to check the model into a repository on SQL Server 2008R2.  We have a proxy service running, and in general the check-in process is much better via the proxy than direct connect.  But for a model with "a lot" of changes, the check-in process never completes.  I have let it run overnight and it still does not complete.  The SQL Server DBA also reports a large "blocking" query running from the PROXY service to the database server when this process seems to "hang".
    Has anyone else encountered this behavior?  I have had a support ticket open for over a year on this one, with no solution, so I was curious if anyone else had found a way to resolve it (other than avoiding "making a lot of changes").

    Hi,
    First, if you want good performance you must use the proxy. When you use the proxy, PowerDesigner sends large packets of data to the proxy in a proprietary (more efficient) format, and the proxy runs long transactions - many atomic SQL transactions - against the DBMS (SQL Server).
    In the "PowerDesigner world" the primary use of a direct connection to the DBMS is to create the repository.
    For adequate performance the proxy and the repository (RDBMS) must be installed on the same server (as should the PD Portal). That way the thousands of transactions are executed in memory and not over the network (avoiding latency problems).
    Because you use SQL Server, you must set the 'max server memory' option to an appropriate value; for example, for 40 users you should set it to at least 8 GB.
    You must also activate these two settings - see the PowerDesigner Installation Guide:
    • ALLOW_SNAPSHOT_ISOLATION
    • READ_COMMITTED_SNAPSHOT
    Physical server memory usage:
    a) Operating system: minimum 2 GB
    b) SQL Server runtime: 1 GB
    c) SQL Server memory space for PowerAMC: 8 GB
    d) Proxy memory: 6-8 GB
    e) I/O memory space: minimum 1 GB
    Total: about 20 GB
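    Purely as an illustration of the memory cap and the two database settings listed above, here is a small JDBC sketch (the connection string, credentials and repository database name 'PDREPO' are placeholders; in practice a DBA would normally run the equivalent T-SQL directly):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    public class RepositoryDbSettings {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the SQL Server instance hosting the repository.
            String url = "jdbc:sqlserver://localhost:1433;databaseName=master;user=sa;password=secret";
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement()) {
                // Cap SQL Server's memory (value is in MB; 8192 MB = 8 GB as suggested above).
                stmt.execute("EXEC sp_configure 'show advanced options', 1");
                stmt.execute("RECONFIGURE");
                stmt.execute("EXEC sp_configure 'max server memory (MB)', 8192");
                stmt.execute("RECONFIGURE");
                // The two snapshot settings recommended for the repository database.
                // READ_COMMITTED_SNAPSHOT normally requires no other active sessions in that database.
                stmt.execute("ALTER DATABASE PDREPO SET ALLOW_SNAPSHOT_ISOLATION ON");
                stmt.execute("ALTER DATABASE PDREPO SET READ_COMMITTED_SNAPSHOT ON");
            }
        }
    }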
    Also: under 'C:\Program Files\Sybase\PowerDesigner 16\' there is a "readme.html" file. In this document SAP gives information on optimizing the repository, e.g. modifying the length of the TDAT attribute of the PMTEXT table.
    Good luck
    Note: SAP knows this problem well. SAP could reduce the number of atomic transactions sent to the DBMS; programming techniques for this exist. Something to look into.

  • Active Table Logging T000 performance impact

    Hi fellow SAP experts,
    I need some advice on system performance impact when switching on Table Logging for T000 - configuration in production please?
    We have decided to turn on Table Logging for auditing purposes, only allowing developer config in production following a volume of evidence being supplied.
    I need to know how much this activation is going to impact the performance of the company's production environments: how much storage, memory, performance, etc. this function is going to consume, and how much of these consumables I need to cater for now and in the future?
    We have a Dual Track environment; BAU want to switch on Table Logging for fix-on-fail, and I want to switch it on for project deliveries.
    Please advise, with referencing if possible?
    Thank you kindly
    Paul

    Hi Paul,
    There has been a constant debate about whether table logging affects system performance, especially in a production environment. Please see my comments below:
    1) To turn on logging for table T000, you will have to activate the parameter rec/client, with values for one client or all the clients in the System depending on your requirement.
    2) This parameter setting will not only log changes to T000, but also changes to over 28000 other tables.
    3) But these are customizing tables which usually contain a relatively small amount of data which is changed occasionally.
    4) After activating, if you suddenly find performance issues, you can check which tables are causing issues via transaction SCU3.
    5) You can go to transaction SE13 and deactivate logging for a table, if you find too many entries for any particular table in SCU3.
    So, logging tables doesn't necessarily impact performance. Hope this helps. Please refer to the related SAP Note 608835.
    Best Regards,
    Savitha

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support
    to their enterprise software, what is your plan for
    quantifying the performance impact and the functional
    correctness of using web services before going live with the
    final application?
    Empirix is hosting a free one hour web event discussion on
    web services testing and automated web services testing
    solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web
    events being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promises of
    integrating multiple software applications from
    heterogeneous networks and for exchanging information
    from vendor-to-vendor or vendor-to-consumer in a
    standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be
    critical that web services undergo performance testing.
    As with any enterprise software project, the adoption of
    proper test methodologies and use of testing tools will
    play a key part in the overall success or failure of
    projects utilizing web services. In a compressed
    software project schedule, an organization must
    quickly determine if its web services will operate
    successfully under a variety of load conditions. Like other
    web-based technologies, successful web services will need
    to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing
    challenges created by this emerging technology, along with
    the variety of testing solutions available. Automated
    web service testing will be discussed and demonstrated
    using FirstACT, the first web services performance testing solution available
    on the market. Using a sample web
    service, automatic test case creation, scalability testing,
    and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT

  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application.
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promises of integrating
    multiple software applications from heterogeneous networks and for
    exchanging information from vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that usually JAXB 2.0 performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • XI to BI via Proxy

    I am pulling an .xml file from a ready folder into XI and then sending it to BI via proxy. Once it is in the BI box, it shows that the message is successful; however, I do not receive any data in the PSA. Is there some kind of config on the BI side that has to be in place first? The data sources are there, but it seems like something is missing.

    Hi,
    Check with the BI person - the delta queue in BI can have issues while pushing the data from the proxy.
    For your understanding you can refer to this:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/0ccae190-0201-0010-1593-c90ef3c1d159
    Thanks
    Gaurav

  • My homepage is Google but the logo and search box seems higher on the page than normal...can I adjust the position?

    I had to set Google's search settings to Do Not Use Google Instant, as no cursor was showing in the Google search box. After the change, the Google logo and search box seem higher on the Firefox page. I would like to be able to adjust the height of the Google logo/search box. Please tell me if I can make that adjustment.

    Thank you for the information about the zoom feature in Firefox. I had found it under the View menu (although that was only a temporary fix; the Tools/Options/Font method does make it "stick"), and changing the font size does enlarge or "move" the logo and search box on Google, but it also increases the font size displayed on the search screen. I was hoping to find a way to move the Google logo and search box down on the home page a bit without changing the font size, but I realize that may be too fine an adjustment. The zoom feature added to the toolbar via customization may be a good solution for me. Thank you again for your helpfulness.

  • EBS performance impact using it as a Data Source

    I have a quick question on EBS performance. If I set up the EBS Database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS, due to SSRS accessing EBS Data for reports generation? Now, I know there'll always be a hit depending on the volume of data being accessed. But, my question is, will it be significantly higher using an external reporting tool using an ODBC connection rather than native XML Publisher.

    Hi,
    Tough to answer without looking at data; my suggestion would be to have a test EBS environment set up, get permission from the vendors to run a performance test without buying a license, compare AWRs from both scenarios and then decide.
    Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
    Hope this helps.
    Regards,

  • Performance Impact When Using SNC Communication

    Hello,
    Does anybody know if and how much performance impact there is if we use SNC for communication between the SAP Server and SAPGUI?
    I think there are two areas that may be impacted: network and server CPU.
    For network load, I did find a part in the "Front-End Network Requirements for SAP Business Solutions" document saying "overhead of roughly 350 bytes per user interaction step", but it does not specify the type of encryption. I wonder if there is any other info on this?
    For CPU impact, how much overhead should I consider for sapgui access?
    I see no field for this in the quicksizer and I can't seem to find any white papers on this subject.
    Thank you in advance.

    >
    Peter Adams wrote:
    > Ken,
    >
    > if you plan to use SAPcryptlib for SNC between SAP servers, then you should use a SAPcryptolib-compatible solution for the SNC communication between SAPGUI and SAP server, and there is only one vendor who can provide this. Let me know, if you need help finding it. My contact information is in my SDN business card.
    Just so Kan is clear - It is not legal to use the SAP cryptolib provided by SAP for SNC between SAP GUI and SAP servers, so if x.509 is the desired mechanism you need to purchase additional software from the company which Peter works for to provide SAP GUI SNC-based SSO. I think instead, Kan might be using the free SAP supplied SNC Kerberos library, which is why I asked him to confirm this in my last post. I doubt he is interested to buy any third party software.
    > As to the performance discussion: first of all, yes, there will be a small performance impact if SNC is used (no matter which type or implementation), but from our experience with many actual SNC implementations, I can state that this is practically not relevant. It is not noticeable by users. There were never any performance discussions with customers. See also SAP Note 1043694.
    I agree with this - the performance impact is not noticed by users, but the system managers who look after the servers where SAP is installed, and the team responsible for the network need to be aware of any differences (if any) when SNC is turned on and when SNC is turned off. I think this is why Kan is asking these questions, not because he is concerned about users noticing any difference when they logon to SAP.
    > Just a first quick comment on certain statements above: Tim's arguments for proving his overall statement are not conclusive from my perspective. Nor do I think his overall statement itself is correct.
    The facts I mentioned are well-known facts, e.g. symmetric crypto is far better from a performance point of view than asymmetric. I know the examples I showed, which I found with a quick Google search, were not conclusive, but they were given as initial examples, not necessarily the best ones. This is why I specifically mentioned that if you search Google yourself you will see many more references where comparisons are made between Kerberos (i.e. symmetric) and PKI (i.e. asymmetric).
    > First of all, he only selects one aspect of performance - CPU impact of encryption algorithms.
    No, I didn't. Some of the examples I referred to also discuss other differences. I also mentioned other differences, such as memory and which protection level is used when configuring SNC.
    > But for a true comparison, you'd have to look at all relevant aspects (latency, network overhead, ...).
    Yes, I agree. No doubts here.
    >Network performance overhead is usually worse with Kerberos than with PKI.
    This is not true. When SAP is using SNC, the GSS-API standard is used and so the only network communication involves SAP software sending a standard GSS token from the workstation to the SAP server, and this GSS token is often about the same size, regardless of which mechanism is used, so any network performance differences are not related to the mechanism, but more related to the complexity of the cryptography used on each end (mostly on the server side).
    >Second, you need to look at the specific usage scenario. For example, the first report referenced by Tim is an analysis of different Token Profile mechanisms for WS-Security, for one specific implementation. This does not allow one to draw any conclusions for the SNC use case in general, and certainly not for a specific implementation. It does not take the overhead of encrypting the message content into account. Third, Tim associates PKI exclusively with asymmetric encryption. Yes, it is well known that asymmetric algorithms are slower than symmetric ones, but it is also well known that the encryption of the message content (by far the majority of the data) happens with symmetric encryption algorithms in the PKI scenario. With PKI-based SNC, you can even select a symmetric algorithm and use a more performant one than the ones that Kerberos prescribes.
    Kerberos works with many different symmetric algorithms as well, so mentioning that the algorithm is selectable is not relevant to any comparison.
    > To summarize, I will try and collect facts that will support the opposite point of view. From our practical experience, the performance overhead is not relevant, and criteria like consistency with SAPcryptolib, strength of security, ease of administration, choice of authentication and encryption mechanism, etc. are much more important.
    >
    > Peter

  • Performance Impact of Unique Constraint on a Date Column

    In a table I have a compound unique constraint which extends over 3 columns. As a part of functionality I need to add another DATE column to this unique constraint.
    I would like to know the performance implications of adding a DATE column to the unique constraint. Would the DATE column behave like another VARCHAR2 or NUMBER column, or would it degrade the performance significantly?
    Thanks

    What performance are you concerned about degrading? Inserts? Or queries? If you're talking about queries, what sort of access path are you concerned about?
    Are you concerned that merely changing the definition of the unique constraint would impact performance? Or are you worried that whatever functional change you are making would impact performance (i.e. if you are now retaining historical data in the table rather than just updating it)?
    Regardless of the performance impact, unique indexes (and unique constraints) need to be correct. If you need to allow duplicates on the 3 current columns with different dates, then you would need to change the unique constraint definition regardless of the performance impact. Fast and wrong generally isn't preferable to slow and right.
    Generally, though, there probably is no reason to be terribly concerned about performance here. Indexing a date is no different than indexing any other primitive data type.
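    For what it's worth, here is a minimal sketch of what the constraint change itself might look like. All table, column and constraint names are hypothetical, and it is written as JDBC only for consistency with the Java code elsewhere on this page (the same ALTER TABLE statements can be run directly in SQL*Plus):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    public class ExtendUniqueConstraint {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, user and password.
            try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = con.createStatement()) {
                // Drop the existing 3-column unique constraint...
                stmt.execute("ALTER TABLE orders DROP CONSTRAINT orders_uq");
                // ...and recreate it with the DATE column included, so the same 3-column
                // combination may now repeat as long as the dates differ.
                stmt.execute("ALTER TABLE orders ADD CONSTRAINT orders_uq UNIQUE (customer_id, product_id, region_id, order_date)");
            }
        }
    }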
    Justin

  • Trouble sending a message to XI via Proxy

    Hi,
    I'm trying to send a message from an R/3 system to XI via proxy, but I can only see the message on the R/3 side (SXMB_MONI), where it has a green flag status. When I activate the queue that the message belongs to, it goes to a red flag status and I get an 'ErrorHeader' with:
    - <SAP:ErrorHeader xmlns:SAP="http://sap.com/exchange/MessageFormat">
      <SAP:Context />
      <SAP:Code p1="401" p2="Unauthorized" p3="" p4="">HTTP.HTTP_STATUS_CODE_NEQ_OK</SAP:Code>
      <SAP:Text language="EN">HTTP status code 401 : Unauthorized</SAP:Text>
      </SAP:ErrorHeader>
    When I check Component Monitoring:
    domain
       Integration Server
       Integration Engines
          Proxy Runtime <Business System>
    I got an error 'Unable to log on to system <Business System> in language en with user XIRWBUSER'.
    Could someone help me?
    Thanks in advance
    Leo

    Hi Leonardo,
    Check these threads...
    >>HTTP.HTTP_STATUS_CODE_NEQ_OK
    ABAP PROXY CONNECTION TO SAP/XI
    >>I got an error 'Unable to log on to system <Business System> in language en with user XIRWBUSER'.
    Demo application: "You cannot log on to system .... with user XIRWBUSER
    Regards
    Anand

  • How to handle Integrated Configuration performance impact on AAE/Java AS

    Hi there,
    Recently I moved a configuration scenario from the standard flow involving both the ABAP and Java stacks to Integrated Configuration. Undoubtedly, this will increase the load on the AAE/Java stack. However, do you have a link to some clear (official, even better) guidelines on what configuration changes should be made on the Java side in order to handle the performance impact of such a transition?
    Best Regards,
    Lalo

    Hi Lalo,
    In fact, using AAE generates no traffic in the ABAP stack at all (it is omitted when processing a message), while the traffic in the Java stack should be lower than for the normal scenario. The performance should be noticeably better, thanks to the smaller number of persistence steps and the absence of costly HTTP connections between stacks. For more details, please refer to this document:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7
    Important quotation from this document:
    Since the Integration Engine is bypassed for local message processing in the AAE, the resource consumption both in memory and CPU is lower. This leads to higher message throughput, and faster response times which especially is important for synchronous scenarios.
    Moreover, have a look at this document, especially its beginning, for details about the architecture of AAE processing:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70066f78-7794-2c10-2e8c-cb967cef407b
    Hope this helps,
    Greg
