Clarification about parameters to be changed for "max no of 100 conversations exceeded"

Hi,
In our system we had received the error:
Connect to SAP gateway failed
Connect_PM  TYPE=B MSHOST=xxxxxxxxxxxxxx GROUP=PUBLIC R3NAME=xxx MSSERV=sapmsXXX PCS=1
LOCATION    CPIC (TCP/IP) on local host
ERROR       max no of 100 conversations exceeded
due to which some users were unable to work in the system.
The threads that I have found while searching on the topic suggest changing the parameter CPIC_MAX_CONV=500 (or greater), as per SAP Note 314530.
And as mentioned in the SAP note 316877:
" Reduce ~timeout on the ITS machine to enforce the automatic termination of unused sessions."
Now in SMGW I can see the "Active Connections" list (first page of the transaction) and also the drop-down option "Logged on Clients", which gives another list.
I am now trying to understand which timeout parameter controls which list.
Most of the entries in the Active Connections list are from the TP jlaunch (and seem to be coming from the Java stack of the same system). And most of the entries in the Logged on Clients list are from the TP SAPSLDAPI.
I need to know which timeout parameter (as per note 316877) needs to be changed for the problem being faced.
Or have I misunderstood this ~timeout parameter, and should it instead be set in the Windows environment, similar to the CPIC_MAX_CONV parameter?
Regards,
Rohan.

Hi,
Thanks for your responses.
I will be changing the CPIC_MAX_CONV parameter.
But I would still like to clarify the other parameter, "~timeout",
and be prepared to take action on this as well.
As per the (conflicting) information I have so far, this seems to be a parameter that has to be maintained in each individual service in SICF?
Or do we have any synonymous itsp* or gw* parameter which I can set for the timeout?
Regards,
Rohan.
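
For reference, a minimal sketch of the environment-variable change from SAP Note 314530, assuming a Windows host; the value 500 is just the figure quoted above, and the affected programs or services have to be restarted to pick up the new environment:

rem set CPIC_MAX_CONV system-wide (requires an administrator prompt)
setx CPIC_MAX_CONV 500 /M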

Similar Messages

  • Clarification about source and destination IPs for internal clients and Edge server

    I just wanted to get some clarification on the correct traffic flow between internal Lync clients and the Edge server.
    From all the diagrams I've looked at, I was under the impression that if internal clients need to hit the Edge server to talk to external clients, they should always do so through the Edge Internal interface, which bridges to the Edge External interface and out to the internet. Specifically, port 3478 from the Edge AV External interface to the internal clients.
    We aren't seeing that in our environment. When internal clients are talking to external clients, we see the Edge AV External interface communicating directly with the internal client. In fact, we found this out because after the migration to Lync 2013, external users couldn't create an AV connection to internal users on either of the Lync servers. We saw traffic on 3478 being dropped between the Edge AV External interface and the internal client. Once we opened that port, AV traffic worked.
    We never put this rule in until we introduced Lync 2013.  Lync 2010 didn't seem to require it.
    Is that the correct flow?

    I would also really love to know the outcome of this but it looks like the thread is marked as "Answered" and it is not so. 
    I've been working with a troublesome Lync deployment in which internal users are having issues sharing their desktop with external and federated users. After opening up all the 50000-59999 range for TCP/UDP on the A/V Edge external interface things are working
    much better, but we still see sporadic failures.
    It led us to start digging into the network traffic. We see that UDP traffic on port 3478 is being routed back from the external client to the Edge A/V's external interface, inside of the DMZ's perimeter, then directly to the internal client on the internal
    network. It doesn't look like it's making a connection since the stream is so small, so I wonder if there is a design flaw in my topology?
    There are persistent static routes on the Edge server that use the internal interface to route internally directed traffic over the internal gateway. Tracert confirms the flow, but in wireshark traces, running during successful connections, UDP port 3478
    is still sending packets directly to the internal IP from Edge's A/V address. 
    We also see successfully connected sessions communicate on a different network route that we use to handle internet traffic rather than our Lync topology's route (the one defined for A/V traffic). The connection opens on ports in the 50000 range, but goes
    over a router that we have not configured for such traffic. Is that possible?
    Why is UDP traffic on 3478 trying to go directly to internal clients from the external interface?
    It sounds like it's happening elsewhere... Is this a legitimate issue to be diagnosing? Has it been observed and/or resolved by others?

  • Need clarification about purchasing a font/license for cover of my ebook

    I am having a graphic designer make a book cover for my ebook, and she's using an Adobe font in PhotoShop. Do I, as the book author, have to purchase the font/license, in order to use the font on my ebook cover?

    She is allowed to sell her work using the products she owns.

  • [svn:fx-trunk] 11454: ASyncList class ASDoc change: added explicit warning about the lack of support for re-inserting pending items.

    Revision: 11454
    Author:   [email protected]
    Date:     2009-11-04 18:17:33 -0800 (Wed, 04 Nov 2009)
    Log Message:
    ASyncList class ASDoc change: added explicit warning about the lack of support for re-inserting pending items.
    QE notes:
    Doc notes:
    Bugs:
    Reviewer:
    Tests run:
    Is noteworthy for integration:
    Modified Paths:
        flex/sdk/trunk/frameworks/projects/framework/src/mx/collections/AsyncListView.as

  • Change y-axis BACK to auto/auto for max/min

    I have a bar chart and the settings for my y-axis are 250/auto. I need it to revert to auto/auto for max/min because one of my numbers is large and is shooting off the graph.

    The reply walked through screenshots showing a linear scale, the change to a percentage scale for the y-axis, and centering the value labels.

  • Many doubts about your questionable "change for better" new conditions

    I am an early adopter of Revel and a satisfied customer. I upgrade my account for some months every year when I have more than 50 pics I want to save on Revel. When I downgrade, my pics remain on Revel. Fantastic!
    With the new terms I would have to pay for a permanent upgrade once my account exceeds the 2 GB of free storage that you are offering.
    This is not a "change for better", as you say. Not at all.
    As an early adopter, will I keep the billing rights and conditions that I signed up for when I subscribed to the service? I await a clear answer, as I am thinking about deleting my account.
    Your change sounds like a trick... now that I have spent lots of time uploading many GB, you change the conditions. Not wise at all. And the worst is that we don't know what you are planning for the future...

    Darderes-
    The new model will not change how the premium accounts work, but all free users will be using the new model once the changes take place. We realize this may benefit some users and not others and apologize if it causes you any inconvenience.
    Pattie

  • No color change for mandatory parameters in Report Launch Form

    Report Launch Form (qms0012f) does not color the mandatory report parameters when running on 9IAS webserver.
    We are using Designer 6i with Headstart 6.5.3.0.
    On client/server, the mandatory report parameters are changed to the user preference color.
    When testing the form with the Developer Web Previewer of Forms Builder 6i, the mandatory parameters are also changed to the user preference color, but the background color is white instead of gray?

    Cheryl,
    I'm sorry, on creation of Headstart 2.1 we added the implementation_name to the qms_modules table, but we just plain forgot to modify qms0012f as well. As most people have the implementation name identical to the module name, nobody noticed until you did!
    If you want to change this, you should modify library qms0012l.pll.
    Go to package qms0012f,
    find the declaration of g_short_name_item, and add a similar one g_implementation_name_item where 'SHORT_NAME' is replaced by 'IMPLEMENTATION_NAME'. In the same package qms0012f, find the function get_short_name_item and add a similar one get_implementation_name_item.
    Then go to package qms$report, function fill_par_list. At the end of this function, you will see a statement l_module_name := name_in(qms0012f.get_short_name_item);
    Change this into l_module_name := name_in(qms0012f.get_implementation_name_item);
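    A rough sketch of the change described above, in Forms PL/SQL; the item name string is an assumption modelled on the existing SHORT_NAME constant, so check the actual value used by g_short_name_item:
    -- in library qms0012l.pll, package qms0012f (untested sketch)
    g_implementation_name_item  varchar2(61) := 'QMS0012F.IMPLEMENTATION_NAME';  -- assumed block.item name

    function get_implementation_name_item return varchar2
    is
    begin
      return g_implementation_name_item;
    end get_implementation_name_item;

    -- in package qms$report, function fill_par_list, at the end:
    l_module_name := name_in(qms0012f.get_implementation_name_item);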
    I am not able to test this, but I think it should work. Please let us know if this workaround helps, then we can include it in Headstart 6i.
    Hope this helps,
    Sandra

  • Clarification on services-config.xml files for RemoteObject services

    I am currently preparing an AMFPHP environment for exchanging
    data with Flex and AIR applications. I can connect to the services
    via Netconnection, but using RemoteObject won't work.
    Documentation for the necessary configuration files
    (services-config.xml, remoting-config.xml and the like) is, um,
    sparse and seems to be slightly incorrect or misleading at times.
    So I'm looking for clarification about:
    • Changes in the syntax of services-config.xml: in examples for Flex 2.01 there are "class" attributes with values beginning with "flex."; in Flex 3 examples the attribute name has changed to "type", and values beginning with "flex." have mostly vanished and been replaced with similar-looking values beginning with "mx.". Are the "flex." and "mx." prefixes interchangeable?
    • The correct syntax for linking other files into services-config.xml: the livedocs state that Adobe prefers linking files into services-config.xml (using something like <service-include file-path="remoting-config.xml"/>) instead of defining all service parameters there. But in the example from the Flex 2.01 livedocs, the root element of the services-config.xml file is <services> instead of <services-config>. Assuming that <services> is meant as a child of the <services-config> root element, there seems to be a mandatory <service> child element missing. That's either intended, illogical, and misleading, or simply erroneous.
    • Which classes are still valid in Flex 3, and which have changed: most of the examples for using RemoteObjects out there are for FlexBuilder 2 (e.g. here) and don't seem to work with FlexBuilder 3 Beta 2. But there is no statement to be found in the Flex 3 documentation about what has changed.
    • A meaningful example for services-config.xml (see the sketch after this post): in the Flex 3 Beta 2 documentation there are plenty of references to using services-config.xml, and I can even find information about the ServerConfig class in the ActionScript 3 Language Reference, a sort of wrapper class for the information provided in services-config.xml, but there is no information about what constitutes a working services-config.xml configuration (which XML tags with what names, containing what attributes, need to be present).
    As I'd really like to create cutting-edge Flex and AIR OCC applications, I'm eager to have those ambiguities clarified.
    Best regards,
    Cathness
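    A minimal sketch of a compile-time services-config.xml for an AMFPHP endpoint, assuming the usual Flex 3 / BlazeDS-style element names; the channel id, endpoint URI and linked file name are placeholders, not values from this thread:
    <?xml version="1.0" encoding="UTF-8"?>
    <services-config>
        <services>
            <service-include file-path="remoting-config.xml"/>
        </services>
        <channels>
            <channel-definition id="my-amfphp" class="mx.messaging.channels.AMFChannel">
                <endpoint uri="http://localhost/amfphp/gateway.php"
                          class="flex.messaging.endpoints.AMFEndpoint"/>
            </channel-definition>
        </channels>
    </services-config>
    A RemoteObject in the application would then reference a destination defined in remoting-config.xml that points at this channel.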

    From the docs it looks like this is used for web services as
    well:
    http://livedocs.adobe.com/flex/201/html/dataservices_config_100_3.html#260186
    If this isn't the right config file, which file should I be
    looking at?
    Mike

  • Clarification about  Database_Buffer_cache workings

    Hi All,
    Clarification about database buffer cache workings (this statement is from my course material):
    1. The information read from disk is read a block at a time, not a row at a time, because a database block is the smallest addressable storage space on disk.
    Before answering, please check whether my above statement is correct or not, because I got it from my course material.
    If I am querying
    select * from emp;
    does the server process bring in the whole block(s) belonging to the EMP table, or does it just bring in the table rows themselves?
    Thank you,
    Regards,
    DB
    Edited by: DB on May 30, 2013 3:19 PM
    Edited by: DB on May 30, 2013 4:35 PM

    Both happen: the LGWR may call the DBWR to write dirty blocks from the buffer cache to disk. Dirty in this context means that the blocks in the buffer cache have been modified and not yet written to disk, i.e. their content differs from the on-disk image. Conversely, the DBWR can also call the LGWR to write redo records from the redo log buffers (in memory) to the redo log files on disk.
    To understand why both are possible, you need to understand the mechanics of how Oracle does recovery, in particular REDO and UNDO and how they play together. The excellent book "Oracle Core" by Jonathan Lewis describes this in detail.
    I'll try to sketch each of the two cases. I am aware that this is only an overview which leaves out many details. For a complete description please look at the various Oracle books and documentation that cover this topic.
    1. LGWR posts DBWR to write blocks to disk
    As you probably know, any modifications done by DML (which modify data blocks) are recorded in the redo. In case of recovery this redo can be used to bring the data blocks to the last committed state before failure by re-applying the modifications that are recorded in the redo. Redo is written into redo log files and the redo log files are used in a round-robin fashion. As the log files are used in a round-robin fashion, old redo data is overwritten at some point in time - thus the corresponding redo records are no longer available in a recovery scenario (they may be in the archived redo logs, which may however not exist if your database runs in NOARCHIVELOG mode; and even if your database runs in ARCHIVELOG mode, the archived redo log files may not be accessible to the instance without manual intervention by the DBA).
    So before overwriting a redo log file, the Oracle instance must ensure that the redo records being overwritten will not be needed in a potential instance recovery (which the instance is supposed to do automatically, without any action by the DBA, after instance failure, e.g. due to a power outage). The way to ensure this is to have the DBWR write to disk all modifications that are protected by the redo records being overwritten (i.e. all data blocks where the first modification that has not yet been written to disk is older than a certain time) - this is called a "Thread checkpoint".
    2. DBWR posts LGWR to write redo records to disk
    Oracle uses a write-ahead protocol (see http://en.wikipedia.org/wiki/Write-ahead_logging). This means that for any modification, the corresponding redo records must have been written to disk before the actual modification to the data blocks is written to disk (into the data files). The purpose of this, I believe, is to ensure that for any data block modification that makes it to disk, the corresponding UNDO information can be restored (from redo) in case of recovery, in order to reverse uncommitted changes in a recovery scenario.
    Before writing a data block to disk, the DBWR must thus make sure that all redo for modifications affecting this block has already been written to disk by the LGWR. If this is not the case, the DBWR will post the LGWR and only write the data block to the datafile once the redo has been written to the redo log file by the LGWR.

  • How do you change the max data when creating a DVD?

    I recently filmed a show and am making DVD copies. The show runs 1 hour 45 minutes and is about 79 GB in HD. I'm using Compressor 4.1 to create the DVDs so it fits on one disc. I've made a few test copies, but so far every DVD freezes at around 43 minutes when I play it in another computer or DVD player. I heard in a blog that reducing the max data rate should solve the problem. This is my first time using Compressor, and I wanted to know where you go to change the max data rate and how that will affect the video quality. Also, what is the suggested range of data limits for a project of this size?
    Thanks

    Duplicate the Create DVD preset from the Settings pane and name it.
    Select the MPEG setting in the batch window.
    Open the Inspector and click the Video tab.
    Choose your "Encoding Mode" (CBR, etc.).
    Uncheck Automatically Select Bit Rate.
    You can then move the slider: somewhere around 4 Mb per second should be about right.
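    As a rough cross-check (assuming a standard single-layer 4.7 GB DVD and a running time of about 105 minutes): 105 minutes is 6,300 seconds, and 4 Mbit/s of video plus a few hundred kbit/s of audio comes to roughly 4.4 Mbit/s in total, which works out to about 6,300 x 4.4 / 8 ≈ 3,465 MB, comfortably under 4.7 GB. Anything much above roughly 6 Mbit/s total would no longer fit a program of this length on a single-layer disc.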
    Just to add that you might also have a problem with the physical media. Try another brand to see whether that makes a difference.
    Russ

  • [svn:fx-trunk] 5005: Last PARB changes for FxContainer/FxDataContainer.

    Revision: 5005
    Author: [email protected]
    Date: 2009-02-18 17:21:52 -0800 (Wed, 18 Feb 2009)
    Log Message:
    Last PARB changes for FxContainer/FxDataContainer. We're now exposing more properties on the FxContainer from the underlying Group (same for FxDataContainer and DataGroup). The only new property being exposed on FxContainer is autoLayout. For FxDataContainer we're now exposing autoLayout, typicalItem, and the renderer_add and renderer_remove events. We're also being smarter about when to add events on the underlying contentGroup/dataGroup--however, we're ignoring some of the parameters in addEventListener and these aren't getting proxied down on to the listeners we add to the contentGroup/dataGroup.
    Also found and fixed a bug in the RendererExistenceEvent.clone() method.
    QE Notes: Please test these new properties/events.
    Doc Notes: -
    Bugs: -
    Reviewer: Hans
    tests: checkintests, DataGroup, FxList, FxButtonBar, FxDataContainer, FxContainer
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/FxContainer.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/FxDataContainer.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/events/RendererExistenceEvent.as

  • I have to report you about bugs in latest BIOS for MSI-6545 ver.1 (ver

    I have to report bugs in the latest BIOS for MSI-6545 ver.1 (BIOS version 1.7).
    After flashing the BIOS I cannot change the parameters "Shutdown temperatures", "Spread spectrum" and some others!!!
    When I try to do it, my computer hangs.
    I have CPU PIV-2.4-400-512 with memory 512MB Samsung original PC-800 ECC.
    (By the way, there were no problems with BIOS 1.5 and CPU Intel PIV-1.6.)
    [email protected]

    I have also noticed that the BIOS write-protect control is missing.
    I don't know why it's been removed, but if you want to protect your BIOS you can set the jumper (J16) on your motherboard.
    If the jumper (J16) is shorted (little plastic thing connecting both pins together), then the BIOS flash is locked and can't be updated.
    If the jumper (J16) is not shorted (little plastic thing removed), then the BIOS flash is unlocked.
    Also, can anyone tell me if two gameports are listed in the Device Manager after clearing CMOS using BIOS 1.7?
    BIOS 1.5 is OK apart from the fact that I can't upgrade my processor to 2.6GHz, as it only supports up to 2.2GHz, so if I upgrade the processor I won't be able to clear CMOS because of the problem with the gameports.
    I have tried everything to get rid of one of the gameports, but nothing seems to work apart from going back to BIOS 1.5, which is no good because I want to upgrade my processor.

  • Price Change for Revenue Recognition

    Dear experts,
    I am using Revenue Recognition of type B with Revenue Recognition before invoicing.
    I have an issue/question about the way price changes are treated in the functionality.
    The below example is used to describe the issue for your understanding:
    1.Pricing master data: $100/EA
    2.Price of item in sales order: $100/EA
    3.Qty in sales order: 10 EA
    4.Delivery & Goods issue: 10 EA
    5.Revenue Recognised (VF44) for: $1000
    Accounting Entry:
    DR Unbilled Accounts Receivable $1000
    CR Revenue $1000
    6.Change in pricing master data: $120/EA
    7.Partial Invoice for: 2 EA @ $120/EA
    Accounting Entry:
    DR Customer $240
    CR Unbilled Accounts Receivable $240
    8.Partial Invoice for balance qty: 8 EA @ $120/EA
    Accounting Entry:
    DR Customer $960
    CR Unbilled Accounts Receivable $760
    CR Deferred revenue $200
    9.Subsequent Revenue Recognition (VF44)
    Accounting Entry:
    DR Deferred revenue $200
    CR Revenue $200
    As you can see in the example above, the amount of the price difference is not credited to sales in step 7. From what I have seen so far the additional amount due to price difference gets credited to sales only after the total invoice value for the item exceeds the revenue recognized.
    Is there any option to set the system so that the price difference is taken into account for revenue recognition during each invoicing (even
    when the invoicing is partial) ?
    Thanks in advance for your help.

    The price difference will not go to Sales in Step 7.  The reason is that Revenue Recognition works on the item rather than the amount.  Since you have run VF44, the system finds the same in Unbilled Receivables and posts to that account.
    For updating the price changes, you have to run VF46 to cancel the original RR entries.  In this case, it will just clear the RR lines if realization has not happened and post a reversal if realization has happened.  This transaction will also create a new RR line for VF44 which you can process.
    Refer Note 820417 Implementation guide for RR.  Download the attachments and look into Part2 doc page 13.  This explains how to deal with price changes in sales document.
    Hope this helps.
    Ravi.

  • TS4088 Why is there a limit of 3 years?

    All mid-2010 MacBook Pros with these symptoms should get their logic board changed for free. My Mac is 3 months too late, and I have had this problem for over a year, but I only saw this article today.

    Hey Clintonfrombirmingham
    I called Apple technical support in Denmark, but with no positive reply.
    She couldn't do anything, and said that they had sent a recall email about the problem with their offer to repair the MacBook Pro, but I never received an email about it. She wasn't in a position to make an exception. It can't be right that I paid a lot of money for a product that can barely stand on its own feet. Apple didn't tell me that the product I was about to buy would restart every 5 minutes, and now that they know about the problem, they won't repair it? It just doesn't make sense to me. If a car seller discovers that the brakes in all the cars he has sold will fail after some years, he will recall all the cars for repair no matter what. I just don't understand how Apple is providing good service for their customers by extending the warranty from 2 to 3 years, but won't take the computers that are a little bit too old; 4 months make the difference. I can't believe it.
    What can I do now?
    best regards Oskar

  • What is the best type of xml storage for max performance in 9i r2

    We have many schema-based XMLs of 1 MB in XDB (9.2.0.5),
    and our database is getting slower with every new XML.
    delete_resource (300 sec) and create_resource (300 sec) take a long time,
    and if I FTP a schema-based XML out of XDB it takes a long time (200 sec).
    Our system works like this: the XML comes in (FTP), XDB validates the XML, then the XML is archived, and as the
    last step the XML data is extracted and put in normal tables.
    There are no XML updates or XML constraints.
    I have now defined one main table in the schema, and I use Oracle types for complexTypes and xdb:defaultTable="".
    In 10g I don't have problems, but our production system
    is 9.2.0.5 and it takes a lot of work and downtime to change that.
    This goes well in 9i with files smaller than 500 KB.
    What is the best schema configuration for max performance (create/delete resource and FTP out)?
    Thanks, Edwin

    I changed
    maxOccurs="unbounded" xdb:SQLName="TRANSACTIONRANGE" xdb:SQLCollType="TPF_MASTER_D_T_RANGE_COLL_01"
    into
    maxOccurs="unbounded" xdb:SQLName="TRANSACTIONRANGE" xdb:SQLInline="false" xdb:defaultTable="TPF_MASTER_D_T_RANGE_TAB"
    This gives me very good performance:
    the total run is now 6 times faster,
    and FTPing an XML out of XDB now takes 2 seconds.
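    For context, a sketch of where such annotations sit in an annotated schema; the parent element, child elements and names below are placeholders, and only the xdb:* annotations are taken from the post above:
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:xdb="http://xmlns.oracle.com/xdb">
      <xs:element name="Master" xdb:defaultTable="MASTER_TAB">
        <xs:complexType>
          <xs:sequence>
            <!-- xdb:SQLInline="false" stores each range out of line in its own default table
                 instead of as a nested collection inside the parent type -->
            <xs:element name="TransactionRange" maxOccurs="unbounded"
                        xdb:SQLName="TRANSACTIONRANGE"
                        xdb:SQLInline="false"
                        xdb:defaultTable="TPF_MASTER_D_T_RANGE_TAB">
              <xs:complexType>
                <xs:sequence>
                  <xs:element name="From" type="xs:string"/>
                  <xs:element name="To" type="xs:string"/>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>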

Maybe you are looking for

  • 1 week with my X100e

    I was compelled to write my own review after reading some bad ones out there (mostly about the MV-40)... My Specs: AMD Turion Neo X2 Dual Core (L625) 4GB RAM 160GB intel X25-M SSD Bluetooth Windows 7 Pro x64 -plus all other standard equipment. The PR

  • How to use property file - sql query define in property file

    Hi All, Anybody please tell me how to use property file. I have placed sql query in propery file and I have to access this in my file. well so far this is my code but don't know how to implement in the following ... pstmt = con.prepareStatement("sele

  • How do I edit image viewer slideshow created in CS3, in CS4?

    I apologize for posting this twice...once in a previous post and once here. The reason being...the other post was old (2008) and I'm afraid it might get overlooked. So I thought I'd create a new discussion. I created a bunch  of slideshows in CS3 usi

  • Creating a sales order document

    Hi, When I create a sales order document in one company I want to automatically create a purchase order document (in the same company), and when this one is created I want to create a sales order document (in another company) with the same data as th

  • 24" LCD AND 19" CRT. Any problem?

    Hi, My current set up is two dual 19" CRTs and the ATI 9800 256MB card. I want to move up to a widescreen LCD, but am aware of a couple limitations with a 24" LCD. These being: 1. You have more screen real estate with two 19" than with one 24". 2. CR