ACE, max conns limit and oversubscription issue

Hi,
I have a question regarding the following output:
show serverfarm SFARM detail 
                                                ----------connections-----------
       real                  weight state        current    total      failures
   ---+---------------------+------+------------+----------+----------+---------
   rserver: REAL_1
       10.0.0.1:80           8      MAXCONNS     10435      65590       130
         description          : -
         max-conns            : 10000
         min-conns            : 9950
There is source-IP stickiness configured for the primary serverfarm plus a backup serverfarm (with no sticky). Do you know why we can see more current connections than the max-conns limit?
Could the sticky configuration on the primary serverfarm be causing this?
Regards,
Krzysztof

Hi Krzysztof,
Normally the current connection counter is the number of ESTABLISHED plus EMBRYONIC connections. So as soon as the ACE forwards the SYN, the current counter is incremented; if the connection then establishes, the total connection counter is incremented, otherwise the failure counter is.
Having said that, I still believe it should not show more than the max-conns limit unless max-conns only counts ESTABLISHED connections.
I would suggest opening a TAC case to investigate this further. There have been many issues related to these counters, all of which were cosmetic and had no real impact on the functionality of the device itself.
Regards,
Kanwal
Note: Please mark answers if they are helpful.
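For reference, here is a minimal sketch of the kind of configuration being discussed (the rserver names, the second IP address, and the backup farm are assumptions rather than details from the post; only the conn-limit and sticky lines matter here):

rserver host REAL_1
  ip address 10.0.0.1
  inservice
rserver host REAL_2
  ip address 10.0.0.2
  inservice
serverfarm host SFARM
  rserver REAL_1 80
    conn-limit max 10000 min 9950
    inservice
serverfarm host SFARM_BACKUP
  rserver REAL_2 80
    inservice
sticky ip-netmask 255.255.255.255 address source STICKY_SRC
  serverfarm SFARM backup SFARM_BACKUP

The conn-limit max/min pair is what produces the max-conns/min-conns values and the MAXCONNS state in the show output above; whether embryonic connections count against that limit is exactly the open question, so treat this as context rather than an answer.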

Similar Messages

  • ACE connection limit and remote TCP security scans

    We are currently running remote TCP security scans on our networks and are running into a major problem: while the scans are taking place, the ACE connection resource usage skyrockets and easily reaches the maximum of 4 million connections. This means that anyone can run a simple TCP scan and take down our ACE by maxing out the connection limit. We have the following parameter-map applied to all of our policies, but it does not help to clear the connection count on the ACE in a reasonable amount of time:
    parameter-map type connection CONNECTION_TIMEOUT
      set timeout inactivity 300
      set tcp timeout half-closed 60
    I should note that we do have normalization turned off because it causes far more problems than it's worth (no resolution with TAC). Does anyone have any tips on how to accommodate security scans on networks behind the ACE while not saturating the connection count limit?

    For VIPs, this particular context only has one class C applied to a class-map. Not all IPs are in use, but the ACE creates connections for those as well. I've set the timeout inactivity to 120 seconds and I still see connections from the remote scanning host, destined to the VIPs, idling for well over 45 minutes. Is turning on normalization my only option? I know there are others who have turned off normalization due to performance and connectivity issues, so there must be other ways around this. Thanks for your help.
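    One knob that may be worth checking in this situation (hedged; verify the exact syntax against your ACE software version) is the embryonic timeout in the connection parameter-map, since a scan mostly leaves half-open connections behind. A sketch building on the parameter-map quoted above:
    parameter-map type connection CONNECTION_TIMEOUT
      set timeout inactivity 120
      set tcp timeout embryonic 10
      set tcp timeout half-closed 60
    Whether the embryonic timer is still honored with TCP normalization disabled is something to confirm with TAC before relying on it.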

  • ACE Sticky Connections, Show Conn Output and Show serverfarm

    Hi Community,
    I'm deploying a Cisco ACE module and I have some questions about sticky connections and about the output of the show conn command and show serverfarm command.
    I have the following configuration:
    rserver host srv_1
      ip address 10.4.11.14
      inservice
    rserver host srv_2
      ip address 10.4.11.18
      inservice
    serverfarm host farm_144
      rserver srv_1 144
        weight 1
        inservice
      rserver srv_2 144
        weight 3
        inservice
    sticky ip-netmask 255.255.255.255 address source st_host144
      timeout 10080
      serverfarm farm_144
    class-map match-all vip_144
      2 match virtual-address 10.4.11.208 tcp eq 143
    policy-map type loadbalance first-match lb_144
      class class-default
    policy-map multi-match policy_vip_webcache
      class vip_webcache_144
        loadbalance vip inservice
        loadbalance policy lb_144
        loadbalance vip icmp-reply active
        nat dynamic 411 vlan 411
    We can assume that service policy was applied at the interface vlan. So, let's go to the questions:
    1- If sticky is enabled, should the output of the "show conn" command show just one entry per client IP address?
    The real output is:
    DC01-ACE-01-PRIMARY-SW1/context_servidores# show conn | inc :143
    333046     1  in  TCP   411  10.2.158.87:3616      10.4.11.208:143       ESTAB
    286390     3  in  TCP   411  10.2.158.87:3562      10.4.11.208:143       ESTAB
    310233     1  in  TCP   411  10.1.5.87:3424        10.4.11.208:143       ESTAB
    Note that the IP address 10.2.158.87 is shown twice. At times, the same IP address is shown four times for the same VIP and the same port. Is this normal behavior?
    2- According to the configuration, srv_2 has weight 3 and srv_1 has weight 1, but the output of show serverfarm shows something strange:
    DC01-ACE-01-PRIMARY-SW1/context_servidores# show serverfarm farm_144
    serverfarm     : farm_144, type: HOST
    total rservers : 2
    state          : ACTIVE
    DWS state      : DISABLED
    ---------------------------------
                                                    ----------connections-----------
           real                  weight state        current    total      failures
       ---+---------------------+------+------------+----------+----------+---------
       rserver: srv_1
           10.4.11.14:144        1   OPERATIONAL     11         386        0
       rserver: srv_2
           10.4.11.18:144        3   OPERATIONAL     35         66         0
    We can see the weights are applied, but the total number of connections is higher on srv_1 than on srv_2. Why?
    Can somebody help me understand this behavior better, or confirm whether it is normal?
    Thanks in advance!!

    Hi Gaurav,
    About question 1, I got some information too. It's perfectly normal for a client to open two or more connections at the same time; the client's application is responsible for that. We removed the ACE, connected the client directly to the server, and the total number of connections opened was the same.
    About question 2, I cleared the serverfarm counters and the sticky database, and after that the numbers looked more realistic.
    DC01-ACE-02-SECONDARY-SW1/context_servidores# sh serverfarm farm_webcache_144
    serverfarm     : farm_webcache_144, type: HOST
    total rservers : 2
    state          : ACTIVE
    DWS state      : DISABLED
                                                    ----------connections-----------
           real                  weight state        current    total      failures
       ---+---------------------+------+------------+----------+----------+---------
       rserver: srv_webcache_1
           10.4.11.14:144        1   OPERATIONAL     1025       15499      4436
       rserver: srv_webcache_2
           10.4.11.18:144        2   OPERATIONAL     1794       33471      471
    DC01-ACE-02-SECONDARY-SW1/context_servidores#
    Anyway thank you very much for your feedback.
    Plínio Monteiro
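    A side note on question 1 (a suggested check, not something from the original thread): stickiness itself is easier to confirm from the sticky table than from show conn, since multiple simultaneous connections from one client are expected. For example:
    show sticky database
    As long as all entries for a given source IP point at the same rserver, stickiness is working even though show conn lists several connections for that client.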

  • Dreamweaver CS3 (9) maxed CPU slow performance and FTP issues.

    Never mind... I found it.
    Of course I have the firewall, worm, scanning, phishing filtering, full-time protection, and everything else turned off when I am working on a remote/testing site. However, I found that the Symantec antivirus email scan being enabled (doesn't it just interact with my email program?) was getting in the way of DW9 FTP operations, causing some lagging that was putting DW9 into an unstable mode during and immediately after (10 seconds) FTP operations.
    The CPU still spikes to 99% during FTP functions and DW9 works, even though it's likely taxed to the max, but when I turn off email scan DW9 returns to a functioning state during FTP and remote file view. Whatever it takes, huh?
    PS: email scanning never affected DW8.02.
    My system info: properly configured (as in: everything works or I fix it) Toshiba – XP Pro SP2 – Pentium 4 – 2.3 GHz – 1.5 GB RAM – Intel Extreme graphics card with 64 MB RAM – Apache 2.2 server – PHP 5 – MySQL 5 – phpMyAdmin – Dreamweaver 8.02 – InterAKT Kollection 3.7.1 – Dreamweaver 9 – DevToolbox – Flash – GoLive – Photoshop – ImageReady – Illustrator – Acrobat – CuteFTP – PuTTY SSH – blabla...

    Thanks for the response. I managed to Google the 'nobody' account and found the explanation.
    As you say, I assumed the 'syslogd' process had to do with system logs. I did check the console and couldn't find anything of particular interest in the logs.
    I'll continue to monitor but if anyone has any further suggestions please post.
    One thing that might also have an impact... my available drive space is now down to less than 5gb (runs anywhere between 1gb and 5gb free depending on what I'm running). I know this causes problems for Parallels as it keeps moaning at me, but would it cause any problems with OSX?
    Thanks again,
    Tom

  • ACE: conn-limit by source?

    Is it possible to limit the number of concurrent connections to a set number per source IP?

    No, unless you know the IP address you want to limit.
    In that case, you can match that traffic with a class-map and use a separate serverfarm for each IP, where you can specify a conn-limit.
    Gilles.
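    A rough sketch of what Gilles describes, assuming the client IP is known in advance (all names and addresses are placeholders, the rserver host definitions are omitted, and the class-map type should match your VIP's protocol; verify the syntax against your ACE release):
    class-map type http loadbalance match-all CM_HEAVY_CLIENT
      2 match source-address 192.0.2.10 255.255.255.255
    serverfarm host SF_LIMITED
      rserver REAL_1 80
        conn-limit max 100 min 90
        inservice
    policy-map type loadbalance first-match LB_POLICY
      class CM_HEAVY_CLIENT
        serverfarm SF_LIMITED
      class class-default
        serverfarm SF_NORMAL
    The conn-limit applies to the rserver within that dedicated serverfarm rather than to the source itself, which is why this only approximates a true per-source limit.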

  • SGA Max Size limit?

    Hi,
    I have a Fujitsu mid-range server with 16 GB RAM, 64-bit Windows Server 2003, and a 10g R2 database installed; currently my SGA size is 4 GB.
    What is the SGA max size limit?
    One of my reports runs in 24 seconds. Will this be solved by increasing the SGA size to 10-12 GB?

    Yes.
    You can also go for 10046 event tracing:
    ACCEPT sid PROMPT 'Enter SID: '
    ACCEPT serial PROMPT 'Enter SERIAL#: '
    ACCEPT action PROMPT 'Enter TRUE or FALSE: '
    EXEC sys.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(&sid, &serial, &action);
    prompt Trace &action for &sid,&serial
    REM level 12 = binds + waits; the last argument is an empty string
    exec DBMS_SYSTEM.SET_EV(&sid, &serial, 10046, 12, '');
    Then you can check the resulting trace (dump) file and see which events take the most time.
    For example, the trace content could look like:
    =====================
    PARSING IN CURSOR #6 len=107 dep=1 uid=44 oct=6 lid=44 tim=1621758552415 hv=3988607735 ad='902c07a8'
    UPDATE rn_lu_lastname_loca set entr_loca_id_plz14 = translate(entr_loca_id_plz14,'_','-') where rowid = :b1
    END OF STMT
    PARSE #6:c=0,e=981,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=0,tim=1621758552403
    BINDS #6:
    bind 0: dty=1 mxl=32(18) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
    bfp=10331d748 bln=32 avl=18 flg=09
    value="AAAHINAATAAAwTTABV"
    WAIT #6: nam='db file sequential read' ela= 12170 p1=6 p2=197843 p3=1
    WAIT #6: nam='db file sequential read' ela= 8051 p1=14 p2=261084 p3=1
    WAIT #6: nam='db file sequential read' ela= 7165 p1=19 p2=147722 p3=1
    WAIT #6: nam='db file sequential read' ela= 9604 p1=19 p2=133999 p3=1
    WAIT #6: nam='db file sequential read' ela= 6381 p1=19 p2=133801 p3=1
    EXEC #6:c=10000,e=45750,p=5,cr=1,cu=10,mis=0,r=1,dep=1,og=4,tim=1621758598343
    FETCH #5:c=0,e=357,p=0,cr=5,cu=0,mis=0,r=0,dep=1,og=4,tim=1621758598896
    EXEC #1:c=30000,e=116691,p=36,cr=35,cu=10,mis=0,r=1,dep=0,og=4,tim=1621758599043
    WAIT #1: nam='SQL*Net message to client' ela= 5 p1=1413697536 p2=1 p3=0
    WAIT #1: nam='SQL*Net message from client' ela= 2283 p1=1413697536 p2=1 p3=0
    Lines that start with PARSING IN CURSOR or WAIT
    len Length of SQL statement.
    dep Recursive depth of the cursor.
    uid Schema user id of parsing user.
    oct Oracle command type.
    lid Privilege user id.
    ela Elapsed time. 8i: in 1/1000th of a second, 9i: 1/1'000'000th of a second 
    tim Timestamp. Pre-Oracle9i, the times recorded by Oracle only have a resolution of 1/100th of a second (10mS). As of Oracle9i some times are available to microsecond accuracy (1/1,000,000th of a second). The timestamp can be used to determine times between points in the trace file. The value is the value in v$timer when the line was written. If there are TIMESTAMPS in the file you can use the difference between 'tim' values to determine an absolute time. 
    hv Hash id.
    ad SQLTEXT address (see v$sqlarea and v$sqltext).
    Lines that start with PARSE, EXEC or FETCH
    #n  n = number of cursor 
    c  cpu time 
    e  elapsed time 
    p  physical reads 
    cr  consistent reads 
    cu  current mode reads 
    mis miss in cache (?) 
    r  rows processed 
    dep recursive depth 
    og  optimizer goal 
    tim  time
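    If the waits do point at buffer-cache I/O, the SGA parameters themselves can be inspected and raised roughly like this on 10g R2 (a sketch; the 8G/10G figures are placeholders, and on a 16 GB host you still need to leave room for the PGA and the OS):
    SQL> show parameter sga_max_size
    SQL> show parameter sga_target
    SQL> alter system set sga_max_size = 10G scope=spfile;
    SQL> alter system set sga_target = 8G scope=spfile;
    The larger sga_max_size takes effect only after an instance restart, and sga_target can afterwards be adjusted dynamically up to that ceiling. Whether a bigger SGA actually helps the 24-second report depends on whether the trace shows time spent on physical reads at all.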

  • 780 SLI - "Voltage Limit" and "Utilization Limit" Flags

    Hello,
    I'm trying to pinpoint an issue I'm having with my video cards, and wanted additional advice before I make purchases. My questions will be: is there anything else I should try? Is there anything I can do to pinpoint the problem?
    My setup:
    Core i7-950 - OC'd to 3.84 ghz
    ASUS P6X58D-E - mobo
    2x MSI n780 TF in SLI
    Corsair 850AX Power Supply
    Also drawing power: 5x Hard Drives; Typically only 2 are running at one time (OS RAID), the other 2 are a backup RAID I only use to archive stuff, and the last is a Linux drive that Windows doesn't even see
    Issue:
    Most games run without issue, but Metro Last Light is unstable. I run at 1920x1080, Advanced PhysX, 2x AA, Very High quality and Very High Tessellation. There are times when I get 60 FPS (V-Sync); however, there are times when the performance drops to 6 or 14 FPS. When it is at 14 FPS it is still playable, but 6 FPS is terrible. I use Afterburner's OSD to monitor GPU usage, temperature, and FPS. When it is at 6 FPS, it is only using 12% of either GPU at most.
    While investigating, I noticed a few things in Afterburner that stand out when this occurs.
    1) The Core Clock drops down (throttling?)
    2) The "Voltage Limit" and "Utilization Limit" flags are both set on both GPUs.
    My thoughts:
    I'm thinking that my Power Supply isn't quite big enough. I thought this may become an issue when I first bought the two cards (I was previously running 2 460's on SLI, then upgraded to 780s).
    My question is if the video cards are pulling the power supply's limit - will this trip the two flags? I would have thought the Voltage Limit would indicate the voltage going too high; but power supply being strained should drop voltage.
    I have not changed any of the voltage settings to the card - they are whatever MSI has them set to for the stock OC.
    If I buy another PSU, I don't like to go cheap - so it will definitely be beefy. Before I drop a few hundred on it, I want to be sure the Power Supply is the issue. Is there anything else you guys/gals can think of that may be an issue? Any tests I can do to confirm?
    The new Thief seems to run fine; also Kombustor runs without issue (but only runs on one GPU - does not SLI). I have no other stability issues - the PC doesn't randomly shut down or reboot. It just seems to be Metro Last Light - but Metro is also the most demanding game that I own.
    Any assistance or feedback will be greatly appreciated!
    Edit - Update:
    I tried disabling SLI to see if the issue would improve by only loading 1 GPU. No difference at all - no better or worse. Same flag (just only on the one GPU), same exact FPS 6.4.
    I installed GPU-Z to be able to monitor core voltage. It normally runs at about 1.1 volts. However, once the issue occurs it is as low as 0.875 volts. Seems to confirm the PSU may be an issue. Any feedback or additional confirmation is still greatly appreciated!

    Apologize for the bump - I could not see a way to edit the original post. However, I wanted to make sure this information is available in case others had similar issues.
    I found an even better solution. NVIDIA has an updated PhysX driver; you have to download it from the NVIDIA site (i.e. Other NVIDIA drivers, not GeForce). Look for PhysX version 9.13.1220 or later. 9.13.1220 was released on 1/27/2014, but an older version is included in the most recent GeForce drivers for some reason.
    The patch notes for PhysX 9.13.1220 specifically state:
    Changes & fixed issues in this release:
    Fixes a bug that caused Metro Last Light to not be GPU accelerated on some systems.
    With this software update I can enable Advanced PhysX and max settings in Last Light and still run at 60 FPS.

  • 648 max , testing base and extended memory

    I don't see the thread I answered in once, where someone spoke of using DDR400 with a 648 Max or Max-L board and getting the "testing base and extended memory" error from the LEDs. Well, I just got the same error today and my system would not boot. I cut the power to the PSU and turned it back on; it would start to boot and then hang, and I could barely get to the Windows logo. Sometimes the video card wouldn't even put the logo on the screen. This is with an updated motherboard that MSI sent me, which I had just installed; it seemed to run fine at first, then this problem started occurring. The only difference in the setup from the previous, outdated board was the DIMM slot the memory was in (DIMM 3 versus DIMM 1), so I moved the memory back over to DIMM 3 and now the problem is gone. Just something to try for those of you with the same problem.
    Oh, and I forgot one more issue: I cannot set the DRAM speed in the BIOS manually. I have DDR333, so it should be 167 MHz, but when I set it to 4/5 the system will not boot at all and I end up having to reset the CMOS. I called MSI about that and they told me to just leave it set to SPD. I said I know that and I'm fine with leaving it that way, but it should let me set it manually if I wanted to, and I wondered if that had anything to do with the random "testing base and extended memory" problem.

    It was probably my thread...
    http://www.msi.com.tw/program/e_service/forum/viewindex.php?threadid=6204&boardid=10&styleid=1
    Have done some testing with both DDR333 and DDR400 and my results are in the thread.
    Right now I'm using my DDR400 as DDR333, manually set to 167 MHz, using DIMM1, and haven't seen any problems. I'll try DIMM3 to see if I get any changes.
    And by the way, I tried setting my DDR400 manually to 200 MHz and it wouldn't boot at all. After a lot of ugly language I finally got it back to the BIOS... after a reset of the CMOS.
    Seems like you should use SPD if you want to use the correct frequency for your RAM.
    Have you tested the new 1.3 BIOS?

  • How to change Purchase Order currency after Good receipt and Good issue?

    Hi,
    I've a PO created last year. The PO currency has been entered wrongly.
    My store colleague has performed the goods receipt, and the material has been consumed.
    Now I am not able to change the PO currency to the correct one, due to this message:
    Currency can no longer be changed
    Message no. 06489
    Diagnosis
    As a basic rule, the currency cannot be changed if there has already been a goods receipt against a document.
    If the document contains external service and/or limit items, or if an invoicing plan has been assigned to an item, the currency cannot be changed following the receipt of an invoice either.
    If external service items exist, the currency also cannot be changed if services that have actually been performed have already been recorded or if the item in question has been assigned to a preventive maintenance (servicing) plan.
    Is there any way to change the PO currency after goods receipt and goods issue? Thanks!

    Diagnosis
    As a basic rule, the currency cannot be changed if there has already
    been a goods receipt against a document.
    If the document contains external service and/or limit items,
    or if an invoicing plan has been assigned to an item,
    the currency cannot be changed following the receipt of an invoice either.
    If external service items exist, the currency also cannot be changed if
    services that have actually been performed have already been recorded or
    if the item in question has been assigned to a preventive maintenance (servicing) plan.
    The answer is in the question itself: you need to cancel all the documents posted with the wrong currency, in LIFO order, and then create a new purchase order.

  • XML Parser for PL/SQL and related issues

    I need to have further information about some of the following
    issues and XML features and make a determination useful for
    evaluation and recommendation:
    ISSUES
    1) Is there a maximum size for an XML document to provide data
    for PL/SQL(or SQL) across tables, provided that no CLOB are used?
    2) How about from Oracle to an XML document ?
    3) Is there a ratio between XML document size and main memory and
    SGA size. What are Oracle's recommendations /
    4) Can the Oracle Application Server run on a DHCP NT server when
    using XML parsing ? Is it NT Service Pack 3 and 4 compatible ?
    5) How parsers can interact with one another or related tools ?
    For example, how the XML parser for c/c++ could be useful when
    using Pro*C/C++ (programmer 2000) or OCI interfaces ? In other
    words, what is the business logic in using these tools ?

    Anthony D. Noriega (guest) wrote:
    : I need to have further information about some of the following
    : issues and XML features and make a determination useful for
    : evaluation and recommendation:
    : ISSUES
    : 1) Is there a maximum size for an XML document to provide data
    : for PL/SQL(or SQL) across tables, provided that no CLOB are
    used?
    The limit should be what can be inserted into an object view.
    : 2) How about from Oracle to an XML document ?
    The limit should be what can be retrieved from an object view.
    : 3) Is there a ratio between XML document size and main memory
    :and SGA size. What are Oracle's recommendations /
    Not directly due to the relationship between XML metadata and
    data not being constrained.
    : 4) Can the Oracle Application Server run on a DHCP NT server
    : when using XML parsing ?
    If it can run a JavaVM with the correct permissions there are no
    other special requirements.
    :Is it NT Service Pack 3 and 4 compatible ?
    No special requirements here.
    : 5) How parsers can interact with one another or related tools ?
    : For example, how the XML parser for c/c++ could be useful when
    : using Pro*C/C++ (programmer 2000) or OCI interfaces ? In
    other
    : words, what is the business logic in using these tools ?
    Not really sure of your question. The XML components are useful
    in any application where I am processing documents or data with
    an XML structure. The choice to use XML can be based on quite a
    range of requirements due to its declarative syntax and open
    standards. If you give me a specific application, I can perhaps
    be more helpful.
    Oracle XML Team
    http://technet.oracle.com
    Oracle Technology Network

  • How to Determine Safe Max Memory Limit?

    From what I understand, the amount of memory available to your AIR game on a device will vary depending on how many other apps the user is running, the type of device, etc. 
    So with this in mind, how does one define a safe max memory limit for a given device? What percentage of a device's total RAM can we assume we have access to? 
    Here are the iOS RAM Specs
    iPod (4th Gen): 128 MB
    iPad 1: 256 MB
    iPad 2, iPod Touch (5th Gen): 512 MB
    iPad 3: 1 GB
    I already have highly optimized texture atlases using PVRTC ATF for best use of memory, but if you've ever worked with a passionate creative department, you'll know they always want more. I need to be able to tell them exactly how many spritesheets they have to work with for a given device - how would I do this? 
    NOTE: iPhones/iPod Touches/iPads all have a Unified Memory Architecture which mean that both the CPU and GPU share system memory, which means there is no dedicated GPU GRAM on these devices. So the RAM listed above, I assume, is shared by both the game logic and the GPU.

    Update, I've found an Objective C post on Stack Overflow that answers this question
    Here are the (rough) max memory limits to expect before an out-of-memory crash will occur:
    iPad1: 127MB/256MB (crash amount/total amount)
    iPad2: 275MB/512MB
    iPad3: 645MB/1024MB
    iPhone4: 325MB/512MB
    Note that max memory limit will vary depending on how many apps a user is running, but these figures are a good rough guide. See the Stack Overflow post for more details.
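    Working the arithmetic on those figures, the crash point lands at roughly 50-65% of the device's total RAM (127/256 ≈ 50%, 275/512 ≈ 54%, 325/512 ≈ 63%, 645/1024 ≈ 63%), so budgeting around half of total RAM is the conservative reading when deciding how many spritesheets a given device can carry.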

  • What happens when max job limit of few RAS server is reached?

    Hi, I would like to know what happens when the max job limit of a server is reached. I know that we need to increase the job limit, but my question is a bit different.
    We have 7 RAS servers. I got the message "The maximum report processing jobs limit configured by your system administrator has been reached" on 4 RAS servers, but not on all 7. I am tracking the servers using PIDs on Windows Server 2003.
    Does this mean it is a warning that the job limit of 75 (in our case) has been reached on those 4 servers, while the remaining 3 servers have not yet reached it and can still handle more report requests?
    OR
    Does it mean that all of the servers have reached their max limit and no more requests can be accepted?
    We are doing 3000-user performance testing on 21 RAS servers.
    We are on BOXI R2 SP3 on IIS, opening Crystal Reports using the .NET SDK.
    Thanks,

    This is the same as my other question, so I am closing this one too. We set the limit to unlimited, and during peak load the reports were almost evenly balanced across all the RAS servers.

  • FOIP max-conn not working

    max-conn for FoIP is not working properly. Even after I set max-conn to 1, it is possible to make 2 simultaneous faxes through the specific dial-peer.
    Also, does anyone know how to change the FoIP TIFF attachment name (Cisco_fax.tif) to another name?

    This command applies to off-ramp store-and-forward fax functions. Verify whether you are using on-ramp or off-ramp.
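    For reference, a hedged sketch of where max-conn sits in a dial-peer (the tag, destination pattern, and session target are placeholders); as noted above, whether it limits fax calls depends on whether the on-ramp or off-ramp store-and-forward path is in use:
    dial-peer voice 100 voip
     destination-pattern 5551234
     session target ipv4:192.0.2.10
     max-conn 1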

  • Solaris-x86 mount fat32 partition, the partition max size limit?

    Solaris 10 x86 on a laptop, with a 10 GB FAT32 partition used to exchange data between Windows and Solaris x86.
    The FAT32 partition mounts normally and can be read fine,
    but files written from Solaris x86 cannot be found from Windows.
    Does anybody know whether there is a maximum partition size limit when Solaris x86 mounts a FAT32 partition? Or, if there is no limit, why does this problem occur?

    Mounting Windows partition in Solaris
    The easiest way to share data now is to do it through a FAT32 partition. Solaris
    recognises it as partition of type pcfs. It is specified as device:drive where drive is
    either the DOS logical drive letter (c through z) or a drive number (1 through 24).
    Drive letter c is equivalent to drive number 1 and represents the Primary DOS partition
    on the disk; drive letters d through z are equivalent to drive numbers 2 through 24,
    and represent DOS drives within the Extended DOS partition. The syntax is
    mount -F pcfs device:drive /directory-name
    where directory-name specifies the location where the file system is mounted.
    To mount the first logical drive (d:) in the Extended DOS partition from an IDE hard
    disk in the directory /d use
    mount -F pcfs /dev/dsk/c0d0p0:d /d
    You can then use "mount directory-name" after adding the following line to the
    /etc/vfstab file:
    device:drive - directory-name pcfs - no rw
    for example
    /dev/dsk/c0d0p0:c - /c pcfs - no rw
    If your windows partition like the following means
    C: - NTFS, D:-FAT32, E:-NTFS, F:-FAT32
    Then you can only mount D, F not C & E.
    Mounting D Drive:
    mount -F pcfs /dev/dsk/c0d0p0:c /mountpoint
    Mounting F Drive
    mount -F pcfs /dev/dsk/c0d0p0:d /mountpoint
    The drive letters apply only to FAT partitions, not to other file systems (NTFS or any Linux file systems).

  • Customer Credit limit and Exposure

    Dear Experts,
    I have an issue regarding customer credit limit and exposure in F.31.
    My customer has a security deposit, and it is included in the customer's credit exposure.
    My question is: is it correct for the customer credit exposure to include the special G/L indicator (H) security deposit?
    If not, how can it be excluded from the balance?
    It's urgent, so any help is appreciated.
    Regards,
    Javeed

    Hi,
    Thanks for the reply.
    I have tried FBKP and removed the check box for credit limit,
    but there is no immediate effect in F.31; it still shows the same balance as before.
    When we post a new security deposit after this change, its balance is not reflected in F.31,
    but we have already posted a lot of security deposit entries for customers.
    What do I have to do to get an immediate effect on the balance for the existing security deposits?
    Thanks,
    Javeed
