Difference between field status group for posting key and GL account

Hi all,
Can anyone tell me the difference in usage between the field status group of the posting key and that of the GL account, as I notice the fields are the same? During data entry, does the system check both field statuses, or how does it work?
thanks.

Hi
Both control the field status of the line item.
However, the field status group (FSG) of the posting key and the FSG of the GL account should not clash, as in the cases below.
Take the 'Assignment' field as an example:
Posting key FS - Suppress & GL FS - Required    - gives an error message at the time of posting
Posting key FS - Required & GL FS - Suppress    - gives an error message at the time of posting
Other than the above, all other combinations work.
VVR

Similar Messages

  • Huge performance differences between a map listener for a key and filter

    Hi all,
    I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case the data is time-stamped, so the full key of a piece of data is the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache I only know the type id, not the time stamp, so I cannot add a listener for a key but only for a filter which tests the value of the type id. When I launch my test I get terrible performance results; I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results with a Dual Core of 2.13 GHz
    1) Map Listener for a Filter
    a) No Index
    Create (data always added; the key is composed of the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated; the key is composed only of the type id)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
    Please help me to find what is wrong with my code because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;
    public class TestFilter {
         /**
          * To run a specific test, launch the program with one parameter which
          * is the test index (1-10), or "all" to run every test.
          */
         public static void main(String[] args) {
              if (args.length != 1) {
                   System.out.println("Usage : java TestFilter 1-10|all");
                   System.exit(1);
              }
              final String arg = args[0];
              if (arg.endsWith("all")) {
                   for (int i = 1; i <= 10; i++) {
                        test(i);
                   }
              } else {
                   final int testIndex = Integer.parseInt(args[0]);
                   if (testIndex < 1 || testIndex > 10) {
                        System.out.println("Usage : java TestFilter 1-10|all");
                        System.exit(1);
                   }
                   test(testIndex);
              }
         }
         @SuppressWarnings("unchecked")
         private static void test(int testIndex) {
              final NamedCache cache = CacheFactory.getCache("test-cache");
              final int totalObjects = 2000;
              final int totalTries = 40;
              if (testIndex >= 5 && testIndex <= 8) {
                   // Add index
                   cache.addIndex(new ReflectionExtractor("getKey"), false, null);
              }
              // Add listeners
              for (int i = 0; i < totalObjects; i++) {
                   final MapListener listener = new SimpleMapListener();
                   if (testIndex < 9) {
                        // Listen to data with a given filter
                        final Filter filter = new EqualsFilter("getKey", i);
                        cache.addMapListener(listener, new MapEventFilter(filter), false);
                   } else {
                        // Listen to data with a given key
                        cache.addMapListener(listener, new TestObjectSimple(i), false);
                   }
              }
              // Load data
              long time = System.currentTimeMillis();
              for (int iTry = 0; iTry < totalTries; iTry++) {
                   final long currentTime = System.currentTimeMillis();
                   final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                   for (int i = 0; i < totalObjects; i++) {
                        final Object obj;
                        if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                             // Create data with a key that includes the time stamp
                             obj = new TestObjectComplete(i, currentTime);
                        } else {
                             // Create data with a key that has no time stamp
                             obj = new TestObjectSimple(i);
                        }
                        if ((testIndex & 1) == 1) {
                             // Odd tests: load data directly into the cache
                             cache.put(obj, obj);
                        } else {
                             // Even tests: load data into a buffer first
                             buffer.put(obj, obj);
                        }
                   }
                   if (!buffer.isEmpty()) {
                        cache.putAll(buffer);
                   }
              }
              time = System.currentTimeMillis() - time;
              System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg " + (time / totalTries)
                        + ", Total Tries " + totalTries + ", Cache Size " + cache.size());
              cache.destroy();
         }
         // No-op listener: only the cost of generating and delivering events is measured.
         public static class SimpleMapListener implements MapListener {
              public void entryDeleted(MapEvent evt) {}
              public void entryInserted(MapEvent evt) {}
              public void entryUpdated(MapEvent evt) {}
         }
         // Key class containing both the type id and the time stamp.
         public static class TestObjectComplete implements ExternalizableLite {
              private static final long serialVersionUID = -400722070328560360L;
              private int key;
              private long time;
              public TestObjectComplete() {}
              public TestObjectComplete(int key, long time) {
                   this.key = key;
                   this.time = time;
              }
              public int getKey() {
                   return key;
              }
              public void readExternal(DataInput in) throws IOException {
                   this.key = in.readInt();
                   this.time = in.readLong();
              }
              public void writeExternal(DataOutput out) throws IOException {
                   out.writeInt(key);
                   out.writeLong(time);
              }
         }
         // Key class containing only the type id.
         public static class TestObjectSimple implements ExternalizableLite {
              private static final long serialVersionUID = 6154040491849669837L;
              private int key;
              public TestObjectSimple() {}
              public TestObjectSimple(int key) {
                   this.key = key;
              }
              public int getKey() {
                   return key;
              }
              public void readExternal(DataInput in) throws IOException {
                   this.key = in.readInt();
              }
              public void writeExternal(DataOutput out) throws IOException {
                   out.writeInt(key);
              }
              public int hashCode() {
                   return key;
              }
              public boolean equals(Object o) {
                   return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
              }
         }
    }
    Here is my Coherence config file:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
     </cache-config>

    > Hi Robert,
    >> Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent.
    > In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).
    I believe that when the MapEventFilter delegates to your filter, it always passes a value object (old or new) to your filter, meaning a value will be deserialized.
    If you instead used your own filter, you could avoid deserializing the value, which usually is much larger, and touch only the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
    >> The hashCode() and equals() do not matter on the filter class.
    > I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
    That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
    If the second, then I guess the listeners are stored in a hash-based map of collections keyed by the filter, and indeed that might be relevant, as in that case it will cause fewer passes over the filter for multiple listeners registered with an equal filter.
    >> DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers of small absolute value consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number...
    > I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
    >> Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean and add your class to the classname cache configuration in ExternalizableHelper.xml, then instead of the class name only an int will be written. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object it might or might not be so relevant, considering that it will probably contain many more attributes. However, in case of a lite event, the value is not transferred at all.
    > I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it is slower to get them. Still, it is very interesting to keep in mind if we would like to reduce the network traffic.
    Yes, these are minor differences at the moment.
    As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your own usual ExternalizableLite implementation. That way you get the advantages of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
    Also, sooner or later the TCMP protocol and the distributed cache storages will also support using PortableObject as a transmission format, which enables using your own classname resolution and allows you to omit the class name from your objects. Unfortunately, I don't know when it will be implemented.
    > But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.
    I would still recommend using a separate key class, using a custom filter which accesses only the key and not the value, and if possible registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
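    For illustration only (this sketch is not from the original thread): a minimal key-only filter, assuming the Coherence 3.3 Filter, MapEvent and ExternalizableLite APIs and the imports used in the TestFilter class above, and reusing the poster's TestObjectComplete key class. The class name KeyTypeFilter and the field typeId are made up for this example.
    // Hypothetical sketch: registered directly (without a MapEventFilter wrapper),
    // the filter is evaluated against the MapEvent itself, as observed earlier in
    // this thread, so only the key is inspected and the (possibly much larger)
    // cached value is never deserialized.
    public static class KeyTypeFilter implements Filter, ExternalizableLite {
         private int typeId;
         public KeyTypeFilter() {}
         public KeyTypeFilter(int typeId) {
              this.typeId = typeId;
         }
         public boolean evaluate(Object o) {
              final MapEvent evt = (MapEvent) o;
              final Object key = evt.getKey();
              return key instanceof TestObjectComplete
                        && ((TestObjectComplete) key).getKey() == typeId;
         }
         // equals()/hashCode() so that listeners registered with equal filters can be
         // grouped together, as discussed above.
         public boolean equals(Object o) {
              return o instanceof KeyTypeFilter && typeId == ((KeyTypeFilter) o).typeId;
         }
         public int hashCode() {
              return typeId;
         }
         public void readExternal(DataInput in) throws IOException {
              typeId = in.readInt();
         }
         public void writeExternal(DataOutput out) throws IOException {
              out.writeInt(typeId);
         }
    }
    It would then be registered as a lite listener, for example: cache.addMapListener(listener, new KeyTypeFilter(i), true);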
    Best regards,
    Robert

  • Field Status Groups Vs  Posting keys Field Status Groups

    Hi, Seniors,
    In my work I have never set the posting key field status group fields as required; in some cases I only used the Special G/L transaction indicator, and apart from that I kept all the fields as optional. My doubt is how these fields can be used with independent priority. If they are not important, why don't we use only the field status group fields like G001, G002, ...? Why should we depend on the posting key field status group fields?
    Thanks in Advance
    Sarath babu

    Hi Sarath,
    If you make any changes in the FSG, they will apply to all G/L accounts which use the same FSG. But if you change a posting key's FSG, it will apply only to that particular posting key and won't affect the rest of the G/L accounts. However, you should take care of the following:
    I. A field which is suppressed at the posting key level shouldn't be required in the
       field status group specified in the General Ledger account.
    II. A field which is suppressed in the field status group under the General Ledger
       account shouldn't be specified as required at the posting key level.
    III. Posting keys are defined per account type.
    Rams.N
    If this is helpful, assign me points.

  • How to find out the table for Posting Key and A/R & A/P Document types

    hi folks,
    Can you let me know which table is used for the posting key, and also which tables are used for Accounts Receivable and Accounts Payable?
    Thank you in advance.

    To find the table of a posting key, select any posting key in OB41, press F1, then click on the hammer icon; the table used will be displayed. For posting keys, TBSL is the table used.
    Posting Key TBSL
    Customer Master (General)  KNA1
    Customer Master (Company Code) KNB1
    Vendor Master (General) LFA1
    Vendor Master (Company Code) LFB1
    Satish
    (please close the thread if you are satisfied with the answer)

  • Changing field status group for multiple accounts

    Hi,
    Is it possible to change the FSG for multiple accounts at the same time, or does it have to be done one by one through FS00?
    Thanks and Regards

    Hi Sam,
    Try this t-code :
    OB_GLACC12
    Hope it is useful.

  • Difference between terms of Payment for Installment Plan and Installment Plan

    Dear All,
    What is the functional difference between Terms of Payment for an Installment Plan and the Installment Plan itself? Please share your thoughts.
    thanks in advance

    Hi Vinay,
    Terms of payment: this defines when the vendor will pay the amount to the party.
    Installment payment: this defines the number of terms (installments) in which the vendor will pay.
    Functionally, we can use T-codes OBB8 and OBB9 for both.
    I hope you can understand what I mean.
    Regards
    Surya

  • Posting Key and Document type

    Hello,
    I have just detected an inconsistency in one of my SAP environments (Development). I have an invoice posted from the SD module; the document type of this invoice is RV.
    If there is a reason to cancel this invoice, the transaction used is VF11. The document type in SD for this cancellation is S1. Thereafter an FI document is generated with document type AB.
    The settings in Production, Acceptance and Development are the same.
    I have realised that when the initial document is cancelled (reversed), the FI document has document type RV and posting key 11, while this differs from what has been customized for posting keys and document types. Below are my current settings:
    Posting key 01 has reversal posting key 12, and document type RV has reverse document type AB. This is the customizing maintained in SPRO.
    Transactions: OB41 and OBA7
    Can anyone help on this....
    Thanks
    Elvis

    Hello,
    Did you verify whether, for document type AB in OBA7, the Customer checkbox is selected under "Account types allowed"?
    If not, maybe that is why it is not selecting AB as the reversal document type for RV, even though AB is set up as the reverse document type of document type RV.
    Other than this (if the above doesn't solve your issue), I would say there could be a user exit changing the document type according to some specific business rule of your SAP client, or even a substitution.
    I hope it helps you.
    Regards,
    Daniel.

  • Difference between Field symbols and field group

    Hi experts,
    Can you please advise me on the difference between field symbols and field groups?
    Thanks in advance,
    Logu.

    Field symbols are placeholders or symbolic names for other fields. They do not physically reserve space for a field, but point to its contents. A field symbol can point to any data object. The data object to which a field symbol points is assigned to it after it has been declared in the program.
    Whenever you address a field symbol in a program, you are addressing the field that is assigned to the field symbol. After a successful assignment, there is no difference in ABAP between referencing the field symbol and referencing the field itself. You must assign a field to each field symbol before you can address the latter in programs.
    Field Groups:
    A field group is a user-defined grouping of characteristics and basic key figures from the EC-EIS or EC-BP field catalog.
    Use
    The field catalog contains the fields that are used in the aspects. As the number of fields grows, the field catalog becomes very large and unclear. To simplify maintenance of the aspects, you can group fields in a field group. You can group the fields as you wish, for example, by subject area or responsibility area. A field may be included in several field groups.
    When maintaining the data structure of an aspect, you can select the field group that contains the relevant characteristics and basic key figures. This way you limit the number of fields offered.
    A field group combines several existing fields together under one name, e.g.:
    FIELD-GROUPS: fg.
    You can then use one INSERT statement to assign fields to the field group:
    INSERT f1 f2 ... INTO fg.
    Field symbols
    If you have experience with C, think of this as similar to a pointer.
    It is used to reference another variable dynamically, so the field symbol simply points to some other variable, and this pointer can be changed at runtime.
    FIELD-SYMBOLS <fs>.
    DATA field VALUE 'X'.     " single-character field with initial value 'X'
    ASSIGN field TO <fs>.     " <fs> now points to field
    WRITE <fs>.               " prints 'X'
    Example:
    " Assumes tb_sip holds a ';'-separated string and tb_detsip is a flat
    " structure (with header line) whose components are filled in order
    " from the split values.
    DATA: BEGIN OF sptab OCCURS 0,
            line(1000),           " or TYPE string
          END OF sptab.
    DATA: idx LIKE sy-index.
    FIELD-SYMBOLS <fs1>.
    SPLIT tb_sip AT ';' INTO TABLE sptab.
    LOOP AT sptab.
      idx = idx + 1.
      " Point <fs1> at the idx-th component of the target structure
      ASSIGN COMPONENT idx OF STRUCTURE tb_detsip TO <fs1>.
      IF sy-subrc = 0.
        <fs1> = sptab-line.
      ENDIF.
    ENDLOOP.
    APPEND tb_detsip.
    CLEAR idx.
    Field Groups / Extracts
    http://help.sap.com/saphelp_46c/helpdata/EN/9f/db9ede35c111d1829f0000e829fbfe/frameset.htm
    Field Symbols
    http://help.sap.com/saphelp_46c/helpdata/EN/fc/eb387a358411d1829f0000e829fbfe/frameset.htm
    Reward points if useful.

  • Difference between Field symbols and Field groups

    Hi Friends,
    Can you tell me the differences between field symbols and field groups, preferably with examples?
    Regards
    Dinesh

    Hi Dinesh,
    A field group combines several existing fields together under one name, e.g.:
    FIELD-GROUPS: fg.
    You can then use one INSERT statement to assign fields to the field group:
    INSERT f1 f2 ... INTO fg.
    Field symbols
    If you have experience with C, think of this as similar to a pointer.
    It is used to reference another variable dynamically, so the field symbol simply points to some other variable, and this pointer can be changed at runtime.
    FIELD-SYMBOLS <fs>.
    DATA field VALUE 'X'.     " single-character field with initial value 'X'
    ASSIGN field TO <fs>.     " <fs> now points to field
    WRITE <fs>.               " prints 'X'
    Field symbols are placeholders or symbolic names for other fields. They do not physically reserve space for a field, but point to its contents. A field symbol can point to any data object. The data object to which a field symbol points is assigned to it after it has been declared in the program.
    Whenever you address a field symbol in a program, you are addressing the field that is assigned to the field symbol. After a successful assignment, there is no difference in ABAP between referencing the field symbol and referencing the field itself. You must assign a field to each field symbol before you can address the latter in programs.
    Field Groups:
    A field group is a user-defined grouping of characteristics and basic key figures from the EC-EIS or EC-BP field catalog.
    Use
    The field catalog contains the fields that are used in the aspects. As the number of fields grows, the field catalog becomes very large and unclear. To simplify maintenance of the aspects, you can group fields in a field group. You can group the fields as you wish, for example, by subject area or responsibility area. A field may be included in several field groups.
    When maintaining the data structure of an aspect, you can select the field group that contains the relevant characteristics and basic key figures. This way you limit the number of fields offered.
    Field Groups / Extracts
    http://help.sap.com/saphelp_46c/helpdata/EN/9f/db9ede35c111d1829f0000e829fbfe/frameset.htm
    Field Symbols
    http://help.sap.com/saphelp_46c/helpdata/EN/fc/eb387a358411d1829f0000e829fbfe/frameset.htm
    Reward points if helpful.
    Regards,
    Hemant

  • Profit center field for posting key 06

    Hi!
    I have configured the profit centre as an optional field in both the GL account field status group and the posting key field status group.
    But for posting key 06, the system is not showing the profit centre in transaction FB01, F-28 or F-02.
    I also checked the coding block; for other posting keys the system shows the profit centre, but not for this one.
    If I have to change the coding block, how do I assign the coding block to transaction codes?
    regs,
    ramesh

    There can't be a profit center on sub-ledger (customer/vendor) items.  Check out the following thread:
    Profit Center for Vendor/Customer line item in F-02
    In New GL, you specify the profit center on the GL line items in the entry view, and then, via the document splitting characteristic settings, you can see the profit center on the receivables/payables reconciliation account in the general ledger view.

  • Assignment field is not getting populated in KSB1 for posting key 50

    Hi All,
    We are uploading entries into SAP through Excel.
    For posting key 50, the assignment field is not getting populated in KSB1.
    But for posting key 40 it is populated correctly.
    All settings (FSG, posting key details) are the same for both keys.
    No substitution is defined in SAP.
    There is no issue in FBL3N for either posting key; the issue is only with KSB1.
    Any idea what could be the reason for this?
    Thank you.
    Akash

    KSB1 is a cost center report.
    If it is revenue, it will normally be treated as a statistical posting in cost center accounting, when the cost element is defined as a revenue element. Check the cost elements: whether their category is cost/cost-reducing or revenue.
    Possibly this is one of the reasons.
    Check and confirm.

  • Profit Center as mandatory field for Posting keys 01, 11, 31, etc.

    Hello,
    Currently I am in a project in which we have implemented New GL. My client is asking me to activate the Profit Center field for posting keys 01, 11, 21 and 31, so that they can post, for example, documents with posting key 31 (vendor) against posting key 01 (customer). I modified the account field status so that the Profit Center is mandatory, and I modified the field status of posting keys 01, 11, 21 and 31 so that the profit center is mandatory there as well. Regardless of this, the profit center just doesn't appear when I post a document, for example through transaction F-02.
    Does anyone know how to make this field appear for the posting keys I mentioned? Is it possible, or is this a system limitation?
    Thanks in advance for your help.
    Regards,
    HP

    Hello,
    I figured out how to make the Profit Center field modifiable on the customer and vendor positions.
    Thanks to everyone for their replies.
    Regards,
    Paul

  • Payment block is required for posting key 29

    Dear Guru,
    The payment block field is not appearing for posting keys 29 and 39.
    Can anyone tell me how to make it appear for posting keys 29 and 39?
    I have checked in OB41.
    Regards,
    Venkat

    Hi,
    Normally the payment block is not set by posting key.
    You can set this field to be shown via the account group of the vendor or customer.  If you only want the payment block to be a required field for posting key 29, you have to use the suggestion above and use a substitution in FI.

  • Field Status Group for Special G/L Indicators

    Could you define different field status groups for special G/L indicators? My problem is that I am using two different special G/L indicators, but their recon accounts are assigned the same field status group. I used the same posting keys as well.  But when it comes to general posting, they have different screen numbers: one is SAPMF05A 303 and the other is SAPMF05A 304.  How did this happen?

    Hi Noel,
    Since you have defined the field status for two different special G/L indicators, it will create two different screens, as it won't consider the reconciliation account.
    Regards
    andrew

  • Billing: Rules for posting key 01 and acct 621201 set incorrectly for "XREF3"

    Hi Experts
    At the time of billing I am getting the below error, and the billing document is not posting to accounting:
    Rules for posting key 01 and acct 621201 set incorrectly for "XREF3" field
    Message no. F5272
    Diagnosis
    One of the rules specifies that the field demands a required entry, the other rule says that the field is to be suppressed.
    Procedure
    Correct one of the two rules for the field selection.
    You find the field status group in the G/L account master record:
    Execute function
    You can find the rules for the field status group in the Financial Accounting Implementation Guide in the activity Maintain field status  variants.
    You can find the posting key in the Financial Accounting Implementation Guide in the activity
    Define posting key.
    Regards
    Selvi

    Check this thread:
    Re: Posting to GL
    thanks
    G. Lakshmipathi
