Best practices in internal table naming conventions: GT_, GS_, LT_, LS_?

Hi Gurus,
     Are GT_, GS_, LT_, and LS_ the best-practice prefixes in internal table naming conventions?
     I have seen these naming conventions adhered to in standard programs.
     What does each of the following signify:
     GT_, GS_, LT_, LS_?
Regards
Jaman

Hello
I use the following naming conventions:
- G = global variable
- L = local variable
- T = internal table
- S = structure
- D = field
This is how the combinations look:
- GT_ITAB  = global itab
- GS_STRUC = global structure
- GD_FIELD = global field
- LT_ITAB  = local itab
- LS_STRUC = local structure
- LD_FIELD = local field
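A minimal sketch of what declarations following these conventions might look like (SFLIGHT is just an example structure; the names are made up):

```abap
REPORT zdemo_naming.

DATA: gt_itab  TYPE TABLE OF sflight,  " global itab
      gs_struc TYPE sflight,           " global structure
      gd_field TYPE i.                 " global field

FORM local_demo.
  DATA: lt_itab  TYPE TABLE OF sflight,  " local itab
        ls_struc TYPE sflight,           " local structure
        ld_field TYPE i.                 " local field
ENDFORM.
```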
Function module parameters have to stick to the following rules:
- I = importing
- E = exporting
- [C = changing -> never used]
- IT_ITAB  = importing table type (itab)
- IS_STRUC = importing structure
- ID_FIELD = importing field
- ET_ITAB  = exporting table type (itab)
- ES_STRUC = exporting structure
- ED_FIELD = exporting field
Depending on their semantics TABLES parameters look like:
- IT_ITAB = imported data
- ET_ITAB = exported data
- XT_ITAB = changing data (import & export)
Here are the conventions for FORM routine parameters:
- UT_ITAB = using itab (the data is usually treated as constant; changes are not transferred back to the calling program, although that would technically be possible)
- CT_ITAB = changing itab (if it is semantically an exporting itab, then one of the very first statements in the routine is: REFRESH ct_itab.)
- US_STRUC = using structure
- UD_FIELD = using field
- CS_STRUC = changing structure
- CD_FIELD = changing field
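To illustrate, a hypothetical FORM routine using these prefixes (the table type TY_T_SFLIGHT and the routine itself are made up for this sketch):

```abap
TYPES ty_t_sflight TYPE STANDARD TABLE OF sflight WITH DEFAULT KEY.

" ut_*/ud_* parameters are treated as read-only input;
" ct_result is semantically an exporting itab
FORM collect_flights
  USING    ut_flights TYPE ty_t_sflight
           ud_carrid  TYPE s_carr_id
  CHANGING ct_result  TYPE ty_t_sflight.

  DATA ls_flight TYPE sflight.

  REFRESH ct_result.  " first statement, as the convention suggests
  LOOP AT ut_flights INTO ls_flight WHERE carrid = ud_carrid.
    APPEND ls_flight TO ct_result.
  ENDLOOP.
ENDFORM.
```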
Conventions for class/interface parameters:
- IT_ITAB = importing table type
- IS_STRUC = importing structure
- ID_FIELD = importing field
- ET_ITAB = exporting table type
- ES_STRUC = exporting structure
- ED_FIELD = exporting field
- RT_ITAB = returning table type
- RS_STRUC = returning structure
- RD_FIELD = returning field
Conventions for class/interface attributes:
- MT_ITAB = table type
- MS_STRUC = structure
- MD_FIELD = field
- MC_CONST = constant
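As a hypothetical example, a class definition following these attribute and parameter conventions might look like this (class and member names are invented):

```abap
CLASS zcl_flight_buffer DEFINITION.
  PUBLIC SECTION.
    METHODS get_count
      IMPORTING id_carrid       TYPE s_carr_id  " importing field
      RETURNING VALUE(rd_count) TYPE i.         " returning field
  PRIVATE SECTION.
    DATA: mt_flights TYPE STANDARD TABLE OF sflight,  " table attribute
          ms_flight  TYPE sflight,                    " structure attribute
          md_count   TYPE i.                          " field attribute
    CONSTANTS mc_max_rows TYPE i VALUE 100.           " constant
ENDCLASS.
```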
<b>Question</b>: Are there any advantages to such elaborate naming conventions?
My answer to this question is: Yes, definitely.
I believe that the advantage of semantically differentiating TABLES parameters of function modules is quite obvious:
  CALL FUNCTION 'Z_BAD_NAMING'
    TABLES
       itab1 = ...
       itab2 = ...
       itab3 = ... .
  CALL FUNCTION 'Z_GOOD_NAMING'
    TABLES
       it_itab1 = ...
       et_itab2 = ...
       xt_itab3 = ... .
I also believe that my naming conventions clearly enhance the <b>readability</b> and <b>maintainability</b> of my programs.
Regards
  Uwe

Similar Messages

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not that much information available in regards to best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV->QAS->PROD. Presentations from the help.sap.com Web Site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is a SAP note (#895253) on making Redwood highly available. I am open to comments, inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen in a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock) keeping in mind the character length limitation of 30 characters. I also have an associated issue with Event naming given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, then we need to include the environment in the event name. The downside here is that we lose transportability for the job stream. We need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development & quality, and a separate one for production; this avoids confusion).
    Regarding transporting / replicating the object definitions: it is really easy to import and export objects like Events, Job Chains, Scripts, Locks etc. It is also easy and quick to create them afresh in each system; only complicated job chains can be time-consuming to create.
    In normal cases the testing of background jobs mostly happens only in the SAP quality instance, followed by the final scheduling in production. So it is very much possible to just export the verified script / job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server and licensing information on a highly available clustered environment. This is required because Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding the conventions for names, it is recommended to create a centrally accessible naming convention document and then follow it. For example in my company we are using the naming convention for the jobs as Z_AAU_MM_ZCHGSTA2_AU01_LSV where A is for APAC region, AU is for Australia (country), MM is for Materials management and then ZCHGSTA2_AU01_LSV is the free text as provided by batch job requester.
    For other Redwood Cronacle specific objects also you can derive naming conventions based on SAP instances like if you want all the related scripts / job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, this is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood serves to receive monitoring and alerting data and to pass the Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Best practices in Queue table maintenance

    Hi Fellow AQ Users,
    I am looking to hear from the community about best practices in queue table maintenance.
    I have been mining through Metalink for various Oracle recommendations and putting
    together a set of recommendations as a starting point for my DBAs.
    I am looking to answer questions like these --
    How often (in relation to messaging load) would you coalesce and rebuild the indexes?
    How often would you rebuild the table itself to get rid of the high water mark issues ?
    and what procedure would you use to do that?
    Would really love to learn from your experiences in this area. We are using 9.2.0.7
    64 bit DB and have plans to go to 10g over the next year. So, I am looking at 9i related
    stuff and then 10g.
    Thanks
    Vijay

    Hello,
    In general you coalesce once per day ideally during a quiet time to avoid ORA-54 errors as per <Note:271855.1>. Some customers do it more often than that but once per day is a good starting point.
    In terms of shrinking the queue tables, you can use the procedure in <Note:304522.1> with a null 3rd parameter. This is an offline procedure, so you can only run it during a maintenance window. From 10.2 onwards you can dynamically shrink the queue table and IOTs. Again, how often you might need to do this depends on exactly what you are doing with your queue tables.
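    As a sketch, the day-to-day maintenance might look like the following (the queue table and index names are hypothetical; check the notes cited above for the exact objects and procedure in your version):

```sql
-- Coalesce the queue table's IOT index during a quiet period
-- to avoid ORA-00054 "resource busy" errors
ALTER INDEX aq$_my_queue_table_i COALESCE;

-- From 10.2 onwards the queue table can be shrunk online
ALTER TABLE my_queue_table ENABLE ROW MOVEMENT;
ALTER TABLE my_queue_table SHRINK SPACE;
```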
    Thanks
    Peter

  • Sap best practices - payroll international

    hello experts,
    Please forward me the path to the SAP Best Practices for international payroll, or mail me at
    [email protected]
    It is very urgent.
    thanks
    ram

    hello suresh,
    I have opened the path you forwarded. It opens, but I am not able to find the required details in it.
    Can you please help with this?
    ram

  • BW Table naming convention

    Hello,
    I found the following table naming conventions in the manuals for BW systems:
    /BIx/E<InfoCube>  - E Fact table
    /BIx/F<InfoCube>  - F Fact table
    /BIx/D<InfoCube>  - Dimension table
    I have the following questions
    1. Could you please let me know whether all the tables starting with '/BIx/D' will be dimension tables only, or whether there could be other tables?
    2. From the description field in SE11, I could not find the details for some of the tables. Please let me know whether there is any other way to identify the table type (Dimension, Fact, ...).
    3. Also, I would like to know the naming convention for aggregate tables.
    Thanks,
    Aravinthan

    Hi ,
    http://www.learnsapbw.blogspot.com/2008/03/naming-convention-in-sap-bw.html
    Antonio Voce

  • Best Practice for Transport Request Naming

    Hi,
    We are using SolMan 4.0 during implementation of ECC 6.0.
    We have placed the blueprint and we are in configuration phase.
    We have an IMG project created in the DEV system, and it was assigned in the Solution Manager project under System Landscape -> IMG Projects.
    Now that consultants are going to the dev system and customizing, they are creating their transport requests.
    Is there any best practice for the naming convention of the transport requests?
    By creating one IMG project for the entire implementation, is that going to create any problem?
    Please suggest.
    Thanks & Regards
    Mrutyunjay

    As per MSFT best practices (mentioned by Scott), keep it as short as possible. You can use SP for SharePoint-SUBSite.
    Also check this blog for best practices:
    http://www.networkworld.com/community/blog/simple-naming-conventions-improve-end-user-experience-sharepoint-sites
    One more thing you should consider: never use reserved words in SharePoint URLs. You will be able to create the site/list/library/folder, but when you browse to it you get 404 errors.
    Check these blogs:
    http://www.sharepointblog.cz/2012/04/reserved-words-in-sharepoint-url.html
    http://techtrainingnotes.blogspot.com/2012/03/names-you-cant-use-for-sharepoint.html
    Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Best practice of metadata table in data warehouse environment ?

    Hi guru's,
    In datawarehouse, we have 1. Stage schema 2. DWH(Data warehouse reporting schema). In stageing we have about 300 source tables. In DWH schema, we are creating the tables which are only required from reporting prespective . some of the tables in stageing schema, have been created in DWH schema as well with different table name and column names. The naming convention for these same tables and columns in DWH schema is more based on business names.
    In order to keep track of these tables we are creating metadata table in DWH schema say for example
    Stage                DWH_schema
    Table_1             Table_A         
    Table_2             Table_b
    Table_3             Table_c
    Table_4              Table_DMy question is how do we handle the column names in each of these tables. The stage_1, stage_2 and stage_3 column names have been renamed in DWH_schema which are part of Table_A, Table_B, Table_c.
    As said earlier, we have about 300 tables in stage and may be around 200 tables in DWH schema. Lot of the column names have been renamed in DWH schema from stage tables. In some of the tables we have 200 column's
    so my concern is how do we handle the column names in metadata table ? Do we need to keep only table names in metadata table not column names ?
    Any idea will be greatly appriciated.
    Thanks!

    Hi,
    This seems quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have three layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, like calculated ones, are business-driven, and metadata are standard-driven.
    Data Modeler fits perfectly for modelling L1.
    L2 is the dimensional schema; business names are used for tables and columns, eventually rewritten at the presentation layer (front-end tool).
    Hope this helps, D.

  • User Tables Naming convention / Namespaces

    Dear all,
    Does anybody know where to get information about SBO namespace conventions?
    I have 2 questions about naming conventions:
    1) Creating a UserDefinedTable like this
       Table Name  :  Z_NameSpace_MyTableName
       Is it necessary that any FieldName of the
       table uses the NameSpace Prefix ?
       Like this: NameSpace_PosNo
    2) Adding UDF to SBO Tables
       Is it necessary that UDF Fields uses the
       NameSpace Prefix ?
    Thanks

    Thomas,
    The NameSpaces are only used while creating a UserDefined Table.
    UserDefined Table :
    The convention for the name is NameSpace_MyTableName (without the Z_). Its length can't exceed 19 characters.
    When adding a user table, SAP Business One automatically adds the symbol @ as a prefix to the table name. For example: if you add a table named "ABC", the resulting table name will be "@ABC".
    When referring to a user defined table you must use the name including the prefix @.
    UserDefined Field :
    You do not have to add the NameSpace in the name of the field. Its length can't exceed 8 characters.
    When you create it using the UserFieldsMD object, the character "U_" will be added to its name, and created in the table.
    To conclude, the NameSpace is only used for the table.
    The DI API will add @ for the table, and U_ for the field.
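    For example, if you created a user table MYTABLE with a user field Remarks, a query against it might look like this (the table and field names here are made up for illustration):

```sql
-- SAP Business One stores the user table as "@MYTABLE"
-- and the user-defined field as "U_Remarks"
SELECT T0."Code", T0."Name", T0."U_Remarks"
FROM "@MYTABLE" T0
```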
    See SAP Note 647987 about namespaces.
    Sebastien

  • Best Practice SQL Alias with Named Instance

    Hi
    We have a SQL server, "MySQLServerName" with a named instance, "MySQLInstance", running on port "12345".
    I've created a SQL alias "MySQLAlias" pointing to "MySQLServerName" on port "12345", and it works.
    But should I have used "MySQLServerName\MySQLInstance" on port "12345"?
    Which is best practice?
    Thanks
    TJ

    As a SQL Server named instance supports both static and dynamic ports, I think you can use the port number you specified. By default, a named instance of SQL Server listens on a dynamic port. For a named instance, the SQL Server Browser service (for SQL Server 2008 and SQL Server 2005) or the SQL Server Resolution Protocol (SSRP, for SQL Server 2000) is always used to translate the instance name to a port, regardless of whether the port is static or dynamic. The Browser service or SSRP is never used for a default instance of SQL Server.
    Refer to http://support.microsoft.com/kb/823938
    Regards, RSingh

  • PSA Table Naming convention

    Hi Experts,
    Currently I am working on BW 3.5. I would like to delete the old PSA requests through a process chain, and I need some clarification. Please provide me your suggestions.
    I have collected the full list of PSA tables in the development system in Excel, so I can filter them by source system.
    While creating the process chain for the PSA deletion, I want to add the collected PSA tables (object names).
    Please refer to the screenshot. But I noticed that the naming convention for PSA differs from Dev to Quality & Prod!
    So if I transport this process chain to quality & production, it will not work the same as in Dev.
    I have already searched the forum and found a thread that discusses the same issue, but no resolution was given.
    Please help me to get this issue resolved. Thanks in advance.
    Similar issue thread:
    psa
    Screen shot:
    http://img818.imageshack.us/img818/3963/psa1.jpg
    Thanks,
    RR

    To explain this I will take systems with this naming convention:
    Dev BW: BWD
    Dev ECC: E01
    Quality BW: BWQ
    Quality ECC: Q01
    When you maintain the conversion in the quality system, you should have the below parameters:
    BWD to BWQ
    E01 to Q01
    Q01 to Q01
    FLAT FILE to FLAT FILE
    So let's say a source-system-related object, e.g. a transfer rule, is going from D to Q: it will be converted in the quality system based on the conversions maintained in table RSLOGSYSMAP.
    So the source-system-related objects will get converted to the target-system objects using the reference maintained in this place.
    Hope this is clear for you now.
    Thanks
    Murali

  • Best practice for OID Net Naming Configuration in global company

    I'd like some feedback on what approach to take in configuring Net Service Names in OID for a global company. We have multiple sites with multiple groups of DBAs. We're weighing the pros and cons of a single domain within OID for Net service names vs. a separate domain for each distinct group of DBAs that manage service names.
    To the best of my understanding, it is only possible to configure clients to look at a single domain in OID, via the ldap.ora parameter default_admin_context. We have users who access databases across different DBA support areas, so we like the idea of a single domain so that any service name within the enterprise can be resolved by users without having service names entered at multiple levels in the directory.
    However, it is also my understanding that to segregate security of administering service names, it is only possible to do so by having different domains within the directory (it is not possible, or at least not practical, to have different levels of security defined in a single flat domain). I also have concerns about the manageability of service names if they are all listed in a single domain; the list could get rather unwieldy to sort through.
    I would be very interested in opinion or feedback on what others are doing.
    Thanks,


  • Best Practice to Copy Table to Different Schema?

    This is what I tried:
    1. Create a new user USER2 that uses TBS2 as its default tablespace.
    2. Perform exp of the table (which resides in tablespace TBS1).
    3. Perform imp using the fromuser and touser parameters to import into USER2.
    What I noticed is that the table imported into USER2 still resides in the same tablespace, when it was supposed to be in TBS2.
    Can someone help me with these steps?

    mrp wrote: "will this option recreate the index?"
    The best way to find answers to such questions would be to test it yourself:
    SQL> conn aman/aman
    Connected.
    SQL> create table test as select * from scott.dept;
    Table created.
    SQL> create index tidx on test(deptno);
    Index created.
    SQL> select object_name, object_type from user_objects;
    OBJECT_NAME                              OBJECT_TYPE
    ---------------------------------------- -----------
    TIDX                                     INDEX
    TEST                                     TABLE
    SQL> create table test2 as select * from test;
    Table created.
    SQL> column object_name format a40
    SQL> select object_name, object_type from user_objects;
    OBJECT_NAME                              OBJECT_TYPE
    TIDX                                     INDEX
    TEST                                     TABLE
    TEST2                                    TABLE
    SQL>
    So did the index get created for the 2nd table or not?
    Aman....

  • Best practice: self referencing table

    Hallo,
    I want to load a self-referencing table into my enterprise area in the DWH, and I am not sure how to do this. What if a record references another record which is inserted later in the same ETL process? There is no way to sort, because it is possible that record A references record B and record B references record A. Disabling the FK constraint is not a good idea, because that doesn't prevent invalid references from being loaded. I am thinking of building two mappings, one without the self-referencing column and the other with only that column, but that would cause approximately twice as much running time. Any other solutions?
    Regards,
    Torsten
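    One common approach to circular references like this (not mentioned in the thread, so treat it as a sketch; the table and constraint names are made up) is to declare the foreign key deferrable, so it is only checked at commit time rather than per row:

```sql
-- Records A and B can reference each other within one transaction,
-- because the FK is validated at COMMIT instead of at INSERT
CREATE TABLE emp_dim (
  id         NUMBER PRIMARY KEY,
  partner_id NUMBER,
  CONSTRAINT fk_partner FOREIGN KEY (partner_id)
    REFERENCES emp_dim (id)
    DEFERRABLE INITIALLY DEFERRED
);
```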

    Mind sharing the solution? Would be interested to hear your solution (high level).
    Jean-Pierre

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    How would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys in each level and a unique dimension key for the dimension table?
    With OWB, using surrogate keys in every level of a hierarchy is an integrated feature. You don't have to care about the parent-child relation; the load process of the mapping generates the right keys and takes care of the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think it is a bit too complicated to build the interface like that.
    I would be more than happy for any suggestions. Thank you in advance!
    negib
    Edited by: nmarhoul on Jun 14, 2012 2:26 AM

    Hi,
    I do like the level keys feature of OWB; it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf among the built-in knowledge modules in ODI. It doesn't support creating dimension objects in the database by default, but there is nothing stopping you from coding up your own knowledge module (maybe use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler.

  • Is there any best practice or standard for database object naming ?

    Hi
    Thank you for reading my post
    Is there any standard or best practice for database object naming?
    For example, how should we name the columns of a table? Should it be TOTAL_VOTE or TOTALVOTE? And many other items.
    Thanks

    "What does Oracle suggest as a naming schema for tables, fields, views, indexes, tablespaces, ...?" If you look at the data dictionary you will see that not even Oracle keeps rigidly to any specific standard, although there are tendencies :)
    "The nice thing about standards is that there are so many of them to choose from."      
    -- Andrew Tannenbaum
    Cheers, APC
