Best Practice - OSS IDs

We are trying to implement a good process for requesting, checking out, and assigning OSS IDs. Can you please let us know what you do, or what the best practice for this is?
Do you have SAP_ALL assigned to OSSUSER IDs?
Do you require approval from a manager prior to using?
Do your IDs expire (SU01 valid-to date)?
How do you designate that an OSS ID is for a particular message or ticket?
Do people request these before they log tickets or after when SAP wants this information?
Are you using Fire Fighter to track what they do?
Are you reviewing their activity? If so, how often?
We want to make the SOX auditors happy with our process. Any information will be helpful.
Thanks,
Carly

Hi Carly,
Please see my responses below -
Do you have SAP_ALL assigned to OSSUSER IDs? - SAP_ALL should be given only to a select few (depending on role), not to all.
Do you require approval from a manager prior to using? - Yes, an approval mechanism should be in place.
Do your IDs expire (SU01 valid-to date)? - Yes. This is one of the best practices to follow.
How do you designate that an OSS ID is for a particular message or ticket? - An OSS ID is for an SAP user. It is used for posting messages to SAP and downloading OSS Notes/documents from SCN. For tickets, you have your regular SAP user IDs; the two need to be different. (Not sure if I understood your question fully.)
Do people request these before they log tickets or after when SAP wants this information? We can create OSS IDs for teams immediately after SAP is installed.
Are you using Fire Fighter to track what they do? - No
Are you reviewing their activity? If so, how often? - Yes, fortnightly reviews (a sample review query is sketched after this reply).
Hope this helps.
Thanks,
Prashant
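For the fortnightly review mentioned above, one starting point is a query against the SAP user master (USR02). This is only a hedged sketch: the 'OSS%' naming prefix is a hypothetical convention, and the field names (GLTGB valid-to, TRDAT last logon) should be verified in your release, e.g. via SE16, before relying on it.
-- Hedged sketch: list support/OSS user IDs with validity window and last logon,
-- so expired or long-unused IDs stand out during the fortnightly review.
-- 'OSS%' is a hypothetical naming convention; adjust to your own.
SELECT bname AS user_id,
       gltgv AS valid_from,
       gltgb AS valid_to,
       trdat AS last_logon_date
  FROM usr02
 WHERE bname LIKE 'OSS%'
 ORDER BY trdat;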

Similar Messages

  • Best Practices For Household IOS's/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing primarily apps, music and video among multiple iOS devices/Apple IDs. If there is a specific article, please point me to it.
    Here is my situation: 
    We currently have 3 iPads (2-kids, 1-dad) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password that I kept secret. As the kids have grown older, I found myself constantly entering my password as they increased their interest in music/apps/video. I liked this approach because all content was shared... I disliked it because I was constantly asked to input the password for all downloads.
    So, I recently set up an individual account for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads).  Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps, music, and videos among myself and the kids? Is it multiple accounts on the computer and some sort of sharing?
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are about 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impacts on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    -- PoC 1: sequence-generated transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    -- PoC 2: timestamp-derived transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...
    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism, which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
    Regards
    Jonathan Lewis
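    Building on the points above, here is a minimal sketch of the two index options mentioned (not from the thread, and not a definitive recommendation): a reverse-key unique index, and a global hash-partitioned unique index (the latter requires the partitioning option). The sequence, table and index names are illustrative only.
    -- Large-cache NOORDER sequence: avoids cross-instance ordering negotiation in RAC.
    create sequence txn_seq start with 1000000000 cache 10000 noorder;
    create table txn (txnid number not null, txndate timestamp(9));
    -- Option A: a reverse-key index spreads inserts across leaf blocks,
    -- at the cost of needing most of the index cached to keep I/O down.
    create unique index txn_pk_rev on txn (txnid) reverse;
    -- Option B (instead of A): a global hash-partitioned index spreads the hot
    -- right-hand edge across partitions; requires the partitioning option.
    -- create unique index txn_pk_hash on txn (txnid)
    --   global partition by hash (txnid) partitions 8;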

  • Syncing App IDs across servers -- Best Practice?

    This is prompted by a comment chrisstephens made in the thread at non-existent applications in non-existent workspaces reserving app id's
    Our developers are convinced that the application IDs between our dev + staging + production environments need to be synchronized.
    Our team also keeps our dev, test, and prod server app IDs synchronized -- for instance, the Widget Reporting App is always app # 38 on all three servers. For us, it's not something we see as REQUIRED, but it is convenient, and a general sanity check. If the numbers didn't sync, it seems it would be all too easy to get values mixed up and accidentally field an app to the wrong place (possibly overwriting some other application).
    What are the community's opinions on this? Would you consider this an Apex best practice? Just a habit for some groups? Or overly rigid thinking?
    (I personally fall in the Best Practice group.)

    One good reason to keep them the same is so that there are no differences between what is tested in one environment and what is deployed in another. Case in point, just last week someone demonstrated that an application's authentication scheme failed when the application ID was changed from xxx to xxxxxxxxx (a longer string of digits). Of course this was due to a previously unknown bug, but that's what testing should reveal.
    Another good reason is to make it possible to export application components (pages, etc.) from one database (say, dev) and install them into an application in another database (say, prod). This is not possible if the application IDs are different.
    Scott
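    If you do keep the IDs in sync, a quick consistency check can be run from the APEX dictionary views and diffed across environments. A hedged sketch (view and column names as in recent APEX releases; adjust for your version):
    -- List application IDs and names per workspace so dev/test/prod can be compared.
    select workspace, application_id, application_name
      from apex_applications
     order by application_id;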

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used this many, many times. In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested that this works faster. An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • SAP Best Practice for Document Type./Item category/Acc assignment cat.

    What is the best practice for the document type & item category?
    I want to use NB with item categories B & K (blanket PO), D (service) and T (text).
    Does SAP recommend using FO only for blanket purchase orders?
    We want to use service contracts (with/without service entry sheet) for all our services.
    We want to buy assets for our office equipment.
    Which is the best one to use, NB or FO?
    Please give me any OSS notes or references for this.
    Thanks
    Nick

    Thank you very much for your response. 
    I hope I can provide some clarity on how the accounting needs to be handled per FERC regulations. The G/L balance on the utility that is selling the assets will be in the following accounts (standard accounts across all FERC-regulated utilities):
    101 - Acquisition Value for the assets
    108 - Accumulated Depreciation Value for the assets
    For example, there is a debit of $60,000,000 in FERC Account 101 and a credit of $30,000,000 in FERC Account 108. When the purchase occurs, the net book value for the asset will be on our G/L in FERC Account 102. Once we have FERC approval to acquire the plant assets, we will need to enter the acquisition value and associated accumulated depreciation onto our G/L to FERC Account 101 and FERC Account 108 respectively, with an offset to FERC Account 102.
    The method that I came up with is to purchase the NBV of the assets to a clearing account.  I then set up account assignments that will track the Acquisition Value and respective Accumulated Depreciation for each asset that is being purchased.  I load the respective asset values using t-code AS91 and then make an entry to the 2 respective accounts with the offset against the clearing account using t-code OASV.  Once my company receives FERC approval, I will transfer the asset to new assets that has the account assignments for FERC Account 101 and FERC Account 108 using t-code ABUMN or FB01.

  • Best practice for creating RFC destination entries for 3rd parties(Biztalk)

    Hi,
    We are on SAP ECC 6 and have been creating multiple RFC destination entries for external 3rd-party applications such as BizTalk and others, using the TCP/IP connection type and sharing the program ID.
    The RFC connections with IDoc as the data flow use synchronous mode for the few time-critical interfaces and asynchronous mode for the rest. The RFC destination entries have been created for many interfaces, each with a unique RFC destination and its corresponding port defined in SAP.
    We have both inbound and outbound connectivity. With the large number of RFC destinations being added, we wanted to review the setup. We wanted to check with others who had encountered a similar situation and were keen to learn from their experiences.
    We also wanted to know if there are any best practices to optimise on number of RFC destinations.
    Here were a few suggestions we had in mind to tackle the same.
    1. Create unique RFC destinations for every port defined in SAP for external applications as Biztalk for as many connections. ( This would mean one for inbound, one for outbound)
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    I have checked the SAP best practices website, SAP OSS Notes and help pages, but could not find the specific information I was after.
    I do understand we can have an unlimited number of RFC destinations and can raise the maximum connections using the appropriate profile parameters for gateway, RFC, client connections and additional app servers.
    I would appreciate it if you could suggest the best architecture or practice to organize RFC destinations in an optimized manner.
    Thanks in advance
    Sam

    Not easy to give a perfect answer
    1. Create unique RFC destinations for every port defined in SAP for external applications as Biztalk for as many connections. ( This would mean one for inbound, one for outbound)
    -> Be careful if you have multiple clients (for example in acceptance): RFC destinations are client-independent but ports are not! You could run into trouble.
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    -> Could be the best solution... it's easier to create partner profiles, and the control record will contain the correct partner.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    -> For this, consider option 2 instead.
    We send to a message broker with 1 RFC destination, sending multiple IDoc types, different partners and different ports.

  • Best practice - caching objects

    What is the best practice when many transactions require a persistent object that does not change?
    For example, in an ASP model supporting many organizations, the organization is required by many persistent objects in the model. I would rather look the organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed, the organization can no longer be part of new transactions with other persistence managers. Aside from looking it up for every transaction, is there a better solution?
    Thanks in advance
    Gary

    The problem with using object ID fields instead of PC object references in your object model is that it makes your object model less useful and intuitive. Taken to the extreme (replacing all object references with their IDs), you will end up with objects like rows in a JDBC dataset. Plus, if you use a PM per HTTP request it will not do you any good, since the organization data won't be in the PM anyway, so it might even be slower (no optimizations such as Kodo batch loads).
    So we do not do it.
    What you can do:
    1. Do nothing special; just use the JVM-level or distributed cache provided by Kodo. You will not need to access the database to get your organization data, but the object creation cost in each PM is still there (do not forget that this cache is a state cache, not a PC object cache). Good because it is transparent.
    2. Designate a single application-wide PM for all your read-only big things - lookup screens etc. Use a PM per request for the rest. Not transparent - affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did it pretty successfully. The requirement is to be able to recognize all PCs which are updateable and evict/makeTransient those when the PM is returned to the pool (Kodo has a nice extension in PersistenceManagerImpl for removing all managed objects of a certain class), so you do not have stale data in your PM. You can use Apache Commons Pool to do the pooling, and make sure your PM is able to shrink. It is transparent and increases performance considerably - this is the approach we use.
    "Gary" <[email protected]> wrote in message
    news:[email protected]...
    >
    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

  • Best Practice to generate UUIDs in a Cluster-Server Environment

    Hi all,
    I just need some input on the best practices for generating UUIDs in a typical internet setup where multiple servers/JVMs are involved for load balancing or traffic distribution etc. I know Java ships with an efficient UUID generator API.
    But that still doesn't solve the issue in a multiple-server environment.
    For the sake of discussion, let's assume I need it to be unique across the whole setup rather than nearly unique.
    How do you guys approach it?
    Thanks you all in advance.

    codeNombre wrote:
    jverd wrote:
    codeNombre wrote:
    Thanks jverd,
    So adding to the theory of "distinguishing all possible servers" in addition to UUID over each server would be the way to go.
    If you're unreasonably paranoid, sure.
    I think it's a common problem and there is a big number of folks who might still be bugged about the "relative uniqueness" of UUID in the long run.
    People who don't understand probability and scale, sure.
    Again coming back to my original problem in an "internet world", shouldn't a requirement like unique IDs between different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer.
    Again, that is the POINT of the UUID class -- so that you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post.
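    Not from the thread, but as an aside: if some of these IDs end up in an Oracle database anyway, the database itself can mint globally unique values with SYS_GUID(), which needs no coordination between application servers. This is a different technique from java.util.UUID and only a hedged sketch; the table and column names are made up for illustration.
    -- SYS_GUID() returns a 16-byte globally unique RAW generated on the database side,
    -- so concurrent application servers never have to coordinate ID ranges.
    create table session_token (
      token_id   raw(16) default sys_guid() primary key,
      created_at timestamp default systimestamp
    );
    insert into session_token (created_at) values (systimestamp);
    select rawtohex(token_id) from session_token;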

  • Best Practice for CTS_Project use in a Non-ChARM ECC6.0 System

    We are on ECC6.0 and do not leverage Solution Manager to any extent.  Over the years we have performed multiple technical upgrades but in many ways we are running our ECC6.0 solution using the same tools and approaches as we did back in R/3 3.1. 
    The future vision for us is to utilize CHARM to manage our ITIL-centric change process but we have to walk before we can run and are not yet ready to make that leap.  Currently we are just beginning to leverage CTS_Projects in ECC as a grouping tool for transports but are still heavily tied to Excel-based "implementation plans".  We would appreciate references or advice on best practices to follow with respect to the creation and use of the CTS_Projects in ECC.
    Some specific questions: 
    #1 Is there merit in creating new CTS Projects for support activities each year?  For example, we classify our support system changes as "Normal", "Emergency", and "Standard".  These correspond to changes deployed on a periodic schedule, priority one changes deployed as soon as they are ready, and changes that are deemed to be "pre-approved" as they are low risk. Is there a benefit to create a new CTS_Project each year e.g. "2012 Emergencies", "2013 Emergencies" etc. or should we just create a CTS_Project "Emergencies" which stays open forever and then use the export time stamp as a selection criteria when we want to see what was moved in which year?
    #2 We experienced significant system performance issues on export when we left the project intersections check on. There are many OSS notes about the performance of this tool, but in the end we opted to turn off this check. Does anyone use this functionality? Any recommendations?
    Any other advice would be greatly appreciated.

    Hi,
    I created a project (JDeveloper) with local xsd-files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the xsds anymore, and for the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the xsd-files, and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching w/o success). I'm developing a .Net application which accesses my organization's RoboHelp-generated webhelp. My organization's RoboHelp documentation team is still new to the software, and so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application. I've read up on Peter Grange's 'calling webhelp' section on his blog, but I'm still a bit unclear about what the best-practices approach for connecting to webhelp might be.
    To date, my org. has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics.  However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting out the paths).  And I've been able to use the strategy of creating the link via constructing a URL (following the strategy of using the following syntax: "<root URL>?#<relative URI path>" alternating with "<root URL>??#<relative URI path>").  It strikes me, however, that this approach is somewhat of a hack - since RoboHelp provides other approaches to linking to their documentation via TopicID and MapID.
    What is the recommended/best-practices approach here?  Are they all equally valid or are there pitfalls I'm missing.  I'm inclined to use the URI methodology that I've established above since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if you have an established naming convention that works for everyone (as Peter mentioned in his reply above).
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity.
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention, or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity. Another approach to solving this problem if you're working with URLs would be to set up a web service that matches file addresses to some identifier used by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code since it's easy to provide a more English-readable identifier. As a .Net developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to use that enum to construct my identifier when it comes time to make the documentation call.
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs.
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used in the future by other applications, worked on by developers who might have their own preference as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett

  • Best Practice for enhancing the SAP delivered standard WD ABAP application

    Hi,
    I am new to Web Dynpro ABAP.
    I need to enhance an SAP-delivered standard Web Dynpro component (a complex component with business objects & POWL).
    Kindly let me know the best practice for enhancing the standard WD ABAP component from options 1 or 2 below.
    1) Copy and create a "Z" version of the component and make changes in that, or
    2) Enhance the standard component directly without making a "Z" copy.
    Regards,
    NS

    Hi NS,
    If it is a standard component, it's better to enhance the component rather than copying it into a Z component.
    If there is any issue within the standard component, SAP supports it through notes and OSS messages. If it is a Z component, SAP doesn't support it.
    If there is an upgrade of business packages, changes will be applied to the standard component but not to Z components, where we could miss them.
    Further, since a standard component might be used in many places, making the changes needed to reflect everything would be difficult if it is a Z component.
    Regards,
    Harsha

  • Best Practice for BEX Query "PUBLISH to ROLE"?

    Hello.
    We are trying to determine the best practice for publishing BEX queries/views/workbooks to ROLEs. 
    To be clear of the process I am referring: from the BEX Query Designer, there is an option QUERY>PUBLISH>TO ROLE.  This function updates the user menu of the selected security role with essentially a shortcut to the BEX query.  It is also possible to save VIEWS/WORKBOOKS to a role from the BEX Analyzer menu.  We have found ROLE menus to be a good way to organize BEX queries/views/workbooks for our users. 
    Our dilemma is whether to publish to the role in our DEV system and transport to PROD, or whether it is OK to publish to the role directly in the PROD system.
    Publishing in DEV is not always possible, as we have objects in PROD that do not exist in DEV. For example, we allow power users to create queries directly in PROD. We also allow VIEWS and WORKBOOKS to be created directly in PROD. It would not be possible to publish these types of objects in DEV.
    Publishing in PROD eliminates the issues above, but causes concerns for our SECURITY team.  We would be able to maintain these special roles directly in PROD.
    Would appreciate any ideas, suggestions, examples of how others are handling this BEX publish-to-role process.
    Thank you.
    -Joel

    Hi Joel,
    Again, as per best practices, nothing should be created in PRD; even if we create objects in PRD for power users, they are assumed to be temporary and can be deleted at any time.
    So if there are already deviations, then you can go for deviations in this case as well, but it won't be best practice. Also, in a few cases we have workbooks created in PRD because they could not be created in DEV for various reasons... in such cases we did not follow best practice, and we raised an OSS message on this as well.
    In our project, we have done everything in DEV and transported to PRD; where there were any very minor changes at query level, we made them in PRD and immediately replicated the same in DEV so that they stay in sync.
    rgds
    SVU

  • Code Set pattern or best practice?

    Hi all,
    I have what I would have thought to be a common problem: the best way to model and implement an organization's code sets. I've Googled, and I've forumed - without success.
    The problem domain is this: I'm redeveloping an existing application, which currently represents its vast array of code sets using a separate table for each set. There are currently 180+ of these tables. Not a very elegant approach at present. The majority of these code sets are what I would class as "simple" - a numeric value associated with a textual description - eg 1 = male, 2 = female, or 1 "drinks excessively", 2 "drinks sometimes" ... etc. Most of these will just be used to associate a value with a combo box selected value.
    There are also what I would class as "complex" code sets, which may have 1..n attributes (ie not just a numeric and text value pair). An example of this (not overly complex) is zip code, which has a unique identifier, the zip code itself (which may change - hence the id), a locality description, and a state value.
    Is there a "best practice" approach or pattern which outlines the most efficient way of implementing such code sets? I need to consider performance vs the ability to update the code set values, as some of them may change from time to time without notice at the discretion of government departments.
    I had considered hard coding, creating classes to represent each one, holding them in xml files, storing in the database etc, but it would seem that making the structure generic enough to cater to varying numbers of attributes and their associated datatypes will be at the cost of performance.
    Any suggestions would be greatly appreciated.
    Thanks.
    Paul C.

    Hi Saish,
    Thanks for your response. Yes, this approach is what I had considered - I'll be using Hibernate, so these values will be cached etc.
    I guess my main concern is reducing the huge number of very small tables in use. I was thinking about this some more, and for the simple tables I was thinking of 2 tables: 1 (eg "CODE_SET") to describe the code set (or ref table etc) in question, the second to hold the values. This way 80-odd tables would be reduced to 2. Not sure what's best here - simpler ER diagram or more performance!
    Tables...
    Enumeration
    - EnumerationId
    - EnumerationName
    - EnumerationAbbreviation
    EnumerationValues
    - EnumerationId
    - ValueIndex
    - ValueName
    - ValueAbbreviation
    The above allows the names to change.
    You can add a delete flag if values might be deleted but old records need to be maintained.
    Convention: In the above I specifically name the second table with a plural because it holds a collection of sets (plural) rather than a single set.
    In the first table the id is the key. In the second the id and the index are the key. The ids are unique (of course). The enumeration name should be unique in the first table. In the second table the EnumerationId and value name should be unique.
    Conversely you might choose to base uniqueness on the abbreviation rather than the name.
    The Name vs Abbreviation are used for reporting/display purposes (long name versus short name).
    It is likely that for display/report purposes you will have to deal with each of the sets individually rather than as a group. Ideally (strongly urged) you should create something that autogenerates a Java enumeration (specific with 1.5 or general with 1.4) that uses the id values (and perhaps the indexes) as the values, with the names generated from the abbreviations. This should also generate the database load table for the values. Obviously, going forward, care must be taken in how this is modified.
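    A minimal DDL sketch of the two-table design described above (table and column names taken from the reply; the constraints reflect the uniqueness rules as I read them, so adjust to taste):
    -- One row per code set.
    create table enumeration (
      enumerationid           number        primary key,
      enumerationname         varchar2(100) not null unique,
      enumerationabbreviation varchar2(30)
    );
    -- One row per value within a code set; (id, index) is the key,
    -- and a value name may appear only once per set.
    create table enumerationvalues (
      enumerationid     number        not null references enumeration (enumerationid),
      valueindex        number        not null,
      valuename         varchar2(100) not null,
      valueabbreviation varchar2(30),
      deleted_flag      char(1) default 'N',  -- optional, per the note about keeping old records
      primary key (enumerationid, valueindex),
      unique (enumerationid, valuename)
    );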

  • SAP Business One 2007 - SQL Security best practice

    I have a client with a large user base running SAP Business One 2007. 
    We are concerned over the use of the sql sa user and the ability to change the password of this ID from the logon of SAP Business One.
    We therefore want to move to Windows Authentication (i.e. a trusted connection) from the SAP BO logon. It appears, however, that this can only work by granting the Windows IDs (of the SAP users) sysadmin access in SQL.
    Does anyone have a better method of securing SAP Business One, or is there a recommended best practice? Any help would be appreciated.
    Damian

    See the Administrator's Guide for best practice.
    You can use SQL Authentication mode; don't tick 'Remember password'.
    Also check this thread
    SQL Authentication Mode
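    As a rough illustration of the Windows-authentication route (SQL Server T-SQL here, unlike the Oracle snippets elsewhere on this page): map a Windows group to the company database with db_owner instead of granting users sysadmin. The database and group names are hypothetical, and whether db_owner is sufficient for B1 2007 should be verified against the Administrator's Guide.
    -- Hedged sketch: Windows-authenticated access without sysadmin.
    USE [SBODemoUS];                                      -- hypothetical company database
    CREATE LOGIN [MYDOMAIN\B1Users] FROM WINDOWS;         -- hypothetical Windows group
    CREATE USER  [MYDOMAIN\B1Users] FOR LOGIN [MYDOMAIN\B1Users];
    EXEC sp_addrolemember 'db_owner', 'MYDOMAIN\B1Users';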
