ESS MSS Best Practice Methods

When implementing Employee Self-Service (ESS) and Manager Self-Service (MSS), what are the best practices for creating user IDs? Do most companies use Active Directory aliases, employee IDs, or generated numbers?
Or would it be beneficial to use employee IDs so that ABAP programs could make updates automatically?
I would like to know which methods you would recommend.
Thanks.

Thanks for clarifying!
Most companies I have observed use the AD alias, which is also the SMTP name of the email address. It is easy to associate with the infotype 0105 pernr via the employee's first and last name, which also appear in the user master address data.
For example, the first seven characters are the last name and the last character is the first letter of the first name: 'BUSSCHEJ'.
But then again, if your AD name is a generated number or a cryptic value, then why not call yourself S123456789 like here at SDN, or R2D2 for that matter.
Using the personnel number is another option, but you should first check where else it is used. Perhaps it is like the US Social Security Number, which is meant to be kept "top secret" like a password...
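As a minimal illustration of that naming rule, here is a SQL sketch; the employees table and its first_name/last_name columns are hypothetical stand-ins for wherever your HR data lives:

    -- Hypothetical table and column names, for illustration only.
    -- Alias = first 7 characters of the last name + first initial.
    SELECT UPPER(SUBSTR(last_name, 1, 7) || SUBSTR(first_name, 1, 1)) AS user_alias
    FROM employees;
    -- e.g. last_name 'Bussche', first_name 'Jan' => 'BUSSCHEJ'

Note that such schemes can collide (two employees with similar names), so a real implementation needs a uniqueness check against existing user IDs.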

Similar Messages

  • ESS/MSS best practices

    When implementing employee self-service and manager self-service, what are the best practices for creating IDs? Do most use Active Directory, employee IDs, or generated numbers?
    I would like to know which methods you would recommend.
    Thanks.

    Best practice is LDAP if it is a standalone Java instance.
    If it's a dual-stack installation then you have no choice but to use ABAP.

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure ran on Windows Server and provided database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the report developers had a standard System DSN called "APP_DATA", which matched the System DSN name on all of our DEV, TEST/UAT, and PROD Business Objects servers.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the de facto standard when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some information.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that is the same across all environments.
    The database name will then be resolved differently depending on the environment, and will therefore access a different database.
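    For illustration, a hypothetical tnsnames.ora entry (host and service names invented); each of the DEV, TEST, and PROD servers keeps its own copy of this entry under the same alias, pointing at its own database:

        APP_DATA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dev-oracle01.example.com)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = appdata))
          )

    Reports then reference only the alias APP_DATA, exactly as they referenced the System DSN before.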
    The second option is the possibility of changing the connection in .rpt files in an automated way, for example with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of .rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in one pass.
    After several implementations from Linux to Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with large volumes you will see the difference.

  • ESS Implementation Best Practice

    I am implementing the latest version of Portal on Netweaver 7.0.  When the business package is loaded onto the server, I get a folder in PCD at Portal Content > Content Provided by SAP > End User Content > Employee Self-Service.
    I'm curious what the best practice is for implementing any changes to the content in that folder. Should I copy the entire folder into my own folder in the PCD (next to where all my BI objects are, for example) and then make the changes I need? If I do that and we later upgrade, will these objects be updated? Or is the best approach to copy only the ones I'm going to change slightly, and leave everything else referencing the "Content Provided by SAP" folder?
    Any help would be appreciated.

    Hi,
    Create a new folder and copy the PCD objects you want as delta links. This way your changes will be preserved without impacting the original objects. Then, if you upgrade, the objects in your delta links will inherit the new features.
    Check this link
    http://help.sap.com/saphelp_nw70/helpdata/EN/67/77913c49425438e10000000a114027/frameset.htm
    Regards
    Srini

  • Ask: Training Scheduling Best Practice Methods

    Hello,
    Can anyone share the best method for training scheduling? I am talking about allotting days and timeslots, material distribution, class/participant distribution, etc.
    I am on my second SAP implementation on the OCM team, so I would like to hear from you to make improvements.
    Thanks a lot.
    -Ilham A. Pratomo.

    Hi,
    I have handled the Knowledge Transfer phase of an AMS project, and I feel that at least 4 weeks is sufficient for a complete knowledge transfer from the existing vendor to the new vendor, assuming the planning is precise. Planning is the key to success in this kind of takeover. The following points may help in planning:
    1. Spectrum of systems / modules involved / processes / level of customisation
    2. Availability of resources for providing the training and receiving the knowledge transfer
    3. The network and other infrastructure to be made available
    4. Stakeholder commitment in providing the knowledge transfer - generally the toughest point in hostile takeovers
    5. Language barriers during the K.T.
    Once these things are clear, the plan should cover all relevant processes, process chains, integrations with other modules, involvement of interfaces, etc.
    The success of the K.T. should also be evaluated by asking the trainees to provide a reverse K.T. every week on the topics taught to them.
    Finally, documentation of all the processes, programs, and interfaces is most important.
    Even after this, there has to be a monitoring phase to ensure that knowledge has spread uniformly across all members of the team. Where needed, internal K.T.s should be arranged within the team.
    By doing all this, we were able to be productive from the third month onwards into the Managed Services phase.
    Hope this approach helps you

  • Best Practice Method Signature

    One thing that has bugged me for a while is whether it is better to return a method's result as a return value or to alter a passed parameter.
    Here is an example
    SomeObject s = T.someFunction();

    SomeObject someFunction() {
        return new SomeObject("name");
    }

    or

    SomeObject s = new SomeObject();
    T.someFunction(s);

    void someFunction(SomeObject s) {
        s.setName("name");
    }

    Here I am assuming the outcome is the same - s.name will be "name" - but one version returned a new object and the other used the object reference passed (by value) to the method.
    What are the implications regarding the heap and future maintainability? Are there any, or is it simply personal preference as to which is used?

    I agree with both of you, I prefer using the return.
    By doing things this way, each method's responsibility tends to be limited to performing one specific task. When doing things the other way, there is a tendency to make the method do more - for example setting data on several passed parameters - rather than splitting those tasks and returning the values to set on each parameter.
    Here's what I mean:

    void doLots(ItemA a, ItemB b) {
        a.setSomething("test");
        b.setSomethingElse("test2");
    }

    instead of

    String doLittleToA() {
        return "test";
    }

    String doLittleToB() {
        return "test2";
    }

    and calling it like this:

    a.setSomething(doLittleToA());
    b.setSomethingElse(doLittleToB());

    My reason for asking this question is that I have been working on some code that looks as if it was ported from a C/C++ environment, and I wanted to get the opinion of a few Java developers, as I think the approach needs changing to suit OO.
    Thanks guys

  • Best practice "changing several related objects via BDT" (Business Data Toolset) / Mehrere verbundene Objekte per BDT ändern

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a business partner, should also be changed or created. So I am searching for a best-practice approach to implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately this implementation method has only poor options for error handling, since the handling would have to happen asynchronously. Second, it is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we have not got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report on your implementations, so that we can find a best-practice approach for such requirements.
    Best regards
    Dominik

  • Best practice when deleting from different tables simultaneously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas" - for example, do I delete the row in the foreign-key table first?
    I am reading thread:http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil
    is there a "best practice" method for

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application - you can automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user explicitly deletes the children... it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master the deletes cascade automatically. This is typically declared when creating the FK constraint (delete cascade and update cascade rules).
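    For illustration, a sketch in standard SQL (table and column names are invented for this example):

        CREATE TABLE parent (
            id INT PRIMARY KEY
        );

        CREATE TABLE child (
            id        INT PRIMARY KEY,
            parent_id INT,
            FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
        );

        -- Deleting a parent row now removes its child rows automatically:
        DELETE FROM parent WHERE id = 42;

    Without the cascade rule, you would have to delete the child rows first and then the parent, ideally inside one transaction.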
    hth,
    v

  • Best practice for database move to new disk

    Good morning,
    Hopefully this is a straightforward question/answer, but we know how these things go...
    We want to move a SQL Server Database data file (user database, not system) from the D: drive to the E: drive.
    Is there a best practice method?
    My colleague has offered "ALTER DATABASE XXXX MODIFY FILE" whilst I'm more inclined to use "sp_detach_db".
    Is there a best practice method or is it much of a muchness?
    Regards,
    Andy

    Hello,
    A quick search of the MSDN blogs does not show any official statement about ALTER DATABASE - MODIFY FILE vs ATTACH. However, you can see a huge number of articles promoting and supporting the use of ALTER DATABASE in every scenario (replication, mirroring, snapshots, Always On, SharePoint, Service Broker).
    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/04/27/how-to-move-publication-database-and-distribution-database-to-a-different-location.aspx
    http://blogs.msdn.com/b/sqlcat/archive/2010/04/05/moving-the-transaction-log-file-of-the-mirror-database.aspx
    http://blogs.msdn.com/b/dbrowne/archive/2013/07/25/how-to-move-a-database-that-has-database-snapshots.aspx
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/02/06/how-to-move-databases-configured-for-sql-server-alwayson.aspx
    http://blogs.msdn.com/b/joaquint/archive/2011/02/08/sharepoint-and-the-importance-of-tempdb.aspx
    You cannot find the same about ATTACH. In fact, I found the following article:
    http://blogs.msdn.com/b/sqlcat/archive/2011/06/20/why-can-t-i-attach-a-database-to-sql-server-2008-r2.aspx?Redirected=true
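    As a sketch, the ALTER DATABASE route typically looks like the following; the database name MyDB and logical file name MyDB_Data are placeholders (check the real logical name in sys.master_files first):

        ALTER DATABASE MyDB SET OFFLINE;

        ALTER DATABASE MyDB MODIFY FILE
            (NAME = MyDB_Data, FILENAME = 'E:\SQLData\MyDB.mdf');

        -- Move the physical file to E:\SQLData, then:
        ALTER DATABASE MyDB SET ONLINE;

    This keeps the database registered in the instance throughout, unlike detach/attach.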
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, if they are away from the office or make a big oops mistake, it doesn't ever hit the server, and 2) you can lock a file to avoid conflicts, and 3) if you don't lock the file and a conflict (two simultaneous edits) occur, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things - and many people ended up keeping critical files that should have been shared only on their own hard drives. So now I'm setting up a file share for them, which they will use in addition to the subversion repository.
    I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync; Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the file share. Is there any way people can lock a file on the file share, to indicate to others that they are working on it and that others shouldn't modify it? Is there a way to use Unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM SP1 R2. I have read a blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best-practice process, for dealing with expired updates.
    On one hand I was hoping to keep each software update group intact, to have a history of what was deployed, but I also want to keep things clean and avoid the issues I used to have in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • HTTP/HTTPS on the same ACE VIP - best practice

    I currently have a VIP representing one server farm that contains two http servers:-
    class-map match-all VIP-HTTP-xxxxx.co.uk
    2 match virtual-address 10.79.18.10 tcp eq www
    class-map match-all VIP-SSL-xxxxx.co.uk
    2 match virtual-address 10.79.18.10 tcp eq https
    I have port 80 and 443 open on the VIP and SSL termination performed on the ACE (both http servers are the same and configured for default load balancing behaviour - I've also specified port 80 for ACE to server traffic). Having 80 and 443 on the same VIP (meaning the site can be accessed via one NAT'd external IP) came from a request from the business so the site can have one domain.
    The majority of the http server(s) web content is standard http but there is a specific sub-directory of interactive forms that requires https termination.
    I have a couple of queries with regards to URL re-writes:-
    1) Is the SSL URL re-write functionality limited to just the host part of the URL or can the ACE enforce https for specific sub-directories, i.e. can the ACE intercept and re-write a URL if a user tries to go to a particular https page/directory using http (by just deleting the s from the URL within their browser)? A possible example being:-
    ssl url rewrite location "www\.cisco\.com\secure-forms"
    2) Can the ACE re-direct users back to a standard http page if they try to 'secure' their session by changing http to https within their browser (basically the opposite of the above).
    Basically as I have 80 and 443 on the same VIP I'm interested in the best practice methods of enforcing http and https content segregation using just the ACE (as opposed to having Apache doing the re-writes, etc).
    Web services functionality (in terms of SSL and URL re-writes) has traditionally fallen within the domain of a dedicated web development team (who use Apache, Tomcat, etc.). However, the introduction of the ACE as a load-balancing appliance that is primarily managed by the networks team, but with functionality that crosses traditional team boundaries, has resulted in lots of questions from web development about what functionality can be moved from Apache, etc. onto the ACE.
    Any advice or personal experiences would be gratefully received.
    Thanks
    Matthew

    Back again!
    Could someone possibly cast their eye over the following config?
    The only bit I'm not sure about (syntactically, and whether it can even be done on the ACE) is how to specify a DO-NOT-match regular expression, i.e. how to capture HTTPS URLs that do not match my secure pages so I can redirect the request back to the normal HTTP URL (class-map type http loadbalance Non-Secure_Pages). What I'd like to avoid is redirecting requests that don't need it, i.e. redirecting all requests that don't match /secure back to HTTP when the majority will be correctly going to a normal HTTP URL:
    rserver host server1
      description *** HTTP server 1 ***
      ip address 10.100.194.2
      inservice
    rserver host server2
      description *** HTTP server 2 ***
      ip address 10.100.194.3
      inservice
    rserver redirect REDIRECT_TO_HTTPS
      webhost-redirection https://www.website.co.uk/%p 302
      inservice
    rserver redirect REDIRECT_TO_HTTP
      webhost-redirection http://www.website.co.uk/%p 302
      inservice

    class-map type http loadbalance Secure_Pages
      match http url /secure.*
    class-map type http loadbalance Non-Secure_Pages
      *** DO NOT *** match http url /secure.*
    class-map match-all VIP-HTTP-website.co.uk
      2 match virtual-address 10.79.18.10 tcp eq www
    class-map match-all VIP-SSL-website.co.uk
      2 match virtual-address 10.79.18.10 tcp eq https

    policy-map type loadbalance first-match VIP-LB-HTTP-website.co.uk
      class Secure_Pages
        serverfarm REDIRECT_TO_HTTPS
      class class-default
        serverfarm serverfarm-website.co.uk
    policy-map type loadbalance first-match VIP-LB-SSL-website.co.uk
      class Non-Secure_Pages
        serverfarm REDIRECT_TO_HTTP
      class class-default
        serverfarm serverfarm-website.co.uk

    serverfarm host serverfarm-website.co.uk
      failaction purge
      rserver server1 80
        probe PING_SERVER
        probe http-website.co.uk
        inservice
      rserver server2 80
        probe PING_SERVER
        probe http-website.co.uk
        inservice
    serverfarm redirect REDIRECT_TO_HTTPS
      rserver REDIRECT_TO_HTTPS
        inservice
    serverfarm redirect REDIRECT_TO_HTTP
      rserver REDIRECT_TO_HTTP
        inservice
    many thanks

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are about 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impact on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));

    begin
      for i in 1..1000000 loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /

    begin
      for i in 1..1000000 loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...

    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism, which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
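    For reference, a sketch of the alternatives mentioned above, in plain Oracle DDL (object names are illustrative):

        -- A cached, non-ordered sequence avoids the cross-instance ordering cost:
        create sequence txn_seq start with 1000000000 cache 10000 noorder;

        -- A reverse-key index spreads inserts away from the "high-value" block,
        -- at the price of making range scans on txnid ineffective:
        create index test1_txnid_rix on test1 (txnid) reverse;

    Each option trades one cost for another, which is why the right choice depends on the concurrency profile and on whether gaps and strict ordering actually matter.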
    Regards
    Jonathan Lewis

  • Building a best practice web application using ColdFusion and Jave EE

    I've been tasked with rewriting a piece of software using ColdFusion. I cannot seem to find a lot of information on best-practice development in ColdFusion. I am an experienced Java developer who has never used ColdFusion before, and I want to build this application using a synergy of ColdFusion and Java EE technologies. Can someone recommend a book that outlines how to develop in ColdFusion? Ideally the book assumes the reader is an experienced developer with no exposure to ColdFusion, and the methods it outlines are still "best practice" methods.

    jaisheela wrote:
    Hello Friends,
    I am also in the same situation.
    I am building a new web application using JSF and AJAX.
    The requirement is that I need to use the IBM versions of Dojo and JSF, but I need to develop the whole application using Eclipse 3.3.2 and Tomcat 5.5.
    With the IBM versions of Dojo and JSF, will Eclipse and Tomcat help to speed up development, or do you suggest I go for Rational Application Developer and WebSphere Application Server?
    If I need to go with RAD and WAS: I am new to both, so is it easy to use RAD and WAS for this kind of application and to implement the web application quickly?
    Any feedback would be a great help.

    Those don't sound like requirements of the system to me. They sound more like someone wants to improve their CV/resume.
    From what I've read recently, if it's just speed you want, look at Ruby on Rails.

Maybe you are looking for

  • Report output in character mode

    Hi, I have a group by report with sub totals after each grouping. I need this report output in a text file in character mode. The output looks fine, but the sub totals are not being displayed in character output format. When I run it to the screen th

  • Help with Oracle Database XE edition tutorial

    Hi, I am a student and our instructor has told us to install 11g and now has us running through the 10g tutorial and I've run into a number of problems. There is one issue that I cannot find the solution for and was hoping that somebody could give me

  • BUG: Minimize button doesn't work correctly

    Hi, Using WL 8.1 SP4 Portal: When you minimize a portlet, the title will change. It will take the title of the portlet that is above the minimized portlet. regards, hummin

  • File to MS-Access database

    Hello All, I have been trying with scenario where in File content needs to create records in MS-Access database. As it is MS-Access there would not be any specific driver installation and i configured everything as specified and required but still i

  • Need to Create Service Orders from Hand Held Devices

    Hi, Could any one please provide me the Function Module for creation of service orders from Hand Held devices? Thanks and Regards, Gopinath Addepalli.