Ask: Training Scheduling Best Practice Methods

Hello,
Can anyone share the best method for training scheduling? I mean things like given days, given timeslots, material distribution, class/participant distribution, etc.
I am on my second SAP implementation on an OCM team, so I would like to hear from you so I can make improvements.
Thanks a lot.
-Ilham A. Pratomo.

Hi,
I have handled the Knowledge Transfer phase of an AMS project, and I feel that at least 4 weeks is sufficient for a complete knowledge transfer from the existing vendor to the new vendor. That estimate assumes precise planning; planning is the key to success in this kind of takeover. The following points may help in planning:
1. Spectrum of systems / modules involved / processes / level of customisation
2. Availability of resources for providing training and taking the knowledge transfer
3. The network and the other infrastructure to be made available
4. *Stakeholder commitment in providing the knowledge transfer*. This is generally the toughest in hostile takeovers.
5. Language barriers during K.T.
Once these things are clear, the plan should cover all possible processes, process chains, integrations with other modules, involvement of interfaces, etc.
The success of the K.T. should also be evaluated by asking the trainees to deliver a reverse K.T. every week on the topics taught to them.
Finally, documentation of all the processes, programs, and interfaces is most important.
Even after this, there has to be a monitoring phase to ensure that knowledge has spread uniformly across all members of the team. Where needed, internal KTs should be arranged within the team.
By doing all this, we were able to be productive from the third month onwards into the Managed Services phase.
Hope this approach helps you.

Similar Messages

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure ran on Windows Server and provided database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the Report Developers had a standard System DSN called "APP_DATA", which matched the System DSN name on all of our DEV, TEST/UAT, and PROD Business Objects servers.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info; it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now that the hardware is moving from Windows to Red Hat Linux, we are trying to determine the best practices (and pros/cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special handling of the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connections. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" for running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with the results or speed of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit IP of the Oracle server to be defined for each connection. This would require special handling of the *.rpts at the source-file level (and NOT the CMC level) to change them from DEV -> TEST -> PROD connections. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and low overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some infos.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that stays the same across all environments.
    The database name will then be resolved differently depending on the environment, and therefore will access a different database.
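    For illustration, a tnsnames.ora entry in the spirit of this advice might look like the sketch below (host and service names are made up); each environment's file maps the same alias to its own server, so reports promoted from DEV to PROD need no connection changes:
    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dev-oracle01)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = APPDB))
      )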
    A second option is to change the connection in .rpt files in an automated way, e.g. with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; a few lines of code can change all the reports in one go.
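    As a rough illustration of such a script, here is a hedged Java sketch using the BusinessObjects Enterprise Java SDK; the CMS host and credentials are placeholders, and the per-report connection update is left as comments because the exact RAS calls depend on the SDK version:
    import com.crystaldecisions.sdk.framework.CrystalEnterprise;
    import com.crystaldecisions.sdk.framework.IEnterpriseSession;
    import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
    import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
    import com.crystaldecisions.sdk.occa.infostore.IInfoStore;

    public class RepointReports {
        public static void main(String[] args) throws Exception {
            // Log on to the CMS (host, user, and password are placeholders).
            IEnterpriseSession session = CrystalEnterprise.getSessionMgr()
                    .logon("Administrator", "password", "cmsHost:6400", "secEnterprise");
            IInfoStore infoStore = (IInfoStore) session.getService("InfoStore");
            // Fetch the Crystal Reports objects from the repository.
            IInfoObjects reports = infoStore.query(
                    "SELECT TOP 1000 * FROM CI_INFOOBJECTS WHERE SI_KIND='CrystalReport'");
            for (int i = 0; i < reports.size(); i++) {
                IInfoObject report = (IInfoObject) reports.get(i);
                // Open the report (e.g. via the RAS ReportClientDocument) and update
                // its database logon/server here, then save it back to the CMS.
            }
            session.logoff();
        }
    }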
    After several implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but at higher volumes you will see the difference.

  • Dynamic Scheduling Best Practice -- IS-U-BF-PS E1DY E2DY

    I have been tasked with resolving several long-standing issues with my company's Meter Reading Schedules. My question stems from the desire to implement the eventual corrections as close to a best-practice standard as possible.
    Near the end of 2009 I extended the Dynamic Schedule Records out to the end of 2010 with transaction E1DY.
    At the beginning of 2010 I reported a program error which resulted in [Note 1411873|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=0001411873&nlang=E&smpsrv=https%3a%2f%2fwebsmp103%2esap-ag%2ede&hashkey=9D07D6F4306CBF2AF0B69DEE0022142E| Schedule record: Previous period end in dynamic scheduling]
    I requested clarification of the comment:
    "In certain operational scenarios that are not explicitly forbidden (but which are strongly advised against), the end of the previous period of the first schedule record of a series of a year may not be calculated correctly in the dynamic scheduling"
    & was advised:
    it means cases where you don't have a full sequence of MRUs.
    The standard process of dynamic scheduling is designed on the assumption that you have several readings for every day (consolidated in meter reading units / MRUs).
    There was no further clarification other than confirmation that the configuration existing in our system did not match this ideal condition.
    The Current Design of Dynamic schedules is as follows:
    1. No Budget Billing implemented at all. All Portions defined with a parameter record Without Budget Billing configured
    2. Several Groups of Monthly Portions allocated to Calendar Z3
    2a.     21 Monthly Portions
    2b.     21 Monthly Portions
    2c.     21 Monthly Portions
    2d.     21 Monthly Portions
    2e.     21 Monthly Portions
    2f.     20 Monthly Portions
    2g.     1 Monthly Portion
    2h.     1 Monthly Portion
    - Please note that this results in day 21 of 2a-2e not including day 1 of the 2f Monthly Portions as intended. This forces manual movement of the 20 Monthly Portions in transaction E2DY, one by one.
    - Please note that for portions in groups 2d & 2e there is a "gap" in the config where the factory calendar is not assigned for days 12 & 13 in the series, resulting in a gap in schedule record creation.
    3. Many Meter Reading Units are configured for each portion.
    My intended changes to the configuration are as follows:
    4. No change to Budget Billing
    5. All Groups of 21 Monthly Portions (2a - 2e) share the same configuration, so change all Meter reading units for (2a - 2e) to 2b (least change)
    6. 2g is configured the same as day 14 of groups 2a - 2e, move to 2b equivalent
    7. 2h is configured the same as day 15 of groups 2a - 2e, move to 2b equivalent
    8. 2f is configured on Calendar Z3, so update configuration to Calendar ZL
    9. Generate schedule records for Calendars Z3 & ZL
    Having read all the above, can anyone expert in the design & implementation of Dynamic Scheduling think of any issues which may arise from updating the configuration as described?
    If anything is unclear, let me know; I'm definitely interested in feedback to help ensure the corrections are made smoothly, and to clarify what the "operational scenarios that are not explicitly forbidden (but which are strongly advised against)" mentioned in the SAP Note actually are.
    Also as a final question, how feasible would it be to delete the unused portions after these changes are migrated?
    regards
    Daniel

    I have started on point 8 first:
    after moving all 2f portions to calendar ZL & re-entering the Meter Reading Units to resync the calendar configuration, I used E3DY to delete schedules on calendar ZL from a future date.
    This has eliminated the offending schedules on these portions from the Z3 Calendar.
    Point 9:
    using E1DY to generate the schedules & E2DY to "merge" them with the end of the older schedules still on Calendar Z3 has resulted in the expected 20 day cycle.
    I am now dealing with the portions still on the Z3 calendar by regenerating them via E1DY & moving to the correct dates via E2DY to verify the schedules.

  • Crystal Reports - scheduling best practice

    Hello,
    I would like to get some clarity on the best practice for scheduling Crystal Reports in a managed life cycle model. We are on XI 3.1 SP3.
    We have public reports that are scheduled as recurring instances for various daily/monthly reports. Is it ideal for power users to schedule these reports in DEV and then, when we move the objects over to PROD, have the schedule settings carried over? That way the same reports in PROD would keep their schedule settings. However, as I understand it, for this to work I would need to migrate the recurring instances via Import Wizard, which would also bring over historical instances (with static Dev data) to Prod.
    The other option is to move over just the report objects (their settings come over automatically) without the instances, which would mean users have to set up similar recurring schedules in PROD again?
    Thanks!

    @Guru - so in that case, what would the plan be for any future migrations from Dev to Prod for the same report(s)? When a future migration overwrites a report object, will the recurring schedule set up in Prod be affected, possibly disappear? Also, how would you recommend handling a situation where the same report has different scheduling properties in the two environments -- how do we avoid overwriting the schedule properties in Prod with those from Dev when migrating the Crystal Report objects?
    Thanks!

  • ESS MSS Best Practice Methods

    When implementing Employee Self-Service and Manager Self-Service, what are the best practices for creating IDs? Do most use Active Directory? Or employee IDs and/or generated numbers?
    Or would it be beneficial to use employee IDs so that ABAP programming could make updates automatically?
    I would like to know what methods some of you would recommend.
    Thanks.

    Thanks for clarifying!
    Most companies I have observed use the AD alias, which is also the SMTP name of the email address and is easy to associate with the IT 0105 PERNR via the employee's first and last name, which appear in the user master address data as well.
    For example: the first 7 characters are the last name, the last character is the first character of the first name, etc. => 'BUSSCHEJ'.
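    A toy Java sketch of that derivation rule (variable values invented for illustration):
    // First 7 characters of the last name + first initial, upper-cased.
    String lastName = "Bussche", firstName = "Jan";
    String alias = (lastName.substring(0, Math.min(7, lastName.length()))
            + firstName.charAt(0)).toUpperCase();   // -> "BUSSCHEJ"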
    But then again, if your AD name is a generated number or cryptic value, then why not call yourself S123456789 like here at SDN, or R2D2 for that matter.
    Using the personnel number is another option, but you should first check where else it is used. Perhaps it is like the US Social Security Number, which is meant to be kept "top secret" like a password...

  • Best Practice Method Signature

    One thing that has bugged me for a while is whether it is better to return a method's result as a return value or to alter a passed parameter.
    Here is an example
    SomeObject s = T.someFunction();

    SomeObject someFunction() {
        return new SomeObject("name");
    }

    or

    SomeObject s = new SomeObject();
    T.someFunction(s);

    void someFunction(SomeObject s) {
        s.setName("name");
    }

    Here I am assuming the outcome is the same (s.name will be "name"), but one version returned a new object and one used the object reference passed by value to the method.
    What are the implications regarding the heap and future maintainability? Are there any, or is it simply personal preference as to which is used?

    I agree with both of you; I prefer using the return value.
    Doing things this way tends to limit the responsibility of each method to performing one specific task. The other way, there is a tendency to make the method do more, for example setting data on several passed parameters, rather than splitting these tasks and returning the value to set on each parameter.
    heres what I mean
    void doLots(ItemA a, ItemB b) {
        a.setSomething("test");
        b.setSomethingElse("test2");
    }

    instead of

    String doLittleToA() {
        return "test";
    }

    String doLittleToB() {
        return "test2";
    }

    and calling it like this

    a.setSomething(doLittleToA());
    b.setSomethingElse(doLittleToB());
    My reason for asking is that I have been working on some code that looks to have been ported from a C/C++ environment, and I wanted to get the opinion of a few Java developers, as I think the approach needs changing to suit OO.
    Thanks guys

  • Best practices Struts for tech. proj. leads

    baseBeans engineering won best training by the readers of JDJ and published the first book on Struts, called FastTrack to Struts.
    The upcoming class is live in NYC on 5/2, from 7:30 AM to 1:00 PM. We will cover DB-driven web site development, process, validation, Tiles, multi-row, J2EE security, DAO, development process, SQL tuning, etc.
    We will teach project tech leads methods that will increase the productivity of their teams and review best practices, so that they can benchmark their environment.
    Sign up now for $150; the price will be $450 as we get closer to the date (the price goes up every few days). The web site to sign up on is baseBeans.net.
    You will receive a lab/content CD when you sign up.
    Contact us for more details.
    • We preach and teach simple.
    • We use a very fast DAO DB layer, with a DAO-side data cache.
    • We use JSTL.
    • We use a list-backed bean with a DAO helper design for access to any native source and to switch out the DAO.
    • We use J2EE security: container-managed declarative authorization and authentication (no code; works on any app server).
    • Struts-based Content Management System. A Struts menu entry like this:
    <Item name="About_Contacts" title="About/Contacts"
          toolTip="About Us and Contact Info" page="/do/cmsPg?content=ABOUT" />
    passes the action the param of "about", which the DAO populates.
    You can peek at the source code at sourceforge.net/projects/basicportal or go to our site baseBeans.net. (16,000 downloads since Oct. 2002)
    Note that the baseBeans.net is using the Content Management System (SQL based) that we train on. (our own dog food)
    Note: We always offer money back on our public classes.
    Vic Cekvenich
    Project Recovery Specialist
    [email protected]
    800-314-3295
    <a href="baseBeans.net">Struts Training</a>
    ps:
    to keep on training, details, best practice, etc. sign up to this mail list:
    http://www.basebeans.net:8080/mailman/listinfo/mvc-programmers
    (1,000 + members)

    Hi,
    We use only the Stateful release mode for application modules, defined in the action mappings in struts-config.xml exactly the same way as in your example. Stateful mode releases the module instance back to the pool, where it can be reused by other sessions as well. However, all the code that uses the app modules, view objects, etc. must be written with the assumption that the module or view object the code is operating on can be a different instance from the one in the previous request in the same session.
    The concept of BC4J is that this recycling of modules should be transparent to the users of the app modules, but this is not exactly the case. Some things are not passivated in the AM's snapshots and are not activated on recycling, for example custom view object properties or entries in the userData map (or at least they were not in 9.0.5; I doubt this has changed in 10.1.2). These are things you have to manually passivate and activate if you use them to store information that is relevant to a particular user session.
    In all likelihood, the strange things you are experiencing only occur in sessions that use recycled application modules, that is, where there was passivation and subsequent activation of VO and AM states. I have found it useful, as a minimum, to test the application with only one application module in the pool and at least two user sessions, constantly recycling this one AM instance. Many of the problems that would otherwise surface only under high load in real application usage can be reproduced in this artificial setup.
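    One way to force that artificial setup is to pin the pool to a single, aggressively recycled instance via the BC4J pool properties, e.g. in the application module configuration (property names are from the BC4J/ADF pool settings and should be verified against your release):
    jbo.ampool.maxpoolsize=1
    jbo.ampool.maxavailablesize=1
    jbo.recyclethreshold=0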

  • Transports:  Best practice in reversing old transports not moved to PRD

    Hello Guru's,
    I have a lot of old transports moved from DEV to QAS that don't work or are irrelevant and need to be reversed. These transports were made by a consultant no longer with the company, and there is no documentation of what was changed.
    What is the best method of reversing these changes?
    I appreciate your help!

    Next time, don't ask for the best practice. You would likely get more responses if you simply asked "What would you do..?"
    Rob

  • Best Practices for Remote Data Communication?

    Hello all
    I am developing a full-fledged website in Flex 3.4 and Zend Framework (PHP). I am using the Zend_AMF class in Zend Framework to communicate data to the remote server.
    I will be communicating with the database in the following ways...
    get data from the server
    send form data to the server
    send requests to the server and get data in response
    Right now I have created just a simple login form which sends two fields, username and password, to a method in the service class on the remote server.
    Here is a little peek into how I did that...
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
      <mx:RemoteObject id="loginService" fault="faultHandler(event)" source="LoginService" destination="dest">
        <mx:method name="doLogin" result="resultHandler(event)" />
      </mx:RemoteObject>
      <mx:Script>
        <![CDATA[
          import mx.rpc.events.FaultEvent;
          import mx.rpc.events.ResultEvent;
          import mx.controls.Alert;
          private function resultHandler(event:ResultEvent):void {
            Alert.show("Welcome " + txtUsername.text + "!!!");
          }
          // Minimal handler for the RemoteObject's fault event declared above.
          private function faultHandler(event:FaultEvent):void {
            Alert.show(event.fault.faultString, "Login failed");
          }
        ]]>
      </mx:Script>
      <!-- Login Panel -->
      <mx:VBox>
        <mx:Box>
          <mx:Label text="LOGIN"/>
        </mx:Box>
        <mx:Form>
          <mx:FormItem>
            <mx:Label text="Username"/>
            <mx:TextInput id="txtUsername"/>
          </mx:FormItem>
          <mx:FormItem>
            <mx:Label text="Password"/>
            <mx:TextInput id="txtPassword" displayAsPassword="true" width="100%"/>
          </mx:FormItem>
          <mx:FormItem>
          <mx:Button label="Login" id="loginButton" click="loginService.doLogin(txtUsername.text, txtPassword.text)"/>
          </mx:FormItem>
        </mx:Form>
      </mx:VBox>
    </mx:Application>
    This works fine. But if I create a complicated form with many fields, it would be almost unbearable to send each field as a separate argument of the method.
    Another option is to use HTTPService, which supports XML-style requests and responses.
    What are the best practices in Flex for remote data communication on a large scale? For example, using classes or objects which store the data? Can somebody guide me on how to approach data storage?
    Thanks and Regards
    Vikram

    Oh yes, I have studied Cairngorm, though I haven't really applied it. I understand that it helps in separating the data models, presentation, and business logic into various layers.
    What I am looking for, though, is something about the data models, maybe?
    Thanks and Regards
    Vikram

  • GUI Design Best Practices

    Still learning Java and OOP. The question is: what is the best-practice method for designing the user interface? For instance, I have a main GUI that has all the basic stuff, and then I need 5 or so different displays. A couple of them need to be modal, so I'm creating those as JDialogs. Is this correct, or should you not use those?
    Actually, I want all of the additional screens to be modal, and I don't see another way of making a normal JFrame modal, which is why I'm asking before creating every screen as a JDialog.
    Also, I have one screen that could have 3 different functions. Is it acceptable to create the screen with all of the components and just hide the ones I don't need at any given time, or should I actually create 3 different screens that all look roughly the same?
    Thanks for any input/advice.

    ShosMeister wrote:
    So what's the difference, or more importantly, what's wrong with using JFrames? If you are creating a stand-alone, non-web app, then you will of course create a JFrame and place your app in it, but I'm suggesting that the app not extend JFrame; rather, simply create a JFrame when you need it, place a JPanel in the JFrame's contentPane, pack it, and display it. The reasons for not extending JFrame are several, but they mostly boil down to a general preference for composition over inheritance. There are many blogs dedicated to discussing this paradigm which can be found with Google, and here are two decent articles from the first page of my search:
    [JavaWorld: Inheritance versus composition: Which one should you choose?|http://www.javaworld.com/javaworld/jw-11-1998/jw-11-techniques.html]
    [Object Composition vs. Inheritance|http://brighton.ncsa.uiuc.edu/~prajlich/T/node14.html]
    One error caused by extending a Swing class via inheritance, which I saw in a recent thread in the Swing forum, involved a class that extended JLabel and held x and y int variables. The class had setX(int i), setY(int i), getX(), and getY() methods, and thereby unknowingly overrode JComponent's own similar methods, completely messing up the JLabel's ability to be positioned correctly.
    Are you saying I should only have one "window" and swap the data constantly in that window? Nope. I am saying that you should emulate other windows-like programs that you use. Most use a combination of panel swapping, modal and non-modal dialogs, ... whatever works best for the situation. But most importantly, you should write your code so that it is easy to change from one to the other with a minimal change in your code. Aim for flexibility of use.
    Would that require that all the JPanels were the same size to be able to display correctly? If you swapped with a CardLayout and created your JPanels to be flexible with sizing, the CardLayout would take care of this mostly for you.
    So if I create a main JPanel, set it up with all of the components that I want, when I run the program and main() is called, it would create the JFrame and drop the JPanel into itself? Not sure I've seen any examples of that, so I'll have to look that one up. Yes, you'd drop the JPanel into the JFrame's contentPane. There are plenty of examples here, but it may take some digging to find them.
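    As a minimal sketch of that advice (all class and screen names invented): build the screens as JPanels, swap them with a CardLayout, and let a plain JFrame be nothing more than the shell:
    import java.awt.CardLayout;
    import javax.swing.*;

    public class MainGui {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                CardLayout cards = new CardLayout();
                JPanel content = new JPanel(cards);

                // Two example screens; modal screens would be JDialogs instead.
                JPanel home = new JPanel();
                home.add(new JLabel("Home screen"));
                JButton toSettings = new JButton("Settings");
                toSettings.addActionListener(e -> cards.show(content, "settings"));
                home.add(toSettings);

                JPanel settings = new JPanel();
                settings.add(new JLabel("Settings screen"));

                content.add(home, "home");
                content.add(settings, "settings");

                // Composition: the app lives in JPanels; the JFrame is created, not extended.
                JFrame frame = new JFrame("Demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setContentPane(content);
                frame.pack();
                frame.setLocationRelativeTo(null);
                frame.setVisible(true);
            });
        }
    }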
    Unless of course I've completely misunderstood you, which is possible since, as I've mentioned, I'm just learning Java. I think you are understanding what we suggest here. You are asking the right questions, so I predict that you will learn Java quickly.
    Thanks!!!! Welcome!

  • Best practice "changing several related objects via BDT" (Business Data Toolset) / Mehrere verbundene Objekte per BDT ändern

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object which uses the BDT framework for maintenance (in our case, insured objects). While changing or creating the insured object, several related objects, e.g. a Business Partner, should also be changed or created. So I am searching for a best-practices approach to implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately, this implementation method has only poor options for error handling. It is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we haven't got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report on your implementations so that we can find a best-practice approach for such requirements.
    BR/VG
    Dominik

  • Best Practice for link to WebdynPro page in welcome page

    Hi Experts,
    I am new to SAP Portal and need some guidance from you guys. I have a requirement to create a welcome page, which is a JSP and has a link to a WebDynpro page. I have to put the URL in the JSP file, but I do not know what kind of URL I should put in the JSP.
    The problem is that if I put the URL I see in the address bar, like 'http://DevServer/WebDynPro/ApplcationA', then when I transport it to another server, for example Production, the real URL might change to 'http://ProdServer/WebDynPro/ApplcationA'. That may cause the link in the JSP to stop working.
    I would like to ask you the best practice for this case. What URL? What configuration?
    Thank you in advance,
    Noppong Jinbunluphol
    P.S. I created the JSP in a portal application DC.

    Dear Noppong,
    You can do it in multiple ways, for example:
    1. Get the current host name and build the complete URL for the WebDynpro iView using that host name:
    IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest();
    HttpServletRequest req = request.getServletRequest();
    StringBuffer strURL = req.getRequestURL();
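    For example (the WebDynpro application path below is hypothetical; adjust it to your application), a host-independent absolute URL can then be assembled from the request, so the same JSP works on DEV and PROD:
    String wdUrl = req.getScheme() + "://" + req.getServerName() + ":" + req.getServerPort()
            + "/webdynpro/dispatcher/local/MyApp/ApplicationA"; // adjust the path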
    2. Create a KM Document or Link for the WebDynpro iView, OR create a WPC Web Page for the WebDynpro iView.
    Refer to [http://help.sap.com/saphelp_nw70/helpdata/en/06/4776399abf4b73945acb8fb4f41473/frameset.htm|http://help.sap.com/saphelp_nw70/helpdata/en/06/4776399abf4b73945acb8fb4f41473/frameset.htm]
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/en/ff/681a4138a147cbabc3c76bde4dcdbd/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/en/ff/681a4138a147cbabc3c76bde4dcdbd/content.htm]
    Hope this helps.
    Best Regards
    Arun Jaiswal

  • Best practice when deleting from different table simultainiously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas", like having to delete the row in the foreign-key table first?
    I am reading thread:http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil

    Without knowing many details about your specifics, I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application: automatically delete any FK-related child entries prior to deleting the master, or refuse to delete the master until the user explicitly deletes the children; it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master the deletes cascade automatically. This is typically declared when creating the FK constraint (ON DELETE CASCADE and ON UPDATE CASCADE rules), as in the sketch below.
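    For instance (table and column names invented), the declarative form looks like this in standard SQL:
    CREATE TABLE parent (
        id INTEGER PRIMARY KEY
    );

    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
            ON DELETE CASCADE  -- deleting a parent row also removes its children
    );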
    hth,
    v

  • Best practice for database move to new disk

    Good morning,
    Hopefully this is a straightforward question/answer, but we know how these things go...
    We want to move a SQL Server Database data file (user database, not system) from the D: drive to the E: drive.
    Is there a best practice method?
    My colleague has suggested "ALTER DATABASE XXXX MODIFY FILE", whilst I'm more inclined to use "sp_detach_db".
    Is there a recommended method, or is it much of a muchness?
    Regards,
    Andy

    Hello,
    A quick search on MSDN blogs does not show any official statement about ALTER DATABASE - MODIFY FILE vs ATTACH. However, you can see a huge number of articles promoting and supporting the use of ALTER DATABASE in almost any scenario (replication, mirroring, snapshots, AlwaysOn, SharePoint, Service Broker):
    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/04/27/how-to-move-publication-database-and-distribution-database-to-a-different-location.aspx
    http://blogs.msdn.com/b/sqlcat/archive/2010/04/05/moving-the-transaction-log-file-of-the-mirror-database.aspx
    http://blogs.msdn.com/b/dbrowne/archive/2013/07/25/how-to-move-a-database-that-has-database-snapshots.aspx
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/02/06/how-to-move-databases-configured-for-sql-server-alwayson.aspx
    http://blogs.msdn.com/b/joaquint/archive/2011/02/08/sharepoint-and-the-importance-of-tempdb.aspx
    You cannot find the same about ATTACH. In fact, I found the following article:
    http://blogs.msdn.com/b/sqlcat/archive/2011/06/20/why-can-t-i-attach-a-database-to-sql-server-2008-r2.aspx?Redirected=true
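    For reference, the usual ALTER DATABASE sequence is sketched below (database and logical file names are placeholders; the logical name can be checked in sys.master_files first):
    -- Take the database offline, repoint the file, move it, bring it back online.
    ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Data, FILENAME = N'E:\SQLData\MyDB.mdf');

    -- Physically move MyDB.mdf from D: to E:\SQLData\ while the database is offline.

    ALTER DATABASE MyDB SET ONLINE;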
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, if they are away from the office or make a big oops mistake, it doesn't ever hit the server, and 2) you can lock a file to avoid conflicts, and 3) if you don't lock the file and a conflict (two simultaneous edits) occur, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up with critical files that should be shared on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups, and mirroring (rsync, Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and others shouldn't be modifying it? Is there a way to use Unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]
