CMS Tracks - best practice?

Hi,
we are developing our product with CVS right now and want to move over to DTR. The basic concepts are clear and I have already done a test migration, which was successful.
But I am unclear on the change management piece:
Let's say we develop a version 1.0
Now this version has service packs 1.0 SP1, SP2, SP3 and so on. These service packs also have to be maintained; they might contain bugs, so you could end up with something like
1.0 SP1 Patch 1, 1.0 SP1 Patch 2 and so on.
How do I handle this with CMS tracks? What's the best practice? Do I set up a track for every major version and for every support package in that version, i.e. I would have tracks 10SP0, 10SP1, 10SP2, 10SP3 and so on? Will this work?
Right now we have a lot of CVS tags and branches to make this work... but how do you do that in DTR? I need to be able to jump back to a specific version and SP and fix bugs in there if a customer needs it.
In CVS the concept is that I will develop in HEAD and bugfix in branches (which is all in the same repository / "workspace"). But in DTR how do I do it? Is there something analogous to this? Or do I always just use the track with the highest version number as the "HEAD"?
Any input is appreciated.
Thanks
Bruno

Hello Bruno,
For each state of your product that you wish to maintain, you must create a track. So in your case, you will have a track structure as follows:
Track1.0
Track1.0_SP1
Track1.0_SP2
DTR does not support tags (yet), so the state that you wish to retain for possible future fixes must be isolated in a workspace of a given track. That is, "Track1.0_SP1" will contain the workspaces that represent the SP1 state, and a fix for SP1 must be done in this track.
And you must develop on the Main Release track ("Track1.0") and do the bugfixes in the track for the appropriate SP. You should set up a transport connection of type "Repair" from each SP track to the Main Release track, so the fixes you make in the SP track are automatically back-transported to the Main Release track. (This connection can be set up in the "Track Connections" tab in the CMS Landscape Configurator.)
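To make this concrete, here is a rough sketch of the resulting landscape (track names as above; the arrows stand for the "Repair" transport connections configured in the Track Connections tab):
Track1.0       <- Main Release track: ongoing development of the next version
Track1.0_SP1   --Repair--> Track1.0   (maintains the SP1 state; fixes flow back)
Track1.0_SP2   --Repair--> Track1.0   (maintains the SP2 state; fixes flow back)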
Also note that the DTR version graph represents a global version history, so for any file you will be able to view the changes made in the different tracks (workspaces) from the Version Graph view (in the DTR Perspective of the SAP NetWeaver Developer Studio).
Regards,
Manohar

Similar Messages

  • Best Practices Used in CMS

    Hi,
    Can anyone share the best practices used in CMS transports?
    Basically, why I need this is that we want to keep track of all the transports that are done in QA/Prod.
    Regards,
    Sreenivas

    Hi
    You can try checking this document for info about CMS:
    (How To… Transport XI Content Using CMS)
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f85ff411-0d01-0010-0096-ba14e5db6306
    Also...
    How to configuring the CMS for XI?
    Business System Groups - CMS

  • Import files into bridge from a CMS - best practice

    Hi All,
    I am trying to come up with the simplest solution for the following:
    We have a CMS in which our content is managed and edited.
    The content editor might like to edit video files that reside in our asset management system.
    Our editing tool is Adobe Premiere.
    My thought was to transfer the video file along with its metadata and edit instructions from our asset repository to an FTP server, and then transfer these files into the local video editing environment using Bridge. After editing, the file should be transferred back into the asset repository via the FTP server or similar.
    It seems the way to do this would be to extend the Bridge functionality using the JavaScript SDK, but I would love to know if there is a best-practice solution before we start detailed design and development.
    your help is highly appreciated and thanks in advance,
    Deena

    Not sure what advice you have been given and how you have interpreted it.
    You are of course referring to im6 here since you can't drag projects from im08 to iDVD.
    I don't make many DVDs anymore and may well be wrong here, but I'm not aware that dragging an im6 project to iDVD is any different from using share/iDVD, and I'm wondering if the advice relates to exporting from im6 and then importing the exported movie into iDVD, which is indeed a different workflow.

  • Best practice to lock a track ?

    I have an ESS track that I want to lock against any code modifications.
    How do I do this?
    Lock the track ?
    CLOSE the buildspace ?
    Is there a best practice ?

    Hi Henrik,
    The best way is to lock the track and give display-only access to all users.
    Please look into below documents.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f341af-e86e-2910-3e8a-d9e3c227d938
    http://help.sap.com/saphelp_nw70/helpdata/en/c0/5a1b42d10b5633e10000000a155106/frameset.htm
    Regards
    Praveen

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation? 
    When accessing database from a C# application, which is better to use, direct SQL commands or views? 
    A detailed description of what is best to do in such a scenario would be great. 
    Thanks in advance.
    Edit:
    Table columns are:
    [MessageID], [MessageUnit], [MessageLong], [MessageLat], [MessageSpeed],
    [MessageTime], [MessageDate], [MessageHeading], [MessageSatNumber], [MessageInput],
    [MessageCreationDate], [MessageInput2], [MessageInput3], [MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issues start: they are pulling their data from our server over the internet while 2000 units are sending data to our server, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that the one who wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd better use stored procedures or views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance.
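    A small JDBC sketch of the direction discussed above: one consolidated table instead of 2500 per-device tables, plus row-versioned reads so reports stop blocking behind the inserts. The DeviceMessages table, the DeviceID column, the connection string and the column types are illustrative assumptions; READ_COMMITTED_SNAPSHOT needs SQL Server 2005 or later and, once enabled, applies automatically to queries at the default isolation level.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class MonthlyReportQuery {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost:1433;databaseName=Tracking", "reportUser", "secret")) {
                // One-time database setting (usually run separately by a DBA, in a maintenance
                // window): readers see the last committed version of a row instead of blocking
                // behind the constant inserts from the units in the field.
                try (Statement st = con.createStatement()) {
                    st.execute("ALTER DATABASE Tracking SET READ_COMMITTED_SNAPSHOT ON");
                }
                // With one table keyed by device, a single parameterized query (or a view /
                // stored procedure wrapping it) replaces the per-device tables.
                String sql = "SELECT MessageTime, MessageSpeed, MessageIO, MessageSatNumber"
                           + " FROM DeviceMessages"
                           + " WHERE DeviceID = ? AND MessageDate BETWEEN ? AND ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, 1001);            // assumed device key
                    ps.setString(2, "2014-01-01"); // assumed date column/format
                    ps.setString(3, "2014-01-31");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("MessageTime") + " "
                                    + rs.getString("MessageSpeed"));
                        }
                    }
                }
            }
        }
    }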

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table - let's call it the master page. One can click on the first column of the master page and it takes them to the "detail" page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (session scope): SELECT * FROM Users
    Detail CachedRowSet (session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() on the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some workaround to resolve this problem, but I think this should be a fairly common usage (two-page CRUD with master-detail). If we can discuss and document some best practices for doing this, it will help all the developers.
    Discussion:
    1. What is the best practice for setting the scope of the data providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the data provider but session scope for the associated CachedRowSet.
    2. What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3. How to keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on?
    Thanks
    Message was edited by:
    Sabir

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.
    Message was edited by:
    Sabir
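    For what it's worth, a minimal sketch of option 1 from the question above, in Java Studio Creator style; the provider fields, the error() helper and the navigation outcome are assumptions based on the generated page beans and are named here only for illustration.
    // Save button handler on the detail page bean.
    public String saveButton_action() {
        try {
            // Write the edited detail row back to the database.
            detailDataProvider.commitChanges();
            // Option 1: refresh the master row set in this session so the
            // master page reflects the change immediately.
            masterDataProvider.refresh();
        } catch (Exception e) {
            error("Could not save changes: " + e.getMessage());
            return null;
        }
        return "master"; // navigation case back to the master page
    }
    As noted in option 1 above, this only refreshes the session that saved the change; other sessions would still need their own refresh (for example in the master page's preRender()).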

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects.  For example, what is and when do you define Tracks, Software Components, Development Components, etc.  All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see).  We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as “sources” in a repository.
    A development component (DC) can be defined as a frame shared by a number of objects that are part of the software.
    Software components (SCs) combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) is for versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find a lot of support in SDN for the above concepts, with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further
    Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • SAP Best Practice for Document Type./Item category/Acc assignment cat.

    What is the best practice for the document type & item category?
    I want to use NB with item categories B & K (blanket PO), D (service) and T (text).
    Does SAP recommend using FO only for the blanket purchase order?
    We want to use service contracts (with / without service entry sheet) for all our services.
    We want to buy assets for our office equipment.
    Which is the best one to use, NB or FO?
    Please give me any OSS notes or reference for this
    Thanks
    Nick


  • Best Practice for Managing Cookies in an Enterprise Environment

    We are upgrading to IE11 for our enterprise. One member of the team wants to set a group policy that will delete all cookies every time the user exits IE11.  We have some websites that users access that use cookies to track progress in training, but those cookies are deleted when the user closes the browser.  What is the business best practice regarding deleting all history, temporary internet files and, especially, cookies when closing a browser?
    If you can point me to a white paper on this topic, that would be helpful.
    Thanks
    Bill

    Hi,
    Regarding cookie settings, we could manage IE privacy settings using Administrative templates for IE 11:
    Administrative templates and Internet Explorer 11
    Delete and manage cookies
    The Administrative Templates for IE 11 can be downloaded from here:
    Administrative Templates for Internet Explorer 11
    Hope this may help
    Best regards
    Michael Shao
    TechNet Community Support

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
    For best practice, I need to know what the recommendation or guideline is concerning these two database parameters.
    I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 sec (30 min).
    Maybe someone can guide me on these two parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track the full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will see the checkpoint SCN and the completion periods.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written to the datafile headers. For just a really tiny amount of time the database can be consistent even though it is open and in read/write mode.
    The ARCH_LAG_TARGET parameter is disabled and set to 0 by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you want to set this parameter, Oracle recommends 1800, as you have said. This can vary from database to database, and it is better for you to verify it through your own testing.
    Regards.
    Ogan
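    A minimal JDBC sketch of applying the two settings discussed above; the connection details are illustrative assumptions, SCOPE=BOTH assumes an spfile, and the documented parameter names are ARCHIVE_LAG_TARGET and LOG_CHECKPOINTS_TO_ALERT.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CheckpointSettings {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection; any account with the ALTER SYSTEM privilege will do.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "system", "secret");
                 Statement st = con.createStatement()) {
                // Force a log switch (and archive) at least every 1800 seconds, per the guideline above.
                st.execute("ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH");
                // Record checkpoints in the alert log so full/incremental checkpoints can be tracked.
                st.execute("ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH");
            }
        }
    }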

  • Best Practice for Portable Home Directories

    What are the 'best practice' directories to sync for Portable Homes - at login and in the background. I want to make my user experience a little better than it is now.
    Login and logout take about 2 minutes - even over 100Mb Ethernet, and longer using AirPort - and 'background' home directory syncing seems to always consume all of my network bandwidth, making apps like Safari unusable, even though I have barely changed anything in the folders I am syncing.
    My personal home directory is 1.5GB, and I keep my Music, Pictures and Movies on the network - as Apple suggests.

    I generally recommend the following for the least impact on user experience:
    1. Put your server and clients that will use mobile accounts and portable homes on a Gigabit Ethernet switch. It's a small price to pay for much more customer satisfaction.
    2. Put more RAM in the server, especially if you're dealing with a few users with large homes or several users with moderately-sized (less than 1.0GB) ones. This will also let you employ server-side tracking (for 10.5 server).
    3. Only sync at login/logout. Use Workgroup Manager to define all portable preferences. Choose to manage the login/logout sync, and specify the items to sync; for the whole home, use "~". Omit things like ~/.Trash. Choose to manage the background sync, but remove all items from the "sync these items" list. Choose to manage the background sync interval by setting it to manual. This way, the user doesn't accidentally configure a background sync: we've told it to sync nothing unless we say it can.
    --Gerrit

  • Best practice for Tags

    Hello,
    In packaged applications, tags are used in most of the apps. E.g. in the Customer Tracker app, we can add tags to a customer, where these tags are stored in a varchar2 column in the Customers table.
    In my case, I have predefined tags for Properties (Real Estate) in a lookup table called TAGS, e.g. Full floor, Furnished, Fitted, Duplex, Attached... What is the best practice to tag the properties:
    1- To store these tags in a varchar column in PROPERTIES table using Shuttle box.
    OR
    2- To store them in a third table Eg, PROPERTIES_TAGS (ID PK, PROPERTY_ID FK , TAG_ID FK ), Then use LISTAGG function to show the tags in one line in the Properties Report.
    OR
    Do you have a better option ??
    Regards,
    Fateh

    Fateh wrote:
    Hello,
    In packaged applications, tags are used in most of the apps. E.g. in the Customer Tracker app, we can add tags to a customer, where these tags are stored in a varchar2 column in the Customers table.
    In my case, I have predefined tags for Properties (Real Estate) in a lookup table called TAGS, e.g. Full floor, Furnished, Fitted, Duplex, Attached...
    These appear to me to be two different use cases. In the packaged applications the tags allow end users to attach free-form metadata to data for their own purposes (these are sometimes called "folk taxonomies"). Users may use tags for different purposes, or different tags for the same purpose. For example, I might add "Monday", "Thursday" or "Friday" tags to customers because those are the days they receive their deliveries. For the same purpose you might tag the same customers "1", "8", and "15" using the route numbers of the trucks making the deliveries. You might use "Monday" to indicate that the customer is closed on Mondays...
    In your application you are assigning known, predefined attributes to the properties. This is a standard 1:M attribute model. Displaying them using the tag metaphor does not make them equivalent to free-form user tags.
    What is the best Practice to tag the properties:
    1- To store these tags in a varchar column in PROPERTIES table using Shuttle box.
    If you do this, how do you:
    - Efficiently search for furnished duplex properties?
    - Globally change "fitted" to "built-in"?
    - Report the number of properties, broken down by full floor, duplex, fitted...
    OR
    2- To store them in a third table Eg, PROPERTIES_TAGS (ID PK, PROPERTY_ID FK , TAG_ID FK ), Then use LISTAGG function to show the tags in one line in the Properties Report.
    As Why to use Look up Table, this is the correct way to do this. It enables the data to be indexed for efficient retrieval, and questions like those above can be handled simply using joins and grouping.
    You might want to investigate the possibility of eliminating the ID PK and using an index organised table for this.
    OR
    Do you have a better option ??
    I'd also look carefully at your data model. Ensure you're not flirting with the EAV anti-pattern. Should some/all of these values not simply be attributes on the property?
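    A minimal sketch of option 2 (the intersection-table approach) for illustration: the PROPERTIES_TAGS columns come from the question, while the TAG_NAME column, the connection details and the exact report query are assumptions; LISTAGG requires Oracle 11gR2 or later.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PropertyTagReport {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XE", "app", "secret");
                 Statement st = con.createStatement()) {
                // One row per property, tags collapsed into a single column for the report.
                String sql =
                    "SELECT p.property_id, " +
                    "       LISTAGG(t.tag_name, ', ') WITHIN GROUP (ORDER BY t.tag_name) AS tags " +
                    "FROM properties p " +
                    "JOIN properties_tags pt ON pt.property_id = p.property_id " +
                    "JOIN tags t ON t.tag_id = pt.tag_id " +
                    "GROUP BY p.property_id";
                try (ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("property_id") + ": " + rs.getString("tags"));
                    }
                }
            }
        }
    }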

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this?
    I am already using the accessKey functionality and the subform's defaultCommand.
    But I have had problems setting focus to objects on a page, like tables. I would like a button to return the focus to the table after it has executed a command like delete.
    I have implemented a solution for which I found inspiration in several threads and other webpages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better paths to take, like
    out-of-the-box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Thanks in advance.
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There have to be a binding in the jsff page to delete, commit and rollback
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the java class I use
    --- It is published in adfc-config.xml as a request-scope managed bean
    // Note: JSFUtils is the helper class referenced by the poster (findComponentInRoot,
    // getFacesContext); it is not part of the ADF API.
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.event.ActionEvent;
    import oracle.adf.model.BindingContext;
    import oracle.adf.view.rich.component.rich.nav.RichCommandButton;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;
    import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
    import org.apache.myfaces.trinidad.util.Service;

    public class ClientUtils {

        public ClientUtils() {
        }

        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String actionToDo = (String) rcb.getAttributes().get("actionField");
            UIComponent component = JSFUtils.findComponentInRoot(focusOn);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) || "Rollback".equals(actionToDo)) {
                // Execute the page-definition operation binding named after the action.
                BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding = bindings.getOperationBinding(actionToDo);
                operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks =
                Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found if located in a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = JSFUtils.findComponentInRoot(id);
            return component.getClientId(JSFUtils.getFacesContext());
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there are a best practice for this?
    Well, HTML (and this is the user interface you see) follows a tab-index navigation that you follow with "tab" and "shift+tab". Anything else is a shortcut, for which you use mnemonics (as you already do) or shortcut keys (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between non-web environments (which I think you and your users have a background in) and client desktop environments. Browsers block some keyboard functionality for their own purposes, so you may have to find a list of keys first that work across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here and avoid relying on buttons at all.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in : http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register short cut keys that perform a specific action (e.g. register ctrl+d to delete the current row you are on, or press F11 to execute a query (similar to Oracle Forms frmres files)). However, be aware that this includes some code you have to write (actually quite some code to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (are there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually these are implementations, as they come with example code for you to use and customize, don't they? So what more is this question asking for? Also note that global buttons don't quite have anything in common with the question you asked. I assume you want to see it as an implementation of the Forms toolbar that operates on the form or table the focus is in. This however does not work for the web, as there is nothing that keeps track of which component has focus and to what iterator (data block) it belongs. This would involve even more coding (though it is possibly doable).
    Frank

  • Best Practice for Acquisition of Utility Plant Assets from another utility

    My company is located in the United States and will be taking on an initiative of purchasing the Utility Plant assets from another company.  We are governed by Federal Energy Regulatory Commission (FERC) Accounting Standards.  In the guidance for the accounting where one utility purchases the assets of another utility, it states that the purchasing company must account for the Utility Plant Assets in FERC Account 102 at a net book value until FERC approval is received on the sale of the other utility.  Depreciation must be calculated based on the Gross Book Value and applied to this same FERC Account.  What must happen in order to track the assets is that the asset's APC Value and the Depreciation Value must transfer from the utility selling the Plant Assets balance sheet to the utility purchasing the Plant Assets balance sheet respectively.
    As an example:
    Utility ABC is selling their plant assets to Utility XYZ.  The NBV of the plant assets is $60,000,000.  It is broken down to Debit $80,000,000 for the APC Value (in FERC account 101 on Utility ABC's balance sheet) and credit $20,000,000 associated to Depreciation Value (in FERC account 108 on Utility ABC's balance sheet).  When the sale is pending FERC approval the NBV is accounted for on Utility XYZ's balance sheet in FERC account 102.  This amount will be processed to the selling Utility on a PO for the purchase.
    I have configured the Fixed Asset module of SAP to account for the APC Value and the Depreciation Value in separate sub-accounts of FERC Account 102, which are set up as reconciliation accounts on the G/L, in the account assignment of the respective asset classes.  We track our assets based on asset classes that pertain to the FERC Primary Plant Accounts.
    I am trying to load the assets to the Fixed Asset module having the APC Value and the Depreciation Value reported respectively.  If the NBV amount is processed on the PO, what would be the best practice to load the  APC Value and Depreciation Value to the respective assets?
    My first thought would be to process the PO for the NBV of the assets against a generic FERC Account 102, that is not set-up as a reconciliation account.  I would then process an asset transaction using t-code ABSO with transaction type 158 and use the generic FERC account 102 as the Offsetting Account in the entry and using Document Type AA.
    I would like to follow best practice in this scenario.
    You help on this subject would be greatly appreciated.
    Wayne
    Edited by: Wayne Rochon on Mar 31, 2011 9:19 PM

    Thank you very much for your response. 
    I hope I can provide some clarity on how the accounting needs to be handled per FERC Regulations.  The G/L balance on the utility that is selling the assets will be in the following accounts (standard accounts across all FERC Regulated Utilities):
    101 - Acquisition Value for the assets
    108 - Accumulated Depreciation Value for the assets
    For an example, there is Debit $60,000,000 in FERC Account 101 and a credit $30,000,000 in FERC Account 108.  When the purchase occurs, the net book value for the asset will be on our G/L in FERC Account 102.  Once we have FERC Approval to acquire the plant assets, we will need to enter the Acquisition Value and associated Accumulated Depreciation onto our G/L to FERC Account 101 and FERC Account 108 respectively with an offset to FERC Account 102.
    The method that I came up with is to purchase the NBV of the assets to a clearing account.  I then set up account assignments that will track the Acquisition Value and respective Accumulated Depreciation for each asset that is being purchased.  I load the respective asset values using t-code AS91 and then make an entry to the 2 respective accounts with the offset against the clearing account using t-code OASV.  Once my company receives FERC approval, I will transfer the assets to new assets that have the account assignments for FERC Account 101 and FERC Account 108 using t-code ABUMN or FB01.

  • Best Practices - Telco - PM

    Dear All,
    I am in need of some "best practices" in asset management in the telecommunications industry.
    One of my LE clients would like to implement asset management. The focus will be PM - equipment tracing & tracking in the system. A "best-practice" asset management system in their SAP ECC is the idea.
    Insight into best practices for Plant Maintenance and equipment trace & tracking in the telecommunications industry is needed.
    • Asset coding/naming design in the telecommunications industry (especially for "network assets" - whether there is some kind of best practice for how to name the assets (hierarchies, naming conventions, etc.)).
    • Insight into Plant Maintenance's core functionality in Telco.
    • Any tracing & tracking system proposal - barcode and/or RFID technologies for telecommunications asset management - any insight into partners working in this area.
    They are specifically interested in Deutsche Telekom
    Best regards
    Yavuz Durgut - SAP Turkey

    Hi,
    You have a good start.  What you need to do is:
    1.) Find out what the requirements are -- what does your user want? If this is a fact-finding mission (e.g., they want to see what's in the system), then your requirement becomes to load the data in R/3, so figure out what they configured in PM and use those definitions as your requirements.
    2.) Use those requirements to find data in the fields listed in the MultiProvider, InfoCubes, or DataStore Objects sections in your first link ... in other words, now that you know what data to look for, look for it in the Data Targets (MPs, Cubes and DSOs).  If you find some of the data you want, then trace back the InfoSources and determine what DataSources in R/3 load the data you are looking for.  After all that, check those DataSources for any additional fields you may need and add them in.
    So, if your company doesn't maintain equipment costs or maintenance costs for equipment, then you don't have to worry about 0PM_MP04.  Use this type of logic to whittle down to what you really need and want, then activate only those objects.
    Good Luck,
    Brian
