Best Practice for Transport Request Naming

Hi,
We are using SolMan 4.0 during the implementation of ECC 6.0.
We have completed the blueprint and are now in the configuration phase.
We have an IMG project created in the DEV system, and it was assigned to the Solution Manager project under System Landscape -> IMG Projects.
Now that the consultants are logging on to the DEV system and doing customizing, they are creating their own transport requests.
Is there any best practice for the naming convention of transport requests?
Also, is creating one IMG project for the entire implementation going to cause any problems?
Please suggest.
Thanks & Regards
Mrutyunjay

As per MSFT best practices (mentioned by Scott), keep names as short as possible. For example, you can use SP for a SharePoint sub-site.
Also check this blog for best practices:
http://www.networkworld.com/community/blog/simple-naming-conventions-improve-end-user-experience-sharepoint-sites
One more thing you should consider: never use reserved words in SharePoint URLs. You will be able to create the site/list/library/folder, but when you browse to it you will get 404 errors.
Check these blogs:
http://www.sharepointblog.cz/2012/04/reserved-words-in-sharepoint-url.html
http://techtrainingnotes.blogspot.com/2012/03/names-you-cant-use-for-sharepoint.html

Similar Messages

  • SAP Best practice for Material request form

    Hi SAP gurus,
    Do we have any SAP Best Practice for a material request form? If so, please help me find this Best Practice provided by SAP. I searched through SAP Help but was unable to find one.
    In the same way, I also need to find the SAP Best Practice for a change request form.
    Thanking you all in advance.

    Hi,
    Check these links
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2b50ac90-0201-0010-d597-8d833833f9e0
    and use the Service Marketplace link to download the Best Practices:
    http://www.sap.com/services/bysubject/servsuptech/index.epx

  • Best practice for OID Net Naming Configuration in global company

    I'd like some feedback on what approach to take in configuring Net Service Names in OID for a global company. We have multiple sites with multiple groups of DBAs. We're weighing the pros and cons of a single domain within OID for Net service names vs. a separate domain for each distinct group of DBAs that manages service names.
    To the best of my understanding, it is only possible to configure clients to look at a single domain in OID, via the ldap.ora parameter default_admin_context. We have users who access databases across different DBA support areas, so we like the idea of a single domain so that any service name within the enterprise can be resolved by users without having service names entered at multiple levels in the directory.
    However, it is also my understanding that to segregate security of administering service names, it is only possible to do so by having different domains within the directory (it is not possible, or at least not practical, to have different levels of security defined in a single flat domain). I also have concerns about the manageability of service names if they are all listed in a single domain. The list could get rather unwieldy to sort through.
    I would be very interested in opinion or feedback on what others are doing.
    Thanks,


  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 R2 Hyper-V virtualized AD DC that will be running on a new Windows Server 2012 R2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of physical and virtual volumes/partitions/drives,
    the use of SYSVOL, log, and data volumes/drives on hosts and guests,
    RAID levels for the host and the guest(s),
    IDE vs. SCSI (and their drivers), both physical and virtual, and booting from them,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non-essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of the information was pretty solid with little ambiguity, except for information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Server 2008 R2, and we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this.
    I am already using the accessKey functionality and the subform's defaultCommand.
    But I have had problems setting focus to objects on a page, such as tables. I would like a button to return the focus to the table after it has executed a command such as delete.
    I have implemented a solution for which I found inspiration in several threads and other web pages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better ways to go, like
    out-of-the-box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Thanks in advance.
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There has to be a binding in the jsff page for Delete, Commit and Rollback
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the Java class I use
    --- It is registered in adfc-config.xml as a request-scoped managed bean
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.event.ActionEvent;
    import oracle.adf.model.BindingContext;
    import oracle.adf.view.rich.component.rich.nav.RichCommandButton;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;
    import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
    import org.apache.myfaces.trinidad.util.Service;
    // JSFUtils is a helper class from my own project (not shown here).
    public class ClientUtils {

        public ClientUtils() {
        }

        // Executes the Delete/Commit/Rollback operation binding named in the
        // button's "actionField" attribute, then returns focus to the component
        // named in its "focusField" attribute.
        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String actionToDo = (String) rcb.getAttributes().get("actionField");
            UIComponent component = JSFUtils.findComponentInRoot(focusOn);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) ||
                "Rollback".equals(actionToDo)) {
                BindingContainer bindings =
                    BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding =
                    bindings.getOperationBinding(actionToDo);
                Object result = operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        // Sets focus to the component named in the button's "focusField" attribute.
        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks =
                Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found even if it
                // is located in a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = JSFUtils.findComponentInRoot(id);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            return clientId;
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there is a best practice for this?
    Well, HTML (and this is the user interface you see) follows tab-index navigation, which you follow with "tab" and "shift+tab". Anything else is a shortcut, for which you use mnemonics (as you already do) or hotkeys (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between client desktop environments (which I think you and your users have a background in) and the web. Browsers block some keyboard functionality for their own purposes, so you may first have to find a list of keys that work across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here and avoid buttons altogether.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html:
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register shortcut keys that perform a specific action (e.g. register Ctrl+D to delete the current row you are on, or press F11 to execute a query, similar to Oracle Forms frmres files). However, be aware that this involves some code you have to write (actually quite a lot of code, to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually, these are implementations, as they come with example code for you to use and customize, don't they? So what more is this question asking for? Also note that global buttons don't quite have anything in common with the question you asked. I assume you want to see it as an implementation of the Forms toolbar that operates on the form or table the focus is in. This, however, does not work for the web, as there is nothing that keeps track of which component has focus and to which iterator (data block) it belongs. This would involve even more coding (though it is possibly doable).
    Frank

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not that much information available in regards to best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV -> QAS -> PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point-and-shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is an SAP Note (895253) on making Redwood highly available. I am open to comments, inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock), keeping in mind the character length limitation of 30 characters. I also have an associated issue with event naming, given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, we need to include the environment in the event name. The downside here is that we lose transportability for the job stream: we need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development & quality and a separate one for production; this avoids confusion).
    Regarding transporting / replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks, etc. It is also easy and not very time consuming to create them fresh in each system; only the creation of complicated job chains can be time consuming.
    In normal cases the testing of background jobs mostly happens only in the SAP quality instance, and then the final scheduling in production. So it is very much possible to just export the verified script / job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is recommended for fast processing).
    Regarding OSS Note 895253: yes, it is highly recommended to keep your central repository, processing server and licensing information on a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding the conventions for names, it is recommended to create a centrally accessible naming convention document and then follow it. For example, in my company we are using a naming convention for jobs such as Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management, and ZCHGSTA2_AU01_LSV is the free text as provided by the batch job requester.
    For other Redwood Cronacle specific objects you can also derive naming conventions based on SAP instances; for example, if you want all the related scripts / job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, following a consistent naming convention is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is there to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for "who should be creating" the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible; we cannot train so many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best practice documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I logged in as portal and created a private connection. Then, from Oracle Portal, I created a portlet and added a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and they can see the worksheet. When they click the Analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the Analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want to have the ability to display the Analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data; there is no need to restrict access based on SSO username. One database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users with a CAPI script, and when creating the portlet select the 2nd option in the Users Logged In section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • JSF - Best Practice For Using Managed Bean

    I want to discuss the best practice for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like save(ActionEvent)).
    I would be glad to see your thoughts on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent to request scope, the setChildren method is never called when I submit the form. I am not sure if this is just for ADF or if it is a JSF problem. But if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes. It only updates the input component value binding. Someone please verify this is true.
    In many cases, I found a request bean is very hard to work with if there are lots of updates (I have had lots of trouble updating the binding value for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
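    To make the getXXX discussion above concrete, here is a minimal sketch of a session-scoped bean that caches the query result and reloads it only on an explicit refresh, so the getter can be bound many times without re-querying. All names are hypothetical and the data access is stubbed out; it is an illustration of the pattern, not code from this thread.
    import java.util.ArrayList;
    import java.util.List;

    // Register as a session-scoped managed bean (e.g. in faces-config.xml).
    public class OrderListBean {

        private List<String> orders; // cached result "rows" (placeholder element type)

        // Called many times while rendering; only loads when the cache is empty.
        public List<String> getOrders() {
            if (orders == null) {
                orders = loadOrdersFromDatabase();
            }
            return orders;
        }

        // Bind this to a "Refresh" button so the next getter call re-queries.
        public String refresh() {
            orders = null;
            return null; // stay on the same view
        }

        // Placeholder for the real DAO/JDBC query.
        private List<String> loadOrdersFromDatabase() {
            List<String> result = new ArrayList<String>();
            result.add("order-1001");
            result.add("order-1002");
            return result;
        }
    }
    A request-scoped variant of the same bean would simply re-run loadOrdersFromDatabase() on every request, which is exactly the memory-versus-freshness trade-off discussed above.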

  • Best Practice for SSL in Apache/WL6.0SP1 configuration?

    What is the best practice for enabling SSL in an Apache/WL6.0SP1 configuration?
    Is it:
    Browser to Apache: HTTPS
    Apache to WL: HTTP
    or
    Browser to Apache: HTTPS
    Apache to WL: HTTPS
    The first approach seems more efficient (assuming that Apache and WL are both in a secure datacenter), but in that case, how does WL know that the browser requested HTTPS to begin with?
    Thanks
    Alain

    getScheme() should return HTTPS if the client is using HTTPS, or HTTP if it is using HTTP.
    Whether the plug-in uses HTTP or HTTPS when connecting to WebLogic is up to you, but regardless, the scheme of the client will be passed to WebLogic.
    Eric
    "Alain" <[email protected]> wrote in message
    news:[email protected]..
    How should we have the plug-in tell wls the client is using https?
    Should we have the plugin talk to wls in HTTP or HTTPS?
    Thanks
    Alain
    "Jong Lee" <[email protected]> wrote in message
    news:3b673bab$[email protected]..
    The Apache plug-in tells WLS the client is using HTTPS and also passes on the client cert, if any.
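    To illustrate the point about getScheme(), here is a minimal sketch of a servlet running on WebLogic behind the Apache plug-in; according to the replies above, the scheme and secure flag reflect what the browser requested, not the Apache-to-WebLogic hop. The servlet name is hypothetical.
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Reports whether the original client request used HTTPS.
    public class SchemeCheckServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String scheme = req.getScheme(); // "http" or "https" as requested by the browser
            boolean secure = req.isSecure(); // true if the client connection was HTTPS
            resp.setContentType("text/plain");
            resp.getWriter().println("scheme=" + scheme + ", secure=" + secure);
        }
    }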
    "Alain" <[email protected]> wrote:
    What is the best practice for eanbling SSL in an Apache/WL6.0SP1
    configuration?
    Is it:
    Browser to Apache: HTTPS
    Apache to WL: HTTP
    or
    Browser to Apache: HTTPS
    Apache to WL: HTTPS
    The first approach seems more efficient (assuming that Apache and WL
    are
    both in a secure datacenter), but in that case, how does WL know that
    the
    browser requested HTTPS to begin with?
    Thanks
    Alain

  • Best practice for hierarchical DTOs?

    Hi!
    Can someone tell me the best practice for hierarchical DTOs?
    Use case: I've got a User object which holds one Folder object, which in turn holds a Set of Folder objects (children).
    class User {
        Folder rootFolder;
    }
    class Folder {
        Set children;
    }
    Normally, I'd fetch the user data with the help of a DAO from the database and copy the requested properties into a User DTO, which will be transferred to the view. But what about the Folder objects?
    Should I create a Folder DTO class and copy each Folder property into the respective DTO object - in other words, rebuild the whole hierarchy? Or is there a better solution?
    Thanks a lot!
    Walter

    Normally, I'd fetch the user data with the help of a DAO from the database and copy the requested properties into a User DTO, which will be transferred to the view. But what about the Folder objects? Should I create a Folder DTO class and copy each Folder property into the respective DTO object - in other words, rebuild the whole hierarchy? Or is there a better solution?
    It isn't recursive, right? So this is just a standard association. How you handle it depends on usage.
    As a guess, perhaps you are thinking that you can only have one User DTO class. That isn't true. You can have several, for example one that contains the association and one that does not.
    Or you have just one, and either it contains the association or you provide another mechanism that returns just the association given a specific instance of a User DTO (or some other identifier for the specific User).
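    As an illustration of the "rebuild the hierarchy" option, here is a minimal sketch that copies the Folder tree into DTOs recursively and also shows the "two variants" idea from the reply (include or omit the association). All class and method names are hypothetical.
    import java.util.HashSet;
    import java.util.Set;

    // Entity classes, reduced to what the question shows.
    class Folder {
        String name;
        Set<Folder> children = new HashSet<Folder>();
    }

    class User {
        String userName;
        Folder rootFolder;
    }

    // DTO classes sent to the view.
    class FolderDTO {
        String name;
        Set<FolderDTO> children = new HashSet<FolderDTO>();
    }

    class UserDTO {
        String userName;
        FolderDTO rootFolder; // left null when the association is not needed
    }

    class DtoAssembler {

        // Copy the User and, if requested, its whole Folder hierarchy.
        static UserDTO toDto(User user, boolean includeFolders) {
            UserDTO dto = new UserDTO();
            dto.userName = user.userName;
            if (includeFolders && user.rootFolder != null) {
                dto.rootFolder = toDto(user.rootFolder);
            }
            return dto;
        }

        // Depth-first copy of the folder tree.
        static FolderDTO toDto(Folder folder) {
            FolderDTO dto = new FolderDTO();
            dto.name = folder.name;
            for (Folder child : folder.children) {
                dto.children.add(toDto(child));
            }
            return dto;
        }
    }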

  • Best Practice for link to WebdynPro page in welcome page

    Hi Experts,
    I am new to SAP Portal and I need some guidance from you guys. I have a requirement to create a welcome page, which is a JSP that has a link to a Web Dynpro page. I have to put the URL in the JSP file, but I do not know what kind of URL I should put in the JSP.
    The problem is that if I put the URL I can see in the address bar, like 'http://DevServer/WebDynPro/ApplcationA', when I transport it to another server, for example Production, the real URL might change to 'http://ProdServer/WebDynPro/ApplcationA'. That may cause the link in the JSP to stop working.
    I would like to ask you about the best practice for this case. What URL? What configuration?
    Thank you in advance,
    Noppong Jinbunluphol
    P.S. I create the JSP in a portal application DC.

    Dear Noppong,
    You can do it in multiple ways, for example:
    1. Get the current host name and build the complete URL for the Web Dynpro iView using that host name (see the sketch after this reply):
    IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest();
    HttpServletRequest req = request.getServletRequest();
    StringBuffer strURL = req.getRequestURL();  // full URL of the current request
    2. Create a KM document or link for the Web Dynpro iView, OR create a WPC web page for the Web Dynpro iView.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/06/4776399abf4b73945acb8fb4f41473/frameset.htm
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/ff/681a4138a147cbabc3c76bde4dcdbd/content.htm
    Hope it helps.
    Best Regards
    Arun Jaiswal
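    Following up on option 1, here is a minimal sketch of building the Web Dynpro link from the scheme, host and port of the current request, so the same JSP works after transport from DEV to PROD. The helper class name and the application path are only examples, not part of any SAP API.
    import javax.servlet.http.HttpServletRequest;

    public class WdUrlHelper {

        // Builds an absolute URL on whatever host served the current request.
        public static String buildWebDynproUrl(HttpServletRequest req, String appPath) {
            StringBuilder url = new StringBuilder();
            url.append(req.getScheme()).append("://")
               .append(req.getServerName()).append(":")
               .append(req.getServerPort())
               .append(appPath);
            return url.toString();
        }
    }
    Usage from the portal component shown in the reply above (the application path is an example):
    String link = WdUrlHelper.buildWebDynproUrl(request.getServletRequest(), "/webdynpro/dispatcher/vendor/dcname/ApplicationA");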

  • Best Practice for Customization of ESS 50.4

    Hi ,
    We have implemented ESS 50.4 on EP 6.0 SP14 and R/3 4.6C. I want to know the best practice for minor modifications of an ESS transaction. For example, I need to hide the Change button on the Personal Information screen.
    Please let me know.
    PS : Guaranteed award points
    Aneez

    @Aneez
       "Best Practice" is just going to be good ole' ITS custom development. All the "old" ESS services are all ITS based. What can not be done through config is then done by developing custom version of the ESS services. For what you describe (ie. the typical "hide a button" scenario) it is simply a matter of:
    (1) create custom version(ie. "Z" version) of the standard service. The service file will still call the same backend transaction via the ITS parameter ~transaction.
    (2) Since you are NOT making changes that require anything changed on the backend transaction (such as adding new fields, changing business logic, etc) you are lucky to ONLY have to change the web templates. Locate the web template in your new custom service file that corresponds to the screen in the transaction where the "CHANGE" button appears. The ITS naming convention for web templates is <sapprogramname>_<screennumber>.
    (3) After locating the web template that corresponds to your needed screen, simply locate in the HTMLb where the "CHANGE" button code is and comment it out. Just that easy!
    (4) Publish your new customized service and test it out directly through ITS. ie. via the direct URL to it: http://<yourdomain>/scripts/wgate/<yourservice>!
    (5) once you see that it works, you can then make an iView for it in your portal (or simply change the iView you have to now point to your custom ITS service.
    LOTS and LOTS more info on ITS development all around this site and in the ITS sepcific forum.
    Hope this helps!
    Award points or save them...I really don't care. I think the points system here is one of the dumbest ideas since square wheels. =)

  • Best practice for a site with a lot of images?

    I am working on a site that will have over a hundred images, and I wanted to see what the best practice is for designing a site like this. Should I go with XML (please give examples or an explanation), a text file, or just loadMovie("image1project1.jpg", "bottomsec") with named external images that will stay the same? Any help is appreciated on staying up to date with this kind of site.
    Thanks,
    Randy

    OK, I am new, please be nice - I think I want to set it up like this:
    <project1>
    <section>Architecture</section>
    <name>New Building for CREATiVENESS</name>
    <comment>The major challenge to designing this new
    tower was the site constraints  a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project1.jpg</thumb>
    <img1>images/project1img1.jpg</img1>
    <img2>images/project1img2.jpg</img2>
    <img3>images/project1img3.jpg</img3>
    <img4>images/project1img4.jpg</img4>
    </project1>
    <project2>
    <section>Interiors</section>
    <name>New Building for Me</name>
    <comment>The major challenge to designing this new
    tower was the site constraints  a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project2.jpg</thumb>
    <img1>images/project2img1.jpg</img1>
    <img2>images/project2img2.jpg</img2>
    <img3>images/project2img3.jpg</img3>
    <img4>images/project2img4.jpg</img4>
    </project2>
    <project3>
    <section>Architecture</section>
    <name>New Building for You</name>
    <comment>The major challenge to designing this new
    tower was the site constraints  a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project3.jpg</thumb>
    <img1>images/project3img1.jpg</img1>
    <img2>images/project3img2.jpg</img2>
    <img3>images/project3img3.jpg</img3>
    <img4>images/project3img4.jpg</img4>
    </project3>
    <project4>
    <section>Interiors</section>
    <name>New Building for that guy</name>
    <comment>The major challenge to designing this new
    tower was the site constraints  a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project4.jpg</thumb>
    <img1>images/project4img1.jpg</img1>
    <img2>images/project4img2.jpg</img2>
    <img3>images/project4img3.jpg</img3>
    <img4>images/project4img4.jpg</img4>
    </project4>
    but I am not sure of the best way to run through it to find whether an item is in a section, to put it in the menu, and then to call the images and text once they are in a project area. I don't know if
    this.firstChild.nextSibling.childNodes[0].childNodes[2]
    is the best way to reference things in the file. Any help is appreciated. Please let me know the best practices and the easiest way to work with a large XML file.
    Thanks,
    Randy

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one-column keys? Sure wish you had posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables.
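    To make the MERGE suggestion concrete, here is a minimal sketch that inserts only the daily-capture rows whose two-column key is not yet in the final table, collapsing duplicates inside the capture table first. Table and column names are hypothetical, and the statement is wrapped in JDBC only to keep the examples on this page in one language; the same MERGE can be run directly in SSMS.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MergeDailyCapture {
        public static void main(String[] args) throws Exception {
            // Assumed tables: dbo.FinalData(Key1, Key2, Amount) with a unique
            // constraint on (Key1, Key2); dbo.DailyCapture(Key1, Key2, Amount).
            String mergeSql =
                "MERGE dbo.FinalData AS tgt " +
                "USING (SELECT Key1, Key2, MAX(Amount) AS Amount " +   // collapse source duplicates
                "       FROM dbo.DailyCapture GROUP BY Key1, Key2) AS src " +
                "ON tgt.Key1 = src.Key1 AND tgt.Key2 = src.Key2 " +
                "WHEN NOT MATCHED BY TARGET THEN " +
                "INSERT (Key1, Key2, Amount) VALUES (src.Key1, src.Key2, src.Amount);";

            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=MyDb;integratedSecurity=true");
                 Statement stmt = con.createStatement()) {
                int inserted = stmt.executeUpdate(mergeSql);
                System.out.println("Rows inserted: " + inserted);
            }
        }
    }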
