Information on Best Practice usage of PopulateAttribute

Using JDeveloper 11.1.1.4.0
I have a requirement where I use the BC layer as the data service for my UI, but this BC layer is not connected to the database; instead it relies on a web service.
Everything works well based on the NO-DB transaction and programmatic view object approach from Steve Muench's "Not Yet Documented" examples. To further optimize my implementation, I'm trying to utilize populateAttribute, since in my scenario I set the attribute values programmatically and handle validation beforehand.
By definition, populateAttribute lets me set a value without triggering any validation. So far this holds true: neither my business rules nor my LOV model accessor validations are fired. My question mainly focuses on the difference between populateAttribute and populateAttributeAsChanged (see the sketch after the questions below).
1. Which of the two methods should be used in which scenarios (close to best practice :) )?
2. The documentation warns that "Primary key attributes should not be set using populateAttribute APIs". In what cases will this affect my rows?
3. In a click-to-edit table, I can prepopulate my view object without triggering my LOV model accessor validation, but when I start activating rows (clicking them), the accessor validation still seems to fire. (The calls are not particularly expensive, but they are unnecessary.)
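To illustrate question 1, here is a minimal sketch of how I currently fill a row from the web service payload. The row class name, attribute index constants and helper method are just placeholders from my own project, not anything mandated by the framework:

    // Custom row class for the programmatic, web-service-backed view object.
    // Neither call below fires attribute validation; as far as I understand it,
    // populateAttributeAsChanged() additionally marks the attribute as changed,
    // while populateAttribute() leaves the row looking untouched.
    import oracle.jbo.server.ViewRowImpl;

    public class ServiceBackedVORowImpl extends ViewRowImpl {

        // Placeholder attribute indexes; the real ones come from the
        // generated row class / view definition.
        private static final int STATUS_IDX = 2;
        private static final int AMOUNT_IDX = 3;

        // Called while building rows from the web service response,
        // after the values have already been validated against the service.
        public void initFromServicePayload(String status, Number amount) {
            populateAttribute(STATUS_IDX, status);          // not flagged as changed
            populateAttributeAsChanged(AMOUNT_IDX, amount); // flagged as a pending change
        }
    }

My working assumption is that populateAttribute suits values that should only be displayed or cached, while populateAttributeAsChanged suits values that should still count as pending changes, but I'd appreciate confirmation or correction of that.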
Thanks.
-Marvin

Similar Messages

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
    I've just started on a new project. I have significant ABAP development experience (15+ years), but one thing I have never seen used correctly, on any of the projects I have worked on, is the package concept in ABAP.
    I would like to define some best practices about when we should create packages and how they should be structured.
    My understanding of the package concept is that packages allow you to bundle all of the related objects of a piece of development work together. In previous projects, and almost every project I have ever worked on, we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. To me this is a very crude usage of packages; it seems we have not moved on past the 4.6 usage of the old development class concept, and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Jung, Rich Heilman) (I only have the 1st edition) that we should use packages to define separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least 3 packages: one for model, one for controller and one for view-based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion and so also add little value. Is this really the best-practice approach? Has anyone tried this approach across a full-blown implementation?
    As we are starting a new implementation on 7 EHP2, I would really like to get the most out of the functionality that is provided, and I wonder what experience others have with defining packages.
    One possible usage that occurs to me is to define the packages as a mirror image of the application's business object class hierarchy (see below). But perhaps this is overcomplicating their usage and would lead to issues later, for example transport conflicts:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP package concept, from real-life project experience?
    All contributions are most welcome, although please refrain from sending links on how to create packages in SE80.
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live the maintenance partner didn't care.
    So my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • [More information] 'SAP Best Practices Baseline package for Brazil V3.607'

    Hi.
    When I study the 'SAP Best Practices Baseline Package for Brazil V3.607', I have a question.
    I would like a solution to the following problem.
    ---------Problem---------
    In the '100: SAP Best Practices Installation' document, point 3.4 'Define Tax Jurisdiction Code' says:
    Enter the Jurisdiction Codes according to the document SMB41_J_1BTXJURV_B020_NFE.TXT.
    I have searched the internet for this document and the only hit is the actual Best Practices document.
    Does anybody know where to get this document?
    Please reply as soon as possible.
    Thanks.

    Dear Dimitry,
    the Best Practices baseline content is freely available to anyone without charge.
    You can find the whole content at:
    SAP Best Practices package for Russia V3.607 (English)
    SAP Best Practices package for Russia V3.607 (Russian)
    Kind Regards,
    Jan

  • Best practice usage of system users for different RFC functions execution

    Hello experts,
    Could you guys share your thoughts on RFC type 3 system user ID usage:
    Is it recommended to use ONE RFC destination (type 3) for executing various functions?
    Or is it recommended to use different RFC destinations (type 3), with different system user IDs, depending on the functions being executed?
    Thanks in advance for your thoughts.
    Thanks
    Himadama

    Thanks, Julius, for your information.
    Option 1: System user ID / password:
    The password is sent over the network, but SNC would take care of this.
    1 destination = 1 system user ID.
    If the user's password is compromised, the risk would be limited to that destination's function group (RFC_NAME in the S_RFC auth object).
    System users only need RFC authorizations (S_RFC).
    Monitoring only the system users would be enough.
    Better password management if required.
    Option 2: "Current User" option:
    One user may need access to more than one function group (RFC_NAME).
    Compromising a user's password would result in more damage than with a system user, since this user has broad access to execute multiple function groups.
    Monitoring these users would be a heavy job in this case because of the increased number of users.
    The management of authorizations (roles) for these users may require a strict approval process.
    Option 3: Trusted system option:
    • RFC_SYSID:
    • RFC_CLIENT:
    • RFC_USER: ' '
    • RFC_EQUSER: Y (for Yes)
    • RFC_TCODE: *
    • RFC_INFO:
    • ACTVT: 16
    It seems the user requires both the auth objects S_RFCACL and S_RFC in this case.
    Compromising a user's password would result in more damage than with a system user, since this user has broad access to execute multiple function groups.
    Monitoring these users would be a heavy job in this case because of the increased number of users.
    The management of authorizations (roles) for these users may require a strict approval process.
    Would you say the system user ID/password approach is a better option than the other methods, with SNC in place? Please share your thoughts.
    Thanks
    Himadama

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is on Solaris 10, and I'm relatively new to this style of patching. I have a T5240 with 4 LDOMs: a control LDOM and three clients. I have some fundamental questions I'd like help with.
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just zoneadm halt them regardless of state? I.e., if a zone is running a database, will halting the zone essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, reboot only the control, and then reboot the clients after the control comes back up?
    #3. Oracle is running in several of the zones on the client LDOMs; what considerations need to be made for this?
    I am sure other things will come up during the conversation, but I have been looking on Oracle's site for an hour and the only thing I can find is old Sun docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade": you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become clearer after reading the above documents. Do as you normally would before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the docs. I ***strongly*** recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
    #2 First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just have them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    #3. The fact that the Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at top, and setting up a test domain to practice on. It shouldn't be hard for you to find documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.

  • Building a best practice web application using ColdFusion and Java EE

    I've been tasked with rewriting an application using ColdFusion. I cannot seem to find a lot of information on best practice development in ColdFusion. I am an experienced Java developer who has never used ColdFusion before. I want to build this application using a synergy of ColdFusion and Java EE technologies. Can someone recommend a book that outlines how to develop in ColdFusion? Ideally the book assumes the reader is an experienced developer with no exposure to ColdFusion, and the methods it outlines are still "best practice" methods.

    jaisheela wrote:
    Hello Friends,
    I am also in the same situation.
    I am building a new web application using JSF and AJAX.
    The requirement is that I need to use the IBM version of Dojo and JSF, but I need to develop the whole application using Eclipse 3.3.2 and Tomcat 5.5.
    With the IBM version of Dojo and JSF, will Eclipse and Tomcat help speed up development, or do you suggest I go for Rational Application Developer and WebSphere Application Server?
    If I need to go with RAD and WAS: I am new to both, so is it easy to use them for this kind of application and implement the web application quickly?
    Any feedback will be a great help.
    Those don't sound like requirements of the system to me. They sound more like someone wants to improve their CV/resume.
    From what I've read recently, if it's just fast development you want, look at Ruby on Rails.

  • Best Practice: SAPGUI Version and Patch Upgrades

    Hello -
    Does anyone have thoughts/information on best practices relating to SAPGUI version and patch upgrades?
    Obviously, sometimes upgrades are forced upon us (e.g. 7.10 for Vista) and in other cases they may just be considered "nice to have".
    Either way, it always means regression testing and deployment effort. How do we balance the benefit and the cost?
    Thanks, Steve

    Hi Steve,
    you're right for the first part, yes, we (usually) patch twice a year.
    Now for the rest
    An uninstall will only happen on release changes (6.20 -> 6.40 -> 7.10), i.e. about every 4-5 years as SAP releases them.
    Patches are applied to the installation server and the setup on the client will only update changed program parts. For example, upgrading 6.40->7.10 took about 10 minutes (incl. uninstall), applying patch 1 less than 5 minutes.
    I recommend you read the "SAP Frontend Installation Guide - 7.10", which you can find at the SMP alias sapgui; navigate to Media Library - Literature. It explains setting up the installation server (sounds like a big thing, but ain't much more than creating a share), creating packages, applying updates etc.
    Peter
    Points always appreciated

  • Best practice to work with Sybase 12.5.3 database version

    Hi all,
    Is there any document or information about best practices for accessing a Sybase 12.5.3 database from a universe on BOXIR2?
    Thanks.

    Hi Marlon,
    Have a look at the BOXIR2 product guide for [Data access|http://help.sap.com/businessobject/product_guides/boexir2/en/xir2_data_access_guide_en.pdf] and see if it helps.
    Regards,
    Shweta

  • Best Practices Data Extract from Essbase

    Hi there,
    I have been looking for information on Best Practices to extract data from Essbase (E).
    Using MDX, I initially wanted to bulk-extract data from Essbase, but the process apparently never ended.
    As a second choice, I went for a simulation of an interactive interaction and had ODI generate MDX queries requesting smaller data sets.
    At the moment, more than 2000 MDX queries are generated and sent sequentially to Essbase. It takes some time...
    Has anyone been using other approaches?
    Awaiting your reactions.
    regards
    JLD

    What method are you using to extract, and what version are you on, including the patch level?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • BizTalk monitoring best practice(s)

    I am looking for information on best practices to monitor a BizTalk environment (ideally using Tivoli monitoring tools). Specifically, I am looking for insight into what should be monitored and how one analyzes the performance profile. Thanks

    While setting up monitoring agents/products for BizTalk Server (or for any server/application, for that matter), there are two ways to start:
    If available, import/install application-specific monitoring packages, i.e. pre-built monitoring rules, alerts and actions specific to the application.
    Or create the rules/alerts and actions from scratch for the application.
    For monitoring products like SCOM, management packs for BizTalk Server are available as pre-built, ready-made packages. For a non-Microsoft product like Tivoli, check with the vendor/IBM for any such pre-built monitoring packages for BizTalk. If available, purchase and install it in Tivoli. This would be the best option to start with, instead of spending time and resources on building the rules, alerts and actions from scratch.
    If a pre-built monitoring package is not available, then start by creating rules to monitor any errors or warnings from the event logs of the BizTalk and SQL servers. Gradually, you can update/add more rules based on your needs.
    Regarding analysing the performance profile, most monitoring products nowadays come with pre-built alerts for monitoring server performance, CPU utilization, etc. I'm sure a renowned product like Tivoli will have pre-built alerts for monitoring server performance; the same can be configured to monitor BizTalk's performance. Monitoring event log entries would also pick up any performance-related issues.
    Moreover, Tivoli has a detailed user guide document for setting up alerts for BizTalk Server. Check the document here.
    Reading the best practices links provided by MaheshKumar will also help you.
    The key point to remember is that no monitoring product is perfect; you can't create fool-proof monitoring alerts and actions on day one. It will mature over time in your environment.
    If this answers your question, please mark it accordingly. If this post is helpful, please vote as helpful.

  • CRM  - how to work with Best Practices

    Hi All,
    we will start with the implementation of mySAP CRM during the next weeks.
    I'm a bit confused - how should I work with Best Practices, and what are the differences between the 3 ways of accessing Best Practices:
    1) we have a 'Solution Manager' System where I can use Best Practices for CRM
    2) Best Practices on help.sap.com: http://help.sap.com/bp_crmv250/CRM_DE/html/index_DE.htm (Building Blocks!)
    3) Best Practices DVD to install on the CRM System
    Are the 3 ways interchangeable? Is there any information provided by SAP?
    We have already installed the Best Practices DVD, but now I don't know how to use this add-on: is there a special transaction code to use it, or an extension for the IMG?
    regards
    Stefan

    Hi Stefan Kübler,
    If the solution manager is in place, then the suggested (also the best) method is to use the best practices on it.
    If you want to install and use the best practices with the CRM system, the procedure is given on the best practices CD/DVD. You can also download the installation procedure from the link below: http://help.sap.com/bp_crmv340/CRM_DE/index.htm. Click on 'Installation' on the left and then 'Quick Guide' on the right, and download the document.
    Though the best practices give you a way to start, they can't replace your requirements. You have to configure the system as per your exact business requirements.
    I have never installed best practices before, but I have used them extensively as a reference in all my projects.
    Follow the thread below for additional information on best practices:
    Also refer to my past thread:
    Do not forget to reward if it helps.
    Regards,
    Paul Kondaveeti

  • APO reporting - Best practice

    Hi,
    I am looking for information regarding best practices and business content for APO reporting (especially within demand planning) in SAP BI. I have found some business content on help.sap.com, but it is, for example, only 2 queries, and the project I am working on definitely needs more; they still want to stick to standard/best practices.
    Can anyone help me or share any ideas?
    regards,
    Malin

    Hi,
    there are 4 data sources for PP/DS:
    0APO_PPDS_OPERATION_01
    0APO_PPDS_ORDER_01
    0APO_PPDS_PROD_CUST_01
    0APO_PPDS_RESCAPREQ_01
    Best regards
    Thomas

  • IPv6 hardening Best Practices?

    Hi,
    I have been searching everywhere for information about best practices for hardening Cisco devices when IPv6 is implemented. I have found many documents describing possible threats, but some of them are more than 3 years old and don't give a good example of how to implement the best practices.
    Does anyone have information or a guide on how to harden your devices when IPv6 is in place?
    Thank you

    Hi,
    You will find some good references in the Design Zone for IPv6. Many of the documents there have been updated recently.
    http://www.cisco.com/en/US/netsol/ns817/networking_solutions_program_home.html
    For example, see the IPv6 Campus Security section below:
    http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/CampIPv6.html#wp390569
    One of the best older references out there is IPv6 Security, 2008
    http://www.ciscopress.com/bookstore/product.asp?isbn=1587055945
    See also IPv6 for Enterprises, 2011
    http://www.ciscopress.com/bookstore/product.asp?isbn=1587142325
    If you keep track of the Cisco Press ebook deals of the day you can purchase them at a heavily discounted rate.
    http://www.ciscopress.com/deals/
    Don't forget to rate posts that are helpful.

  • WLI Best Practices

    Hi all!
    I have been given the task of constructing the pilot workflow project in my company. We have decided to use WLI 2.1 as our workflow platform. I am a bit new to this particular product.
    I am a bit perplexed about how best to organize my workflows. I was hoping to gain a few "best practices" from those who have traveled this road before.
    Originally, I only had to deal with one company, so I was going to define each "department" as an organization and have separate roles defined within each. I could then define workflows for each organization and simply have them interact. This seems too fine-grained and does not give me the ability to track a workflow across organizations very easily.
    I have just been told that I now need to incorporate two of our offices, so I guess that determines the organizations that I need. How do I now represent the departments, and the roles within them? Do I use a standard naming scheme for the roles, i.e., HumanResources_Manager, etc.?
    So what I am asking for is some pointers to any information or best practices that will help me determine the best way to handle a "company-department-role" hierarchy with the WLI product.
    Thanks in advance to all those who take the time to share their wisdom.
    Peter Giesin

    You can get it on the BEA website.
    Geetha

  • Best practice for CPU and memory usage?

    I find my AIR application takes a lot of memory -- usually >170 MB. What is strange is that the memory usage keeps increasing (about 4 KB/s) even when the application is simply sitting there doing nothing. The CPU usage should be 0% when the application is doing nothing, but it isn't (usually ~5%). So I wonder if there is any article about best practices for CPU/memory usage.

    Those numbers indicate that your application is in fact doing something. Perhaps you have a timer still running, or work being done on an enterFrame event?

Maybe you are looking for

  • Help: Connecting Tomcat to CA-IDMS Using JDBC Type 4 Drivers (JNDI)

    Hi there, I have a rather interesting / complex problem......creating a connection to CA-IDMS from Tomcat using JDBC type 4 drivers (CA provide the type 4 driver). We have a zSeries 9 IBM mainframe running CA-IDMS r16.1, and I need to connect to the

  • TCS3 Index ToolsPro conditional text problem

    I'm using Index ToolsPro to handle the index in my current project. This creates the conditional text setting, which is then imported into the RH project as a conditional build tag. This is used by default in RH to hide the Index ToolsPro index entri

  • Problem installing Oracle 10.2.0 on Windows XP

    when im installing 10.2.0 enterprise edition in windows xp professional... after the database name and password given im getting this message like " please wait this will take a moment" after that the installation got disappeared. plz povide me solut

  • Create DFF using OA Framework

    Hi, I have a requirement where i have to create a DFF usng OA Framework. When i checked the Developer's Guide they have mentioned that first we have to create the DFF in APPS and then use the same in OA Page using Flex item. My question is without cr

  • Satellite Pro A30: CD/DVD drive not working - How to restore OS?

    I want to run the recovery discs to restore my laptop to original condition but the cd/dvd rom is not working. Does any one have an idea how I can do this either by connecting my laptop to another pc or is there a programme on the web I can access to