Best practice: Computations or Source

Hi,
A question about best practice.
If you write a SQL query to get a value out of the database for a field on your page, what is the best way to do this?
- By setting a computation (with the query) on that field
- Or by writing the query in the Source attribute of that field
Both will have the same result.
But what's the best practice?

Hello Davy,
>> Both will have the same result.
That’s not entirely correct.
If you are using the SQL query in the item's Source value or expression field, the retrieved value will not set Session State. A computation, on the other hand, will set Session State.
Also, bear in mind that whether the Source fires depends on how you set the Source Used attribute. If you set it to Only when … it will be fired only when the value of the item in session state is null.
>> But what's the best practice?
There isn’t any. It actually depends on what you need to achieve. If, for example, you need to set Session State, you only have one option.
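For illustration, the same single-row query could be used in either place; a minimal sketch (the item, table and column names are just placeholders):

    SELECT ename
      FROM emp
     WHERE empno = :P1_EMPNO

Used as the item's Source, it only populates the item when the page renders (subject to the Source Used setting); used as a computation, it also writes the result into Session State.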
Regards,
Arie.
♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
♦ Author of Oracle Application Express 3.2 – The Essentials and More

Similar Messages

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers that have many binaries on them to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this
    not considered best practice? 
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use any UNC path as a source location for content, you should be aware of where that content lives relative to your primary site server. If your source content is in Montana but your primary server is in California, there's going to be a WAN hit ... even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it. This will prevent side-loading of content as well as any "accidental changing" of the folder structure that could cause your applications/packages to go crazy.
    Put the two together, and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications. You can then safely create batch installers, manage content versions, and so on without fear of someone running something out of context.

  • Best Practice Computer Upgrading

    I recently replaced my beloved circa-2006 MacBook Pro with a new 13" MBP. I used Migration Assistant to migrate my apps to the new computer. This did not move my iPhoto Library. I'm hesitant to do a Time Machine restore to the new computer because I would like to get a "fresh start", so to speak. The new computer is soooooo much better than my old one and I'm afraid doing the TM restore would just put a lot of unwanted junk on it.
    What would be the best way to get items like my iPhoto Library off the old computer? What about my iTunes Library? I do use iTunes Match, so I can access all my matched music, but should I keep it on a hard drive somewhere? I mostly use Dropbox for my documents, so I'm not too worried about those.
    I'd appreciate any suggestions from the group!
    Thanks.

    Using Migration Assistant, you should be able to move your user accounts too, including everything in the iPhoto and iTunes libraries. If I were you, I'd either just use it again or, since you have a Time Machine backup, only move the folders that you need into your admin folder.
    Just my 2¢...
    Clinton

  • Best Practice for Multiple Sources

    Hello gurus:
    The scenario is the following
    I have a material which can be purchased from Supplier A or from Supplier B using a 60/40 distribution.
    What set of steps should I follow from here to make sure that every time a purchase requisition is triggered from MRP, a quota arrangement is suggested to buy the goods from A and B?
    Is it possible to change the quota arrangement once it is confirmed by MRP?
    Sometimes we will decide the source only at the time of converting to the Purchase Order; how do we handle this if there is no master data like a quota arrangement?
    Regards,

    What set of steps should I follow from here to make sure that every time a purchase requisition is triggered from MRP, a quota arrangement is suggested to buy the goods from A and B?
    Create the quota arrangement, and in the source list enter 1 in the MRP field.
    Is it possible to change the quota arrangement once it is confirmed by MRP?
    The quota arrangement cannot be changed once it has been assigned by MRP, unless you delete the requisitions generated by MRP.
    Sometimes we will decide the source only at the time of converting to the Purchase Order; how do we handle this if there is no master data like a quota arrangement?
    If you don't have a quota arrangement, the system will suggest a source from the source list (provided you have not marked a fixed source in the source list).

  • Best practice deployment - Solaris source and target

    Hi,
    What is the recommended deployment approach for an ODI instance under Solaris? I have a Sybase source and an Oracle target, both of which are on Solaris. I plan to put my ODI master and work repositories on another Oracle DB on the Solaris target machine. Now where does my agent sit, since my source and target are Solaris? I plan to administer ODI from my Windows clients, but:
    Where and how do I configure my agent so that I can schedule scenarios? It would make most sense to be able to run the agent on my target Solaris machine; is this possible? If not, do I have to have a separate Windows server that is used to run the agent and schedule the jobs?
    Thanks for any assistance,
    Brandon

    Thanks for the reply. I can't find anything in the installation guide about Solaris specifically, but it says to follow the instructions for "Installing the Java Agent on iSeries and AS/400" when the download OS is not supported.
    So it seems I just need to make some directories on the Solaris host and manually copy the files into these directories, and as long as a Java SDK/runtime is there I can use the shell scripts (e.g. agentscheduler.sh) to start and stop the agent.
    So my question, I guess, is: since the supported OS downloads are only Windows and Linux, where do I copy the files from, the Linux ones? Is it right to say that since these are Java programs I should be able to copy the Linux ones and use them under Solaris?
    I don't have the Solaris environment at hand to test this just yet... hence the questions.
    thanks again

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but think that perhaps I need general community advise on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to the post, the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips are appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday I developed a simple image cropper using Ajax and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the install folder.
    This didn't concern me much at first, but come to think of it, one question keeps nagging at me:
    "What is the best practice in securing deployed source files?"
    How do we secure an application's installed source files from being tampered with, especially after the application has been installed? E.g. modifying the spraydata.js files can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on
    first run and save these hashes to EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, does not address the case where the main app's swf / swc / html itself is decompiled.)

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop on a computer, and for doing conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and if/when you have a failure or loss of data you'll be glad it's there.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Best practice to deal with computer or departed employee

    Dear All,
    I would like to inquire about the best practice for dealing with the computer and the computer account of a departed employee. Should they be disabled, reset, deleted, or just kept as they are until needed by another user?
    Regards,
    Hiam

    Ultimately your needs for their identities and equipment after they leave are what dictate how you should design this policy.
    First off, I recommend disabling the account immediately following the employee's departure. This prevents the user from using their credentials to log on again. Personally, I have a "Disabled Users" OU in Active Directory; when I disable accounts I move them there for easy future retrieval.
    It is possible the user may return, or if they have access to certain systems you may need the account again. I would keep the accounts for a specific amount of time (e.g. 6 months, but this depends on your needs) and then delete them after this period of
    time.
    If the employee knows the passwords to any shared accounts (not a good idea though many organizations have these) or has accounts in other systems that do not use Active Directory authentication, immediately change the passwords to these accounts again following
    the employee's departure.
    If the employee had administrative access to their computer (not a good idea, though it is the reality in most cases), you should disable the computer account and remove it from the network. This will prevent the employee from remotely accessing the machine until you are able to rebuild it or inspect it for unapproved changes.
    Ask the user's manager, team members, and subordinates if there are any files that the employee would have stored on their computer. Back these up as necessary.
    Most likely you will reuse the computer for another employee. For best results you should use an image so you can re-image their machine and not have to worry whether they had installed any unwanted software (backdoors, viruses, illegal software, etc).
    Hope this helps.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • Best Practice / Solutions for using 11g DB+x86 or Small Computer to build iaas/paas?

    My customer wants to build their own IaaS/PaaS using Oracle 11g DB plus x86 or other small computers, running Linux, Solaris, or another Unix OS.
    Oracle Exadata is not feasible for them to use currently.
    The customer wants to know whether other customers have implemented their cloud solutions on this kind of platform.
    If yes, they would like to share the experience, presentation slides, best practices, etc.
    Is there an Oracle email DL for asking this kind of question?
    Thanks,
    Boris

    Like Rick, I'm not aware of a specific "cloud implementors forum". Internally, Oracle has lots of material on implementing cloud on any platform at all, although obviously we feel Engineered Systems are the most cost-effective solution for many customers. Are you interested in IaaS, i.e. virtualised hardware, or PaaS, i.e. DBaaS? They should not be confused; neither is required for the other. In fact, using IaaS to implement "DBaaS", as the OpenStack Trove API attempts to do, is probably the most counter-productive way to go about it. Define the business-visible services you will be offering, and then design the most efficient means of supporting them. That way you gain from economies of scale, and you set up appropriate management systems that address issues like patching, security, database virtualisation and so on.

  • Looking For Guidance: Best Practices for Source Control of Database Assets

    Database Version: 11.2.0.3
    OS: RHEL 6.2
    Source Control: subversion
    This is a general question aimed at database professionals; however, it is not specific to any Oracle version, etc. It's a leadership question for other Oracle shops regarding source control.
    The current trunk, in my client's source control, is the implementation of a previous employee who used ER Studio. After walking the batch scripts and subordinate files, it was determined that there would be no formal or elegant way to recreate the current version of the database from our source control; the engineers who contributed to these assets are no longer employed or available for consulting. The batch scripts are stale, if you will.
    To clean this up and to leverage best practices, I need some guidance on whether or not to baseline the current repository and how to move forward with additions of assets: tables, procs, pkgs, etc. I'm really interested in how larger Oracle shops organize their repository: what directories do you use, how are they labeled, are they labeled with respect to version?
    Assumptions:
    1. repository (database assets only) needs to be baselined (?)
    2. I have approval to change this database directory under the trunk to support best practices and get the client steered straight in terms of recovery and
    Knowns:
    1. the current application version in the database is 5.11.0 (that's my client's application version)
    2. this is for one schema/user of a database (other schemas under the database belong to different trunks)
    This is the layout that we currently have; for the privacy of the client I've made it rather generic. I'd love to have a fresh start... how do I go about doing that? Initially, I like using SQL Developer's ability to create SQL scripts from a connected target (see the sketch after the layout below).
    product_name
      |_trunk
         |_database
           |_config
           |_data
           |_database
           |_integration
           |_patch
           |   |_5.2A.2
           |   |_5.2A.4
           |   |_5.3.0
           |   |_5.3.1
           |
           |_scripts
           |   |_config
           |   |_logs
           |
           |_server
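    Something along these lines is what I have in mind for seeding that baseline straight from the connected schema (just a rough sketch; spooling to files is left out, and the object-type filter is only an example):

        -- Pull current DDL for the schema's tables to seed trunk/database as a baseline
        SET LONG 1000000 PAGESIZE 0 LINESIZE 200
        SELECT DBMS_METADATA.GET_DDL('TABLE', table_name)
          FROM user_tables
         ORDER BY table_name;

        -- Same idea for the PL/SQL objects
        SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
          FROM user_objects
         WHERE object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE')
         ORDER BY object_type, object_name;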
    Thank you in advance.

    Hi,
    We are using Data ONTAP 8.2.3P3 on our FAS8020 in 7-mode, and we have 2 aggregates, a SATA and a SAS aggregate. I want to decommission the SATA aggregate as I want to move that tray to another site. If I have a flexvol containing 3 qtree CIFS shares, can I use data motion (vol copy) to move the flexvol to a different aggregate on the same controller without major downtime? I know this article is old and it says here that CIFS is not supported; however, I am reading mixed messages that the version of Data ONTAP we are now on does support CIFS and data motion, although there will be a small downtime while the CIFS shares terminate. Is this correct? Thanks

  • Any Best Practices (i.e. DOs and Don'ts) for source to target MAPPINGs?

    Hi Experts,
    Any Best Practices (i.e. DOs and Don'ts) for source to target MAPPINGs?
    I will appreciate any hints on this.
    Thanks

    Hi,
    I am assuming that you are asking about transformation mapping between source and target.
    1) Use one-to-one mapping wherever possible.
    2) Avoid complex calculations.
    3) If calculations are required, use a routine instead of formulas.
    4) If possible, avoid field routines (use a start or end routine instead).
    5) Do not map unwanted fields (they add unnecessary processing time and database space, and can cause problems while activating DSO data).
    6) Avoid the master data read mapping option; instead, use a routine to fetch the master data.
    7) There is no need to use an InfoSource.
    8) Use standard time conversions for time fields.
    Generally, these are the things we need to consider while mapping.
    Regards,
    Satya

  • Capturing deletes in the source -best practices

    What are the best practices for capturing deletes in the source (10g)? I need to bring the data into the data warehouse. Asynchronous CDC can do the job, but is there anything I should be aware of? Can somebody speak to the best practices for implementing this? Other options?
    Thanks in advance.
    Edited by: Rinne on Sep 23, 2010 11:05 AM

    Rinne wrote:
    >> Deletes don't happen often at all. Just about 10 records in a month. But I do need to track them daily. I have a daily job that goes against the source and gets the data out. Currently, I'm relying on a timestamp, but I need to change this to get the deletes.
    If you can afford it (i.e. you have control over the application that uses the source database), you may want to just mark the records as "DELETED" (e.g. add a flag to the table that is set to indicate that the record is deleted). That way you change a DELETE into an UPDATE.
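    A minimal sketch of that flag-based approach (the table and column names are just placeholders):

        -- One-time change on the source table
        ALTER TABLE orders ADD (deleted_flag CHAR(1) DEFAULT 'N', deleted_date DATE);

        -- The application then "deletes" with an update instead of a DELETE
        UPDATE orders
           SET deleted_flag = 'Y',
               deleted_date = SYSDATE
         WHERE order_id = :id;

        -- The daily warehouse job can keep using its timestamp logic to pick up the deletions
        SELECT order_id
          FROM orders
         WHERE deleted_flag = 'Y'
           AND deleted_date > :last_extract_time;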

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
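    A minimal sketch of that MERGE (T-SQL flavour; the table and column names are placeholders for your daily capture and final tables):

        -- Insert only the daily-capture rows whose two-column key is not already in the final table.
        -- DISTINCT guards against exact duplicates inside the capture table; if rows can share a key
        -- but differ in the other columns, pick one row per key with ROW_NUMBER() instead.
        MERGE INTO final_data AS tgt
        USING (SELECT DISTINCT key_col1, key_col2, other_col
                 FROM daily_capture) AS src
           ON tgt.key_col1 = src.key_col1
          AND tgt.key_col2 = src.key_col2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (key_col1, key_col2, other_col)
            VALUES (src.key_col1, src.key_col2, src.other_col);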
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
