Best Practice Exception Handling.

Hi,
Please consider two scenarios:
Scenario 1:
DECLARE
  l_emp   scott.emp.ename%TYPE;
  l_dname scott.dept.dname%TYPE;
BEGIN
  BEGIN
    SELECT ename INTO l_emp FROM emp WHERE empno = 7934;
  EXCEPTION
    WHEN no_data_found THEN
      dbms_output.put_line('No big deal');
      NULL;
    WHEN too_many_rows THEN
      dbms_output.put_line('It is a big deal');
      RAISE;
    WHEN OTHERS THEN
      RAISE;
  END;
  BEGIN
    SELECT dname INTO l_dname FROM dept WHERE deptno = 1;
  EXCEPTION
    WHEN no_data_found THEN
      dbms_output.put_line('It is a big deal');
      RAISE;
    WHEN too_many_rows THEN
      dbms_output.put_line('It is a big deal');
      RAISE;
    WHEN OTHERS THEN
      RAISE;
  END;
EXCEPTION
  WHEN OTHERS THEN
    RAISE;
END;
Scenario 2:
DECLARE
  l_point_of_error NUMBER;
  l_emp            scott.emp.ename%TYPE;
  l_dname          scott.dept.dname%TYPE;
BEGIN
  l_point_of_error := 1;
  -- now write some implicit cursors:
  SELECT ename INTO l_emp FROM emp WHERE empno = 7934;
  l_point_of_error := 2;
  SELECT dname INTO l_dname FROM dept WHERE deptno = 1;
EXCEPTION
  WHEN no_data_found THEN
    CASE l_point_of_error
      WHEN 1 THEN
        dbms_output.put_line('No big deal');
        NULL;
      WHEN 2 THEN
        dbms_output.put_line('It is a big deal');
        RAISE;
      ELSE
        dbms_output.put_line('I have an idea which block this errored out on...but I want or HAVE TO raise the error');
        RAISE;
    END CASE;
    NULL;
  WHEN too_many_rows THEN
    CASE l_point_of_error
      WHEN 1 THEN
        dbms_output.put_line('It is a big deal');
        RAISE;
      WHEN 2 THEN
        dbms_output.put_line('It is a big deal');
        RAISE;
      ELSE
        dbms_output.put_line('I have an idea which block this errored out on...but I want or HAVE TO raise the error');
        RAISE;
    END CASE;
    NULL;
  WHEN OTHERS THEN
    RAISE;
END;
/
What do you think is the right approach?
The one thing I can think of against Scenario 2 is that the CASE statements in the final exception handler become a nightmare to maintain when the number of blocks grows.
Also, Scenario 2 uses one more variable that has to be assigned (more processing time, maybe?).
I am also told that Scenario 2 is used extensively by the Oracle Applications PL/SQL APIs.
But if you can suggest which one you think is the best practice, I would really appreciate it.
Thank you,
Rahul

My bad 3360:
I didn't mention the error logging thing in the OP; I use it in my code all the time and simply forgot to include it.
Anyway, I use Tom Kyte's who_am_i and who_called_me in my error logging to figure out the line number of the error.
* One failure was caused by the error log table running out of space because no one ever looked at it and didn't realize it had hundreds of thousands of exceptions in it. Yes, you are absolutely right: somebody (the developer(s)) has to take care of the error logging table and look at what errors out and why.
P.S.: I am going to stick with Scenario 1 (with the error logging, of course :) ) per Todd Barry's comments, and I have no idea why Oracle Applications uses Scenario 2. Can anybody tell me why (maybe it's just bad coding on Oracle Apps' part)?
Thank you,
Rahul.

Similar Messages

  • What is the best practice to handle JPA methods in JSF app?

I am building a JSF-JPA web app (no EJB).
I have several methods that have JPA QL inside.
Because of that, I have to put those methods inside JSF beans to inject the EntityManagerFactory (am I right about this?).
And I do want to separate those methods from the regular JSF beans which are used by page authors, and I may need to use them in different JSF managed beans.
My question is: what is the best practice to handle this?
I. Write one or a few separate JSF beans and inject them into the regular beans?
II. Write one or a few separate JSF beans and access them from the regular beans using FacesContext?
III. Something else?
Waiting to hear your opinions.

    You can create named queries on your Entities themselves then just call entityMgr.createNamedQuery("nameOfQuery");
    Normally, we put these named queries in the class of the entity which will be returned. This allows for all information pertaining to a given entity and all ways of accessing that entity (except em.find() and stuff, of course) to be in one place. As long as the entity is defined in your persistence.xml file, any named queries which reside on that entity will be available through the EntityManager.
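For illustration, declaring and calling such a named query might look like this (the entity, query name and field are invented examples, not from the original post):
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

// Hypothetical entity carrying its own named query.
@Entity
@NamedQuery(name = "Employee.findByDept",
            query = "SELECT e FROM Employee e WHERE e.deptName = :dept")
public class Employee {
    @Id
    private Long id;
    private String deptName;
}

// ...and in whatever bean has an EntityManager:
public List<?> findByDept(EntityManager em, String dept) {
    return em.createNamedQuery("Employee.findByDept")
             .setParameter("dept", dept)
             .getResultList();
}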
    As for the EntityManagerFactory, we normally create an application scope bean which holds the factory itself (because this is a heavy-weight object) and then just get all EntityManager instances from that by injecting this bean into whatever needs it. For example, I might have:
//emfBB is the injected app scope bean which holds the entity manager factory.
private EmfBB emfBB;

private void lookupSomeData() {
    // The factory creates a fresh EntityManager; close it when done.
    EntityManager em = this.getEmfBB().getEmf().createEntityManager();
    // ... run queries with em ...
    em.close();
}
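A rough sketch of what such an application-scope holder bean could look like (EmfBB and the persistence unit name are invented for illustration):
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Application-scope managed bean holding the heavyweight factory once per app.
public class EmfBB {
    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("myUnit"); // unit name is an example

    public EntityManagerFactory getEmf() {
        return emf;
    }
}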
I hope this answers your question.
    ~Zack

  • Best Practice for Handling Timeout

    Hi all,
    What is the best practice for handling timeout in a web container?
    By the way, we have been using weblogic 10.3
    thanks

    Are you asking about this specific to web services, or just the JEE container in general for a web application?
    If a web application, Frank Nimphius recently blogged a JEE filter example that you could use too:
    http://thepeninsulasedge.com/frank_nimphius/2007/08/22/adf-faces-detecting-and-handling-user-session-expiry/
Ignore the ADF-tailored options; this should work for any app.
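As a sketch of that filter approach (this is not Frank's actual code; the class name and redirect target are invented), the Servlet API can detect an expired session because the client still sends the old session id:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SessionTimeoutFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        // A session id was sent but no longer matches a live session:
        // the session timed out, so send the user to a friendly page.
        if (request.getRequestedSessionId() != null
                && !request.isRequestedSessionIdValid()) {
            ((HttpServletResponse) res).sendRedirect(
                    request.getContextPath() + "/sessionExpired.jsp");
        } else {
            chain.doFilter(req, res);
        }
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}
Register the filter in web.xml for the URL patterns you want to guard.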
    CM.

  • Database Log File becomes very big, What's the best practice to handle it?

The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice for handling this issue?
Should I shrink the database?
I know a bigger hard disk is needed for the long term.
Thanks in advance.

    Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it only gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get the transaction log file back into normal shape:
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
      The above command will shrink the file to 10 GB (a recommended size for highly transactional systems).
>
Finke Xie wrote:
> Should I shrink the database?
"NEVER SHRINK DATA FILES" - shrink only the log file.
3.) Schedule log backups every 15 minutes.
    Thanks
    Mush

  • Best practice to handle contents greater than 1 TB

    Hello All,
I am using SharePoint 2010 and I need to know the best practice for handling content greater than 1 TB.
Specifics:
1) Content will be a collection of images (JPEG format), and collectively the sizes can go above 1 TB, up to 10 TB or more
2) Images will be uploaded to SharePoint through a web service
So are any of the options below suitable? If not, is there another option?
    - Document Library
    - Document Center
    - Record Center
    - Asset Library
    - Picture Library
    Thanks in advance ...

There are several aspects to this.
    Large lists:
    http://technet.microsoft.com/en-gb/library/cc262813%28v=office.14%29.aspx
    A blog summarising large databases here:
    http://blogs.msdn.com/b/pandrew/archive/2011/07/08/articles-about-scaling-sharepoint-to-large-content-database-capacity.aspx
    Boundaries and limits:
    http://technet.microsoft.com/en-us/library/cc262787%28v=office.14%29.aspx#ContentDB
    If at all possible make your web service clever enough to split content over multiple site collections to allow you to have smaller individual databases.
    It can be done but you need to do a lot of reading on this to do it well. You'll also need a good DBA team to maintain the environment.

  • Best practice on handling a datacontrol based on a changing webservice

Is there any best practice on how to handle changes to a data control when a web service changes? It seems the information on port numbers, server names etc. is placed in a number of files. An optimal solution would be for JDeveloper to regenerate all relevant files based on changes to the WSDL, but this does not seem to be supported in JDeveloper 11g.
    Regards
    Ole Spielberg

    Hi,
I think in this case you would be better off using a WS proxy, wrapping it in a POJO and creating a data control from that. This allows you to set the port and host programmatically. I agree that there should be a better option for doing the same with the WS data control.
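A rough shape of that wrapper, assuming a generated JAX-WS proxy (OrderService, OrderPort, Order and the endpoint path are all invented for illustration):
import java.util.List;
import javax.xml.ws.BindingProvider;

public class OrderServiceClient {
    private final String endpointUrl;

    public OrderServiceClient(String host, int port) {
        // Host and port now come from configuration instead of the WSDL.
        this.endpointUrl = "http://" + host + ":" + port + "/orders";
    }

    public List<Order> findOrders(String customerId) {
        OrderPort proxy = new OrderService().getOrderPort(); // generated JAX-WS proxy (assumed)
        // Override the endpoint address that was baked in at generation time.
        ((BindingProvider) proxy).getRequestContext()
                .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointUrl);
        return proxy.findOrders(customerId);
    }
}
The data control is then generated from the POJO, so the moving parts stay in one class.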
    Frank

  • Best practice for handling errors in EOIO processing on AEX?

    Hi,
I'm looking for resources and information describing the best practice for handling processing errors of asynchronous integrations with Exactly-Once-In-Order QoS on an AEX (7.31) Java-only installation. Most information I've found so far describes the monitoring and restart jobs on AS ABAP.
Situation to solve:
Multiple different SOAP messages are integrated using one queue with an RFC receiver. On error, the message status is set to Holding and all following messages are Waiting. Currently I need to manually watch over the processing, delete the message with status Holding, and restart the waiting ones.
It seems I can set up component-based message alerting to trigger an email or some other alert. I still need to decide how to handle the error and resolve it (i.e. delete the erroneous message, correct the data at the sender and trigger another update), and I still need to manually find the oldest entry with status Waiting and restart it. I've found a restart job under background jobs in the configuration and monitoring home, but it can only be scheduled at intervals of one or more hours.
    Is there something better?
    Thank you.
    Best regards,
    Nikolaus

    Hi Nikolaus -
AFAIK, for EOIO you have to cancel the failed message and then process the next message in the sequence manually.
The restart job only works on messages in error state, not in Holding state, so you have to manually push the message. There is no other alternative.
But it should not be that difficult to identify the messages in a sequence.
How to deal with stuck EOIO messages in the XI ... | SCN
Though it is for an older version, it should be the same; you should be able to select additional columns, such as sequence ID, from the settings.

  • Best practices for handling elements and symbols (including preloading)

I am trying to learn Edge Animate; I have not seen enough animations to know how this is typically handled, and I searched the forum without finding an answer either.
If you have many different elements and symbols for a project, what is the best practice for having them appear, disappear, etc. on the timeline? I ask this question not only from a performance-based perspective, but also keeping in mind the idea of preloading. This is a two-part question:
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.

    Hi, escargo-
    Good questions!
Part 1 (using elements and symbols later in the timeline):
I would recommend that you set your visibility to "off" instead of simply changing the opacity. The reason is that when visibility is set to off, the object's hit points also disappear. If you have any type of interactivity, an object that is still present but at 0 opacity will interfere with anything underneath it in the display order.
Part 2 (impact on page loading):
    No, none of this has any impact on page load.  As you already noticed, all of the assets of your project will load before it displays.  If you want only part of your composition to load, you may want to do what we call a multi-composition project.  There's a sample of that in the Edge Animate API in the Advanced section, and plenty of posts in the forums (and one in the team's blog) explaining how to do that.
    http://www.adobe.com/devnet-docs/edgeanimate/api/current/index.html
    https://blogs.adobe.com/edge/
    Hope that helps!
    -Elaine

  • Expert opinion needed: Best practices to handle huge rowsets on UI

    Hi All,
I need to know the best practices from Oracle for handling huge rowsets in the UI.
My ADF 11g app is a custom monitoring and reporting tool for a highly active integration solution.
The user can enter a selection criterion, say, show transactions between yesterday and tomorrow, and our highly active transactional system may return up to 5000 records.
I am showing these records in a tabular format, and since pagination is not there we are depending on auto-scrolling, which is kind of slow.
So please advise me what options come to mind for showing such rowsets and informing users about them.
I am aware that ideally the UI should not have more than a couple hundred records, but our use case does not adhere to that.
    Thanks

> since pagination is not there
I'm not sure what you mean by this; the ADF Faces table does pagination when you scroll. So if your business service has 5000 records but the rows property of your table is set to 25, you'll just fetch 25 records to the client.
When you scroll down you'll fetch another 25.
This type of thing is automated for ADF BC data controls, and you can control the range size.
    We also generate the code needed for EJB Facades to do this with JPAs.
If you have your own Java class as a data source, you'll need to implement this pagination on the business service side; see example 37 here: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html
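In a plain Java/JPA business service, that paging hook might look roughly like this (the entity and names are invented; JPA's setFirstResult/setMaxResults do the actual range fetch):
import java.util.List;
import javax.persistence.EntityManager;

public class TransactionService {
    private EntityManager em; // injected or created elsewhere

    // Fetch one "page" of rows; the UI requests the next range as the user scrolls.
    public List<?> getTransactions(int firstRow, int rowsPerPage) {
        return em.createQuery("SELECT t FROM Txn t ORDER BY t.createdOn DESC")
                 .setFirstResult(firstRow)
                 .setMaxResults(rowsPerPage)
                 .getResultList();
    }
}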

  • Best Practices for Handling queries for searching XML content

Experts: we have a requirement to count, out of 4M rows, those where a specific XML tag's value begins with a given prefix. I have a text index created, but the query is extremely slow when I use the CONTAINS operator.
    select count(1) from employee
    where
    contains ( doc, 'scott% INPATH ( /root/element1/element2/element3/element4/element5)') >0
What is Oracle's best-practice recommendation for querying / indexing such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".

  • Best practice to handle the class definitions among storage enabled nodes

We have a common set of cache servers that are shared among various applications. A common problem we face upon deployment is missing class definitions newly introduced by one of the application nodes. Are there any practical approaches / best practices to address this problem?

Is it the cache servers themselves or your application servers that are having problems with loading classes?
In order to dynamically add classes (in our case, scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am however not so sure how/if this would work for the cache servers themselves, if that is your problem!?
Anyhow, a simplistic cache class loader may look something like this:
import com.tangosol.net.CacheFactory;

/**
 * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
 * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
 * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
 * cache. For example the first three parts of the binary name (com.scania.xxxx in the example) could be the
 * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
 * a specific application are stored in the same partition and this way can be updated atomically by a processor or
 * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
 * application will only involve one cache node!
 */
public class CacheClassLoader extends ClassLoader {
    public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";

    private final String classCacheName;

    public CacheClassLoader() {
        this(DEFAULT_CLASS_CACHE_NAME);
    }

    public CacheClassLoader(String classCacheName) {
        this.classCacheName = classCacheName;
    }

    public CacheClassLoader(ClassLoader parent, String classCacheName) {
        super(parent);
        this.classCacheName = classCacheName;
    }

    @Override
    public Class<?> loadClass(String className) throws ClassNotFoundException {
        byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
        if (bytes == null) {
            // Not in the cache - fall back to the parent class loader.
            return super.loadClass(className);
        }
        return defineClass(className, bytes, 0, bytes.length);
    }
}
And a simple "loader" that puts the classes in a JAR file into the cache may look like this:
import com.tangosol.net.CacheFactory;

import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * This class loads classes from a JAR file into a code cache.
 */
public class JarToCacheLoader {
    private final String classCacheName;

    public JarToCacheLoader(String classCacheName) {
        this.classCacheName = classCacheName;
    }

    public JarToCacheLoader() {
        this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
    }

    public void loadClassFiles(String jarFileName) throws IOException {
        JarFile jarFile = new JarFile(jarFileName);
        System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
        for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
            final JarEntry entry = entries.nextElement();
            if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                final InputStream inputStream = jarFile.getInputStream(entry);
                final long size = entry.getSize();
                int totalRead = 0;
                int read = 0;
                byte[] bytes = new byte[(int) size];
                do {
                    read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                    totalRead += read;
                } while (read > 0);
                if (totalRead != size) {
                    System.out.println(entry.getName() + " failed to load completely, " + size + ", " + read);
                } else {
                    // Key the bytes with the binary name of the class (com.zzz.Xxx).
                    String binaryName = entry.getName()
                            .substring(0, entry.getName().length() - ".class".length())
                            .replace('/', '.');
                    System.out.println(binaryName);
                    CacheFactory.getCache(classCacheName).put(binaryName, bytes);
                }
                inputStream.close();
            }
        }
    }

    public static void main(String[] args) {
        JarToCacheLoader loader = new JarToCacheLoader();
        for (String jarFileName : args) {
            try {
                loader.loadClassFiles(jarFileName);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Standard disclaimer - this is prototype code, use at your own risk :-)
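For what it's worth, usage of the two classes above might look like this (cache, JAR and class names are invented):
// Publish all classes from a JAR, then load one by its binary name.
// (Exception handling omitted for brevity.)
new JarToCacheLoader("ClassCache").loadClassFiles("scripts.jar");
ClassLoader loader = new CacheClassLoader("ClassCache");
Class<?> clazz = loader.loadClass("com.example.MyScript");
Object instance = clazz.newInstance();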
    /Magnus

  • Best practice for handling data for a large number of indicators

I'm looking for suggestions or recommendations for how best to handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously: binding network shared variables to each indicator, then using several sub-VIs to process each particular piece of data and write to the appropriate variables.
    I was curious what others have done in similar circumstances.
    Bill
    “A child of five could understand this. Send someone to fetch a child of five.”
    ― Groucho Marx

    I can certainly feel your pain.
Note that's really what is going on in that png - you can see the Action Engine responsible for updating the display to the far right.
In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing view (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
(The GUI did scale poorly though! That is a lot of wires! I was grateful to Jack for the idea to make align and distribute work on wires.)
    Jeff

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
We have run into problems while processing large messages in JCAPS 5.1.3. Or, they are not really that large - only 10-20 MB.
    Our setup looks like this:
We retrieve flat-file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously the logicalhost freezes and crashes in one of the conversion steps without any error message reported in the logicalhost log. We can't relate the crashes to a specific jcd, and it seems that the memory consumption increases A LOT for the logicalhost process while handling the messages. After a restart of the server the messages that are in the queues are usually converted OK. Sometimes, however, we have seen that some messages seem to disappear. Scary stuff!
I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. These solutions are, however, not an option in our setup.
We have manipulated the JVM memory settings without any improvement, and we have discussed the issue with Sun's support, but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
* Any ideas why the crashes occur without any error messages in the logs?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

* Any ideas how to handle large messages most efficiently?
Strictly speaking, if you want to send the entire file content in the JMS message, then I don't have an answer for this question.
Generally we use the following process: after reading the file from the FTP location, we just archive it in a local directory and send a JMS message to the queue which contains only the file name and file location. In most places we never send file content in a JMS message.
* Any ideas why the crashes occur without any error messages in the logs?
Whenever the JMS IQ Manager's memory use grows too large, logicalhosts stop processing. I will not say it is down - they stop processing, or processing might take a lot of time.
* Any ideas why messages sometimes disappear?
Unless persistence is enabled, I believe there is a high chance of losing a message when the logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with lots of messages.
* Any other suggestions?
If the file size is large, it is better to stream the file to a local directory from the FTP location and send only the file location in the JMS message.
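That reference-passing pattern in plain JMS might look roughly like this (queue, property and path names are invented):
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class FileReferenceSender {
    // Send a pointer to the archived file instead of the file content itself.
    public void sendReference(Session session, Queue queue,
                              String archiveDir, String fileName) throws JMSException {
        MessageProducer producer = session.createProducer(queue);
        try {
            TextMessage msg = session.createTextMessage(archiveDir + "/" + fileName);
            msg.setStringProperty("fileName", fileName); // handy for selectors and monitoring
            producer.send(msg);
        } finally {
            producer.close();
        }
    }
}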
Hope that helps.

  • Best practice for handling original files once movie is complete?

    So I'm taking movies from my Canon S5IS (and other cameras in the past) and making projects in iMovie, sharing in the Media Browser, importing into iDVD, and burning to DVD.
    I can't help but wonder if I might need the original footage one day. Do most people keep their original files for future media (replacement for DVD) which I realize would require recreation of the movies that were created in 2008 with iMovie (with title screens, transitions, etc.)? Or do most people delete the originals with the feeling that DVD will be a suitable way to watch home movies for the foreseeable future?
I just can't figure out what to do. I don't want to burn dozens of DVDs of raw footage, only to have to keep up with them in a safe deposit box and deal with the anxiety of having to recreate movies one day (which is daunting enough now... unbelievably daunting to think about the exponential growth as time progresses).
Hope this makes sense. Reading that DVD movies are not suitable for editing due to the codec has made me realize I need to think this through before destroying all these originals as I finish with them.
    Thanks in advance!
    -John

    If any of your cams are miniDV, then you simply need to keep the original tapes and tape is still the safest long term archiving solution, when stored properly.
    Other cams that use flash memory, hard drives, even DVD cams, do not offer the security that tape does. If you are wanting to save those types of files, the best option would be to store them on one or two external hard drives, bearing in mind those drives could fail anytime. Back up to your back up in that case.
    Another nice thing about miniDV cams is that you can export your finished movie back to a tape also, using iMovie HD6, and have safe copies of original and finished material.

  • Best practice for handling local external assets in Air?

When setting up a project (AS3 mobile, not the Flex framework, ideally), where and how might one place their runtime-loaded application assets?
    Especially, does anyone have example code for referencing local files such that it works across android, iOS and the local debugger/local playback?
    Thanks,
    Scott

Just keep your assets together in a folder and load them with a relative path, because you're going to attach the files and folders while packaging and your app refers to them that way.
