Best practice for if/else when one outcome results in exit [Bash]

I have a bash script with a lot of if/else constructs in the form of
if <condition>
then
    <do stuff>
else
    <do other stuff>
    exit
fi
This could also be structured as
if ! <condition>
then
    <do other stuff>
    exit
fi
<do stuff>
The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
But I'm just making this all up from my own preferences and intuition.
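For instance, when <do other stuff> really is just cleanup before exiting, one way to keep it out of every branch is an EXIT trap. A minimal sketch; the temp file and the readability check are illustrative, not from the question:
# Register cleanup once; it runs on any exit, so guards can just "exit".
tmpfile=$(mktemp)
cleanup() {
    rm -f "$tmpfile"    # whatever <do other stuff> would have done
}
trap cleanup EXIT

[[ -r "$input" ]] || exit 1    # cleanup still runs, via the trap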
When nested, this becomes more obvious and a bigger issue. Consider two scripts:
if [[ test1 ]]
then
    if [[ test2 ]]
    then
        echo "All tests passed, continuing..."
    else
        echo "failed test 2"
        exit
    fi
else
    echo "failed test 1"
    exit
fi
if [[ ! test1 ]]
then
    echo "failed test 1"
    exit
fi
if [[ ! test2 ]]
then
    echo "failed test 2"
    exit
fi
echo "passed all tests, continuing..."
This just gets far worse with deeper levels of nesting. The second seems much cleaner. In reality, though, I'd go even further:
[[ ! test1 ]] && echo "failed test 1" && exit
[[ ! test2 ]] && echo "failed test 2" && exit
echo "passed all tests, continuing..."
edit: added test1/test2 examples.
Last edited by Trilby (2012-06-19 02:27:48)

Similar Messages

  • Best Practice for Multiple iTunes and One Account?

    My wife and I share our account for our iPhone Applications and Songs in iTunes.
    Can someone point me to a topic that covers a best practice for multiple iTunes syncing?
    I travel a lot and keep my iPhone synced to my MacBook Pro while she uses the workstation at home to sync with. We keep the addresses, contacts, and calendars synced through MobileMe; however, I'm trying to find the best way of pushing applications and songs I've bought over to the other computer so that, while I'm traveling, they're still around for her to sync with.
    Thoughts?

    Hi Steve,
    You might have been trying to find a solution for a long time. If I understood your question correctly, let me clarify a few points.
    You are trying to access the BEx query, which is designed with exits in the background based on the logic, and trying to call all the dimensions and key figures in a single connection. Then you are trying to map that data in the charts.
    Steve, try to make more connections based on the logic and split them: use the same query, but split it by sales per customer group, sales per day, and sales per week by making three different connections. You can merge the prompts from all connections.
    Hope this helps!
    Sorry if I misunderstood your question.
    --SumanT

  • Best practice for repositories during configuration - one or several DBs?

    Establishing my 11.1.2 dev box, we are in 9.3.1 in Production. Reading through documentation it states that one database is the repository for the Shared Services, Business Rules, Essbase, etc.
    Since I came to this new job with 9.3.1 already installed, I'm not sure whether this wording is the standard from version 9.3 or something new for 11.1.x.
    So... what is the best practice? Is it better to lump all foundation-type activity into one DB (I realize Planning apps have their own DBs), or is it better to have a DB for BI+, a DB for Shared Services, etc.?
    JTS

    Here is what Oracle have to say
    "For ease of deployment and simplicity, for a new installation, you can use one database for all products, which is the default when you configure all products at the same time. To use a different database for each product, perform the “Configure Database” task separately for each product. In some cases you might want to configure separate databases for products. Consider performance, roll-back procedures for a single application or product, and disaster recovery plans."
    I would say that in a development environment there is no harm in using one DB/schema for all products; remember that some products, e.g. Planning applications, require separate databases/schemas.
    In a production environment I tend to promote keeping them separate, as it helps with troubleshooting and recovery.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best practices for saving files when emailing to the media?

    I'm pretty much an Ai imposter. I have Illustrator cs4 (14.0.0) and the way I use it is to cobble my designs together from iStock vector files and a lot of trial and error. Occasionally my husband throws me a lifeline because he is skilled with PhotoShop. As a small business owner, I am dying to find the time to take a course but right now I am just trying to float around the forums and learn what I can. Basically I flail about near the computer and the design sloooowly and somewhat mysteriously comes together. It's getting faster and is kind of fun, unless I get stuck.
    This week I had a problem with an ad I designed for a very small community orchestra's printed programs. I sent them a PDF first, and when they viewed it on their screens in InDesign they saw the whole image, but when it printed it omitted an element (a sunburst in the background). They thought it had something to do with that element being in color rather than greyscale (though other elements that survived were the exact same color, so I was skeptical). I sent a greyscale file, no luck. I sent them the AI file, but that apparently "crashed" their ID, and now they believe I've sent a corrupted file. They aren't very Adobe-savvy, either.
    I've designed and emailed no fewer than 9 other ads to other print & online media organizations this year and never had a problem. The file looks fine to me in all versions I open/upload/email to myself.
    You can see it here if you like: http://www.scribd.com/doc/27776704/RCMA-for-CSO-Greyscale
    So here are my specific questions:
    1. How SHOULD I be saving this stuff? Is pdf the mark of a rookie?
    2. What settings should I be looking at when/before saving? I read about overprint, for example, and did try that with one of the versions I sent them to no avail. I don't really know what that does, so I was just trying a hail mary there anyway.
    Thanks for your time!

    PDF is the modern way of sending files, but what you might want to do in this case is select the art in question and go to Object > Flatten Transparency,
    then save it as a PDF or an AI file. When sending the file to them, zip it if they have Windows-based computers, or use StuffIt if they have Macs; actually, zip is good for both.
    It is safer to send it as an archive than as a bare AI file.
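    For what it's worth, creating the archive from a terminal is a one-liner; the file names here are just examples:
    # Package the Illustrator file into a zip archive before emailing it.
    zip RCMA-ad.zip RCMA-ad.ai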

  • Best practice for JavaFX lifecycle when not used as a normal application

    Suppose there was a Java app that runs in a terminal with its own command line interface. Certain commands typed in would show a Window that contains a Stage/Scene. In order to accomplish this, I would call Application.launch( App.class, null );. The problem is this Window may be closed by the user, and then a future command may bring up another Window. As we know, Application.launch may only be called once. Also, the launch method blocks the calling thread.
    Naively I would create an adapter that spawns a thread on the initial call to call Application.launch for the first window and bypass this for future windows.
    Though if someone has an understanding of the best way to go about spawning arbitrary Scenes from an app where the app lifecycle is at a larger scale than javafx.application.Application that would be most appreciated.

    In this situation, there isn't really a "primary" stage but an initial one. I can hook up a thread to call Application.launch and instrument it so the Stage it creates (and calls start on) is handled and additional stages will bypass Application.launch and create new Stages.
    But this feels like a kludge to me.
    I wonder if there is a different way to get the Toolkit operating in the background. This would be similar, I suppose, to how calling setScene on JFXPanel works.
    Ideas anyone?

  • Using Liquid, what is the best practice for handling pagination when you have more than 500 items?

    Right now I can only get the first 500 items of my webapp, and don't know how to show the rest of the items.
    IN MY PAGE:
    {module_webapps id="16734" filter="all" template="/Layouts/WebApps/Applications/dashboard-list-a.tpl" render="collection"}
    IN MY TEMPLATE LAYOUT:
    {% for item in items %}
    <tr>
    <td class="name"><a href="{{item.url}}">{{item.name}}</a></td>
    <td class="status">Application {{item.['application status']}}</td>
    </tr>
    {%endfor%}

    <p><a href="{{webApp.pagination.previousPageUrl}}">Previous Page</a></p>
    <p>Current Page: {{webApp.pagination.currentPage}}</p>
    <p><a href="{{webApp.pagination.nextPageUrl}}">Next Page</a></p>

  • Best practice for migration to new hardware

    Hi,
    We are commissioning new hardware for our Web Server. Our current webserver is version 6.1SP4, and for the new server we've decided to stay with 6.1 but install SP7.
    Is there a best practice for migrating content from one physical server to another?
    What configuration files should I watch out for?
    Hopefully the jump from SP4 to SP7 won't cause too many problems.
    Thanks,
    John

    Unfortunately, there is no quick solution for migrating from one server to another. You will need to carefully reconstruct:
    - acl rules
    - server hostname configurations
    - any certificates that have been created on the old machine
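    If it helps, here is a rough sketch of double-checking the reconstructed configuration before cut-over. It assumes the standard Sun ONE Web Server layout of <server-root>/https-<hostname>/config; the paths and file list are illustrative, not from this thread.
    # Compare the key config files between the old and new instances.
    # OLD/NEW paths and the hostname are made-up examples.
    OLD=/opt/old-server/https-www.example.com/config
    NEW=/opt/new-server/https-www.example.com/config
    for f in server.xml magnus.conf obj.conf mime.types; do
        diff -u "$OLD/$f" "$NEW/$f" || echo "review $f before going live"
    done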

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change, and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure how to revert the kernel or the system when an installation goes bad like this.
    Thanks for your attention.

    There is often a common misconception: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade; then, if you are satisfied with the result, you delete the snapshot.
    The advantage of a snapshot is that it can be taken of a live filesystem or volume while changes are written to the snapshot volume. Hence it's called copy-on-write (COW), or copy-on-change if you want. This is necessary for system integrity: it gives a consistent status of all data at a certain point in time while still allowing changes to happen, for example to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot when it is no longer required, in order to prevent duplication of data and restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the Grub boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
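    As a concrete illustration of the upgrade workflow described above, here is a minimal sketch using stock LVM commands; the volume group and LV names are examples, not from this thread.
    # Take a snapshot of the root LV just before the risky change.
    # 5G is an arbitrary amount of copy-on-write space for changed blocks.
    lvcreate --snapshot --name root_pre_upgrade --size 5G /dev/vg0/root
    # ...install the NVidia driver, reboot, test...
    # Satisfied: drop the snapshot to reclaim space and restore performance.
    lvremove /dev/vg0/root_pre_upgrade
    # Broken instead: merge the snapshot to roll the volume back to its
    # pre-upgrade state (the merge completes when the origin is next
    # activated, e.g. from a rescue boot).
    lvconvert --merge /dev/vg0/root_pre_upgrade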
    I think the following is an interesting and fun-to-read introduction explaining basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best Practice for setting bind variable when application loads

    I am using JDeveloper 11.1.2.3.
    When my application loads, the first unbounded page has a table populated by a named query.
    I would like to set the parameter used by the named query when the page loads, to populate the initial data that is displayed.
    What is the best practice for a solution to this issue?

    user6003393 wrote:
    > I am using JDeveloper 11.1.2.3.
    > When my application loads, the first unbounded page has a table populated by a named query.
    > I would like to set the parameter used by the named query when the page loads, to populate the initial data that is displayed.
    > What is the best practice for a solution to this issue?
    Hi,
    You can set the bind variable on the VO by overriding the prepareSession() method in the Application Module; see http://docs.oracle.com/cd/E37975_01/web.111240/e16182/bcservices.htm#sthref357
    Setting bind variables at runtime: http://docs.oracle.com/cd/E37975_01/web.111240/e16182/bcquerying.htm#CHDECJHD
    Zeeshan

  • What is the best practice for localization? One .rpt for all/each language?

    Hi All,
    I have a question:
    What is the best practice for localization? One .rpt for all languages or one for each language?
    Thanks for your response,
    jz

    Well, speaking of best practices, see the [Rules of Engagement|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/rulesofEngagement]
    Step 2 Asking Your Question; Provide Enough Information
    Next, make sure you search these forums before posting. Your question may already be answered, thus giving you quicker resolution. For example, these threads come up just searching for "localization":
    Multiple language support
    Crystal Reports localization issue
    English resource files
    Next, (assuming you are working with CR 2008), see the developer help files:
    http://help.sap.com/businessobject/product_guides/boexir31/en/crsdk_net_dg_12_en.chm
    http://help.sap.com/businessobject/product_guides/boexir31/en/crsdk_net_apiRef_12_en.chm
    https://www.sdn.sap.com/irj/boc/sdklibrary
    In the Crystal Reports 2008 .NET SDK developer Help file, search for "Localization".
    Ludek

  • What is the best practice for localization? One .rpt for all language

    Hi all,
    I have a question: what is the best practice for localization? One .rpt for all languages or one .rpt for each language?
    Thanks for any response,
    Jz

    What would be best would depend on workflow.
    Sincerely,
    Ted Ueda

  • Best Practice for Servlets

    I guess I'm asking for most people's input on what I'm planning to do here ....
    Here's what I want to do, and know a bit about.
    o I want to make a servlet that serves only XML.
    o After that, I want to transform the XML into web pages, RSS feeds etc.. using XSLT.
    Here's what I'm not so sure about...
    o How should I implement the interface to the web-based aspects? Should the servlet be coded to display HTML pages on "GET" requests? Or should I use a pile of HTML files to make forms?
    o What do I use to perform the XSLT transformations? Where should the set of solutions be placed relative to my servlet? Would a user then access this solution rather than the servlet itself?
    o How do I code the servlet on one machine, and then test it on another without breaking the libraries? How do I set up any libraries I might have to use (like for XSLT transformations) on the server?
    Any other advice here? I'm sure this is done often, but I can't find a resource that explains the best practices for it all.
    I know this sounds like a lot of stupid questions. I've had lots of programs working with Java before, but I'm at a loss as to how I'm supposed to package libraries I use in my programs - more so with a servlet. To make matters worse, I plan on using MySQL as the database.
    If there's some wizard on the forums here who's willing to say more than just "RTFM" (of which there is none to answer my questions together as one), I'd be very very happy ":^)

    > Let me re-pose my question so as to be specific enough to not be picked apart in my answer.
    > I want to FIRST AND FOREMOST, create a servlet that serves up XML based on parameters given to it (how? who cares.).
    What does "serves up XML" mean? Let's be precise. Do you intend the servlet to send the XML back to the client? Or is the XML an intermediate step in your processing? (Yes, it matters.)
    > Then, I want to create interfaces (HTML, RSS, boogledeedoo) to this XML data by having either JSP, another servlet or insert something else here, transform the XML into whatever the desired format is.
    "interface" is a loaded term in Java. What do you mean by it?
    > My assumption is that I'll make the servlet that is capable of outputting my desired XML data and then create another servlet that will poke it for data as needed to transform the XML into HTML. This servlet would also likely serve as the web site itself and would manage user logins etc... (persistence yadda yadda)
    You're not thinking about this properly. "yadda yadda" == muddled thinking.
    > My other assumption is that I'll make another servlet that will poke the XML servlet and transform that into RSS or anything else I can dream up.
    How does "poke the XML servlet" fit into the request/response protocol that is HTTP? Please explain.
    > -=-!REASONING!-=-
    > Previously, when I was working with PHP, I liked to make scripts that would display interfaces and post to themselves.
    OK, now I see. "interface" == GUI in a browser to you. Very good.
    You can create a JSP that is an interface. You can have that JSP submit the HTTP POST or GET request to itself. No problem there, as long as "itself" knows what to do with the request.
    > It was a nice way of creating a complete little package. Everything for one function was encapsulated nicely under one roof. No excessive HTML files all over the place to nurture.
    A simple problem, a simple solution. You can do that with a JSP.
    > Look. Part of my inability to describe this well is because I DO feel like I'm going in a lot of directions at once.
    Or you don't understand the technology very well.
    > But I have to be, in order to pull together some sort of plan for myself. I understand many concepts and have just finished studying object oriented design etc...
    "Just finished"? How long did it take?
    > I know things about how Tomcat does connection pooling for SQL connections.
    Great. Not much to understand there. It's harder to figure out how to do n-tier apps with more than one page well.
    > I do know how to use Google, probably a lot better than most. But rest assured, I've yet to find a little guide as complete as any of the "LAMP" books out there. Which, by the way, I have never purchased.
    That's because Java Enterprise Edition isn't intended for little problems. LAMP is. Maybe the limitation is that you are used to "little" problems and not bigger ones.
    If JEE seems scattered and complex, it's because it is. It encompasses more than LAMP.
    > I'm confident in good guidance, and not a heartfelt smackdown. I'm still waiting for some clear suggestions.
    I gave you one, you just didn't know it: go read about Spring.
    http://www.springframework.org
    It'll help you structure complex apps from the user interface to the database in the back.
    You're welcome.
    %

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, and the production versions of the apps will overlay our development copies. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else) that leaves the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loath.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 3 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The issue is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or Utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be...maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which version of the backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository which you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc.
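    For the automation piece, a hedged sketch of a nightly export job using SQLcl's "apex export" command; the tool choice, credentials, paths, and application ID are assumptions, not something from this thread.
    # Export application 100 split into one file per component, then commit.
    # All names below are placeholders.
    cd /srv/apex-vc/dev || exit 1
    echo "apex export -applicationid 100 -split" | sql -S vc_user/"$VC_PASS"@devdb
    git add -A
    git commit -m "nightly APEX export"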
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • Best Practices for Data Access

    Good morning!
    I was wondering if someone might give me some advice on best practices for retrieving data from a SQL server in the cloud via a desktop application.
    I'm curious whether, if I embed the server address (IP, or domain, or whatever) into my desktop application and allow the users to provide their own usernames and passwords when using the application, there is anything "wrong" with that - wherein my
    application collects the username and password from the user, connects to the server with that username and password, retrieves the data, and uses it in-app.
    I'm petrified of security issues and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
    Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what and what would be a reasonable alternative?
    I really appreciate any help and feedback!

    There are two options, neither of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create an SQL login that is a member of sysadmin - or that has rights to impersonate an account that is a member of sysadmin. And for that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden - the IP addresses
    for the login attempts were in China.
    Just so you know what you can expect.
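    As a hedged illustration of the rename idea (the server, admin login, and new name below are made up, not from this post):
    # Rename the well-known sa login so that scripted attacks against "sa"
    # hit a non-existent name.
    sqlcmd -S myserver.example.com -U admin_login -P "$SQL_PASS" \
           -Q "ALTER LOGIN sa WITH NAME = [svc_admin];"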
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as a business process is itself secured through OWSM and usernames and passwords, but what is the best practice for establishing security for each lower-level service?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the App server only from Gateway.
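    For example, a minimal sketch of such a network rule with iptables; the gateway IP and service port are made-up examples.
    # Allow the web-service port only from the gateway host; drop all else.
    iptables -A INPUT -p tcp --dport 7777 -s 10.0.0.5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 7777 -j DROP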
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram
