Order Process Best Practice Suggestions?

Hey CF World,
I have to revamp an online order process. The process is broken into 4 steps.
The app as it exists today was built by a different developer, and I have already wasted about 5 hours trying to figure out exactly what that person is doing in the code, just so I can make some basic tweaks to the process.
Could anyone offer what might be considered today's best practice for a step by step order process?
The idea is that the user completes step 1; on clicking Next, the form data is validated and they are taken to step 2, and so on, until the end, where upon final submission the order is written to the database and the next process is triggered internally.
Should I have one page that, on submit of step 1, posts back to itself, processes the data and then loads a separate div of content for step 2, or...?
Any suggestions would be great. Thank you so much in advance for your help; I sincerely appreciate it.
Ciao'
D.

Hello,
Thank you so much for that. Let me qualify a few things as I probably should have in the first place. (my apologies)
ColdFusion 8
SQL Server 2005
There is no payment or credit card information being provided.
The user comes online and goes through a basic order process for some work to be done. As mentioned, it is a multi-step process for gathering their information.
Once the entire order is in and all the fields have been validated along the way to ensure they were populated where required, the order is written to the pending orders table and an email is sent to the branch closest to the customer, notifying them of the new order with a link to the details. The branch then calls the customer directly to confirm the details of the order before activating it.
So, the code I inherited is next to impossible to follow; for the life of me I cannot figure out what the former developer has done. I need to make some changes to the process, and if I can't even follow the flow well enough to figure out where to make them, that could pose a problem.
I have not coded much in ColdFusion for the past two years, but did so quite extensively before that. I totally agree on the CFTransaction suggestion. I guess what I was looking for is: are there any coding best practices I should be aware of, especially considering what I want to accomplish? Previously we used the Fusebox concept and had most of our code in custom tags, in a very reusable and easy-to-follow structure and flow.
Any thoughts/suggestions would be great! Thank you very much!
D.
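
For what it's worth, here is a minimal sketch (CF8-era tag syntax) of the kind of final-step handling described above. It assumes the per-step answers have been collected in the session scope; the table, column, session, and datasource names are made up, so adjust them to your schema. The insert into the pending orders table is wrapped in cftransaction, and the branch notification email only goes out once the insert has succeeded:

    <!--- Sketch only: table, column, session and datasource names are hypothetical --->
    <!--- Final step: all fields already validated, write the pending order --->
    <cftry>
        <cftransaction>
            <cfquery datasource="#application.dsn#" result="qInsert">
                INSERT INTO pending_orders (customer_name, customer_email, branch_id)
                VALUES (
                    <cfqueryparam value="#session.order.customerName#" cfsqltype="cf_sql_varchar">,
                    <cfqueryparam value="#session.order.customerEmail#" cfsqltype="cf_sql_varchar">,
                    <cfqueryparam value="#session.order.branchId#" cfsqltype="cf_sql_integer">
                )
            </cfquery>
        </cftransaction>

        <!--- Notify the closest branch only after the insert has gone through;
              on SQL Server, qInsert.IDENTITYCOL holds the new identity value --->
        <cfmail to="#session.order.branchEmail#"
                from="orders@example.com"
                subject="New pending order">
            A new order is waiting for review:
            http://www.example.com/orders/detail.cfm?orderID=#qInsert.IDENTITYCOL#
        </cfmail>

        <cfcatch type="any">
            <!--- The open transaction rolls back when the error is thrown;
                  log it and show a friendly message --->
            <cflog file="orders" text="Order insert failed: #cfcatch.message#">
            <cflocation url="order_error.cfm" addtoken="no">
        </cfcatch>
    </cftry>

The same pattern works whether the steps are separate .cfm pages or one page that reloads itself and swaps the step content in and out; the important part is that nothing is written to the database until the last step, and that the write and the notification happen in one place.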

Similar Messages

  • Best Practice Suggestions?

    I'm not going to attempt to answer the user interface side, that's not my area of expertise.
    In terms of validation, ideally this should occur at three levels:
    1) Client-side - immediate response. Traditionally this is JavaScript. For instance, if the user tries to enter a letter into a numeric field, they get feedback as soon as they press the key.
    1a) Client-side - on submit. Any extra validation (blank mandatory fields, comparison of fields) that doesn't require a trip to the server. Also traditionally JavaScript.
    2) Application level. Assume that the user had Javascript disabled, and none of your previous validation had happened. Also, there are tools such as Firebug that let them edit your HTML before running it: adding extra items to a SELECT, for instance. Redo all previous validation!
    This is also where you check things against your database - and parameterise any database interface (there's a sketch of this below). SQL injection is a Bad Thing. Do as much as you can via stored procedures called by CFCs, and if your code only needs read access, use a datasource that only has read access.
    3) Database level. Assume they've somehow got in via a route other than your application: maybe a malicious or careless employee using command-line SQL. Enforce all business rules and all data integrity constraints at database level: constraints, triggers, whatever your database provides.
    Sounds horribly paranoid, doesn't it? But that trick of editing the SELECT is done by 13-year-olds hacking games, so if you're dealing with real money and adults, it's the sort of thing you have to allow for.
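
    To put the application-level point into ColdFusion terms, a server-side re-check of one step might look like the sketch below. It is only an illustration: the form field, table, and datasource names are assumptions, not anything from the original app. Every rule is re-validated on the server no matter what the client-side JavaScript already did, the submitted SELECT value is checked against the database, and the lookup goes through cfqueryparam on a read-only datasource:

        <!--- Sketch only: field, table and datasource names are hypothetical --->
        <!--- Re-validate step 1 on the server, even if client-side JavaScript already did --->
        <cfparam name="form.customerName" default="">
        <cfparam name="form.customerEmail" default="">
        <cfparam name="form.branchId" default="">

        <cfset errors = arrayNew(1)>
        <cfif NOT len(trim(form.customerName))>
            <cfset arrayAppend(errors, "Name is required.")>
        </cfif>
        <cfif NOT isValid("email", form.customerEmail)>
            <cfset arrayAppend(errors, "A valid email address is required.")>
        </cfif>
        <cfif NOT isNumeric(form.branchId)>
            <cfset arrayAppend(errors, "Branch selection is invalid.")>
        </cfif>

        <cfif NOT arrayLen(errors)>
            <!--- Never trust the submitted SELECT: confirm the branch really exists,
                  via a parameterised query on a read-only datasource --->
            <cfquery name="qBranch" datasource="#application.dsnReadOnly#">
                SELECT branch_id
                FROM branches
                WHERE branch_id = <cfqueryparam value="#form.branchId#" cfsqltype="cf_sql_integer">
            </cfquery>
            <cfif NOT qBranch.recordCount>
                <cfset arrayAppend(errors, "Unknown branch.")>
            </cfif>
        </cfif>

        <cfif arrayLen(errors)>
            <!--- Redisplay step 1 with the error messages --->
            <cfinclude template="order_step1.cfm">
        <cfelse>
            <cfset session.order.branchId = form.branchId>
            <cflocation url="order_step2.cfm" addtoken="no">
        </cfif>

    The database-level checks (NOT NULL columns, foreign keys, CHECK constraints on the pending orders table) then back all of this up even if the application layer is somehow bypassed.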

  • Deadline Branche in Correlation Process - Best Practice

    Hello,
    I have an integration process with a correlation - there is an asynchronous send step which activates the correlation, and afterwards an asynchronous receive step that uses that correlation.
    Furthermore I have a deadline branch to cancel the process after 24 hours.
    My question now is:
    There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue, as no active correlation can be found anymore. Is this correct? How can I avoid this situation? I guess a blocked queue would also block other messages that are sent to the integration process.
    What would be the best practice to handle such a scenario? I could leave the process instance open for 1 month, however this might have a significant impact on system performance.
    Thank you for your advice.

    There could be (rare) cases where a message arrives later than 24 hours, so according to my understanding the received message will block the inbound queue as no active correlation can be found anymore
    A "no correlation found" error will occur only when the BPM instance is running and the message tries to enter the relevant receive step (not the first one).
    However, when the process has already been cancelled, as you describe, you need not worry about the message going into the queue and blocking the BPM queue.
    Regards,
    Abhishek.

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound IDocs. The message type is SHPCON, and transaction volume is very high. I am a functional consultant, not an ABAP developer, but I will try my best to explain our current setup.
    1) We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2) For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We have instances almost every day where the RBDMANI2 job gets stuck running for a very long period of time. We frequently have multiple SHPCON IDocs coming in containing the same material number, and frequently have IDocs fail because the material in the IDoc has become locked. Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed IDocs begin processing. The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound IDocs such as this for maximum performance in a very high volume system. I know that RBDAPP01 processes IDocs in status 64 and 66, and RBDMANI2 is used to reprocess IDocs in all statuses. I have been told that setting the messages to trigger immediately in WE20 can result in poor performance. So I am wondering if the best practice is to:
    1) Set messages in WE20 to Trigger by background program
    2) Have a batch job running RBDAPP01 to process inbound IDocs waiting in status 64
    3) Have a periodic batch job running RBDMANI2 to try to clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Localization: Best practice suggestions Apps with mixed UI and Content languages?

    I am trying to write a simple Universal app that can be easily localized to different UI languages. But the app also needs to display content that is determined by user settings. For example, I would like the app UI to display in the user's region language (English, Russian, etc.) while at the same time having fields on the page whose strings come from other resources (Latin "la"?, Spanish, etc.).
    The samples are pretty good about how to set up resources with respect to the UI (e.g. Strings/en-us/Resources.resw), but not what to do if you want to also be able to draw strings from a different language. When the words in the content fields show in Latin, I don't want the UI to also be in Latin.
    Suggestions on best way to do this?
    Thanks,
    -Tom19

    Hi Tom19,
    I did not receive the email notification for your reply in my mailbox, which is strange. Sorry for the late response.
    Basically, we have best practice documentation for you:
    Creating and retrieving resources in Windows Store apps, and also
    Quickstart: Using string resources. Take a look at these documents to see if they help.
    --James

  • Re-engineering of an existing process / Best Practice (customization)

    Hi all of you,
    We are implementing SAP ECC 6.0 for one of our clients. The client is asking us to compare their existing business processes with SAP Best Practice / standard processes and, based on the result, to prepare a gap analysis between the existing processes and best practice for their business.
    As I understand it, SAP itself embodies best practice in the respective domains / business processes, so by implementing SAP ERP the client will get best practice for their business processes. But the thing is, how can I explain to the client that SAP already provides the best practice, so that the client will accept the SAP process as the best practice for their business?
    Please give me a solution
    Regards,
    Ramki

    f l,
    I'm not sure deleting keys from the registry is ever a best practice; however, Xcelsius has listings in:
    HKEY_CURRENT_USER > Software > Business Objects > Xcelsius
    HKEY_LOCAL_MACHINE > SOFTWARE > Business Objects > Suite 12.0 > Xcelsius
    The current user folder holds temporary settings, such as how you've modified your interface.
    The local machine folder holds more important information.
    As always, it's recommended that you backup the registry and/or create a restore point before modifying or deleting any keys.
    As for directories, the only directory Xcelsius uses is the one you install to.  It also places some install logs in the temp directory, but they have no effect on the application.

  • Multiple IPs and Outbound IP on 2008, best practice suggestion...

    Hello,
    I need a suggestion on an issue;
    I have a Windows 2008 R2 SP1 Std. Ed. server with 3 IPs, and each of them uses the same gateway. By design, the IP which is closest to the gateway is the default outbound IP on W2K8_R2_SP1_SE.
    I want to choose any other IP out of other 2 assigned IPs as default outbound one.
    example:
    GATEWAY: 10.0.0.1
    IP1: 10.0.0.2 (default outbound by design)
    IP2: 10.0.0.3 (the one I want it to be default outbound)
    IP3: 10.0.0.4 (not important)
    There are basically 2 choices available to me right now. Can you please take a moment and suggest one of the solutions below, or state if you know the best practice for such a case? Thank you very much in advance =)
    First Solution:
    apply this command: Netsh int ipv4 add address 12 10.0.0.1 255.x.x.x skipassource=true
    then apply these 3 hotfixes:
    IP addresses are still registered on the DNS servers even if the IP addresses are not used for outgoing traffic on a computer that is running Windows 7 or Windows Server 2008 R2
    http://support.microsoft.com/kb/2386184
    The "skipassource" flag of IP addresses is cleared after you use the GUI to change IP settings of a network adapter in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2554859
    FIX: IIS Manager does not display IP addresses that are assigned to the network adapter together with the skipassource flag
    http://support.microsoft.com/kb/2551090
    Second Solution:
    Simply create 2 interfaces. Use the first one with the IP that I want as the outbound default, and dump all other IPs onto the second interface. The 2 interfaces will have the same gateway, but Windows will treat the first one as the outbound default.

    I believe you want to set the metric on the interfaces.
    You can do this by altering your routing table with route.exe, or alternatively you can change the interface metric in the TCP/IP advanced properties for your network adapter (via Control Panel). By default it uses an automatic metric (i.e. Windows chooses which interface to use).
    For your reference (and the reference of anyone else facing a similar challenge), the metric is a weighted value Windows will use to determine which interface to use for a particular endpoint. Here is the definition from the route.exe documentation:
    metric Metric : Specifies an integer cost metric (ranging from 1 to 9999) for the route, which is used when choosing among multiple routes in the routing table that most closely match the destination address of a packet being forwarded. The route with the lowest metric is chosen. The metric can reflect the number of hops, the speed of the path, path reliability, path throughput, or administrative properties.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • Any best practice/suggestion on giving Id's for UI Component

    Hi,
    I came to know that, for better performance, ids on naming containers should be no longer than 7 characters.
    What about UI components other than container components?
    Is there any best practice for assigning ids to UI components, and for their length?
    Do we face any issue if we give ids longer than 7 characters (just to make the id a meaningful one)?
    Thanks in Advance
    Raguraman

    A quotation from the Oracle® Fusion Middleware Performance and Tuning Guide, 11g Release 1 (11.1.1), E10108-02:
    The "id" attribute should not be longer than 7 characters in length. This is particularly important for naming containers. A long id can impact performance, as the amount of HTML that must be sent down to the client is impacted by the length of the ids.

  • Filming to film editing process - best practice?

    Hey everyone,
    This is an odd question, and I'm not really sure where to start looking. But basically, I've been doing a lot of film editing for my company. They send out 2 people to do the filming, then hand me the raw footage to edit. I am given a rough storyboard as well.
    Now, my problem is this - because the filming is done by easiest access at the time (so it won't necessarily be in the order of the storyboard) and I'm not involved in the actual filming (so I don't know roughly when things are filmed), it's taking me so much longer to go through all the footage and figure out what I need.
    So does anyone have any good links or advice on how best to bridge these two processes? Or any experience in the industry to help me out?
    I get very tight deadlines and don't really have time to trawl through hours of video, which I then forget and get lost in anyway, so I have to search through it again.
    Thank you for your help.

    Steven L. Gotz wrote:
    Or, learn to use Adobe Prelude. I don't use it, but those who do might chime in here with their own opinions on the subject.
    It's been a little while since I've worked extensively in Prelude, so some of the features and details may have changed, but the workflow is the same, so I'll give it a shot...
    Prelude is an app specifically designed to log and manage your footage before (and during...) the editing process. There's a lot you can do with it to edit metadata, add markers or selectively ingest footage, but at its most basic level it's good for marking things up into subclips and then ordering them as a sort of rough cut to bring directly into PrPro. But of course that all takes time too, so depending on what your needs and proficiency level are, it still might be better to just use bins and things to organize it all in PrPro and then edit directly from that.
    Any way you cut it, 'logging' is a huge job, big enough that we developed a whole app just to help people do it effectively. That's also why production houses often have dedicated logging pros (though it still often falls to the editor in a big way). The production people in the field could certainly help your plight by trying to organize things a little as they go, but that sort of work comes with its own heavy pressures and priorities, so that probably won't happen.

  • Af:dialog model-restore / cancel-button processing best practice ?

    Using JDev 11.1.1.3. I have an af:dialog running in an af:popup which contains auto-submit components (for cross-component enablement, validation, etc.). My question is: what are the preferred ways of discarding the model changes submitted through popup processing if/when the af:dialog cancel button is pressed by the user? I figured that using a task flow for the popup content could be an option, using the task flow savepoint restore feature, but that looks more like a database restore than a model restore. I want to be able to restore the model content to the way it looked before the popup executed, without necessitating a submit to the database. How is this most commonly and best achieved?
    Thanks,

    Taskflow savepoints are not database savepoints. A transactional BTF can be configured to issue automatic savepoints at TF entry and, if needed, to "roll back" to them at TF exit. The internal implementation uses the ApplicationModule's passivation/activation mechanism to passivate the AM state at TF entry and later to activate the AM state at TF exit, back to the state passivated at entry. In this way it appears that you have not made any modifications in ADF BC, so your model layer will be restored to the state before TF entry. (Of course, you must not perform any DB commits during the lifetime of this TF.) I have successfully used this mechanism for the same goal you are asking about.
    Also there are savepoints managed by the ADF Controller, but I could be of little help here because I have never used them. I suspect that this mechanism could be what you need, so you may have a look here for more details:
    Adding Save Points to a Task Flow
    and in this thread:
    {thread:id=2128956}
    Dimitar

  • PL/SQL After submit process - best practice?

    I have an after-submit process which fires a PL/SQL procedure. In this procedure I do some updates, and I would also like to generate some XML output and send it to the browser so that the user can save it to a file. What I'm asking is: what is the proper way to handle this?
    I realize that starting the procedure from an "after submit process" is too late. If I understand correctly, the page is already rendered at that time, so the htp.p output from the PL/SQL procedure does not show (but the procedure is executed). So I created a branch to the PL/SQL procedure (after the button is pressed). That way the procedure actually creates a new window and I can use the htp.p functions. Although now I have trouble closing the window, I hope I can manage that.
    Is there some other, better way to do the export? Maybe a JavaScript popup and calling the procedure from there? Any suggestions?
    Thanks!
    Marko

    How should I send this content to the user so that their browser recognizes it as a file (for opening or saving)?
    Put that code in an onLoad process, similar to what Scott shows at http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html
    With this in place, when you issue a show request on that page, your generated content will be offered by the browser using an open/save dialog box.

  • iBook to desktop syncing best practices

    I'm trying to keep my iBook in sync with my G5 desktop: client projects in addition to the Entourage data files, etc. I've come across numerous scenarios and recommendations. Any best practice suggestions, i.e. software, syncing scenarios, automation, etc., would be greatly appreciated.

    Hello Hugh
    The settings that you are looking for are in iTunes. You can choose to sync only unlistened podcasts.
    If you go to the podcast section in iTunes, there is a field that says Keep, and you can choose from the following options:
    All episodes
    All unplayed episodes
    Most Recent episode
    Then when you connect your iPod you will see an option to only sync the unlistened podcasts, and you should be all set.

  • IPS Tech Tips: IPS Best Practices with Cisco Remote Management Services

    Hi Folks -
    Another IPS Tech Tip coming up and this time we will be hearing from some past and current Cisco Remote Services members on their best practice suggestions. As always these are about 30 minutes of content and then Q&A - a low cost high reward event.
    Hope to see you there.
    -Robert
    Cisco invites you to attend a 30-45 minute Web seminar on IPS Best Practices delivered via WebEx. This event requires registration.
    Topic: Cisco IPS Tech Tips - IPS Best Practices with Cisco Remote Management Services
    Host: Robert Albach
    Date and Time:
    Wednesday, October 10, 2012 10:00 am, Central Daylight Time (Chicago, GMT-05:00)
    To register for the online event
    1. Go to https://cisco.webex.com/ciscosales/onstage/g.php?d=203590900&t=a&EA=ralbach%40cisco.com&ET=28f4bc362d7a05aac60acf105143e2bb&ETR=fdb3148ab8c8762602ea8ded5f2e6300&RT=MiM3&p
    2. Click "Register".
    3. On the registration form, enter your information and then click "Submit".
    Once the host approves your registration, you will receive a confirmation email message with instructions on how to join the event.
    For assistance
    http://www.webex.com
    IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and any documents and other materials exchanged or viewed during the session to be recorded. By joining this session, you automatically consent to such recordings. If you do not consent to the recording, discuss your concerns with the meeting host prior to the start of the recording or do not join the session. Please note that any such recordings may be subject to discovery in the event of litigation. If you wish to be excluded from these invitations then please let me know!

    Hi Marvin, thanks for the quick reply.
    It appears that we don't have Anyconnect Essentials.
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 100            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    VPN-DES                           : Enabled        perpetual
    VPN-3DES-AES                      : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 250            perpetual
    Total VPN Peers                   : 250            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    This platform has an ASA 5510 Security Plus license.
    So then what does this mean for us VPN-wise? Is there any way we can set up multiple VPNs with this license?

  • CSS best practice / keepalives

    We have a Cisco 11503 running 7.40.1.03 (standard feature set) that we are setting up as a load balancer for a new e-mail system. I had two previous threads - thanks to Gilles and the others who responded. The box is now more or less configured to do what we want it to do, but I'm curious about "best practice" suggestions for keepalives.
    As I understand it, keepalives are per service. As an example, we have two webmail servers. They are only running SSL, so each server is a service with keepalive type ssl. If webmail1 loses its Apache or just dies entirely, the keepalive will not respond, and the CSS will send all traffic to webmail2, which still has its keepalives active.
    This is all well and good. But our IMAP servers are running multiple protocols - 7 of them. I have two services configured, one for each server, with no protocol specification. Then I have a content rule for each separate protocol, where the port #s are configured.
    I am thinking that if I want the most out of the CSS, I need to configure a separate SERVICE for each protocol for the e-mail servers, with a specific keepalive for each individual protocol. That way if SSH goes away, the CSS will close SSH to email1 and only send that traffic to email2, but will still send IMAP or SIMAP to email1, since those protocols didn't go down.
    For me this seems like a configuration disaster. I'd need a separate service for each server and each protocol, and then a separate content rule as well for every service and every protocol.
    Is this correct? Or is there some way of streamlining the configuration to reduce the number of services and/or content rules?
    Thank you! And let me know if the configuration would be helpful.
    Cheers...

    The best approach is indeed to split each protocol and create a separate service and rule for each of them.
    2 servers and 7 protocols is not a big config [some customers have 300 servers and 2 or 3 protocols, which makes it more problematic to configure].
    If you really think this is too much, simply create 1 ip service per server and 1 ip content rule.
    You then don't monitor the protocols, just ip connectivity.
    It's an easy config and it works, but you don't have the granularity to detect specific protocols going down.
    Regards,
    Gilles.

  • Best practices for disabling an employee's account, but leaving the mailbox available for others while not accepting messages

    I'm sure that other organizations have some policy for this. In our case, we want to keep the mailbox available for others to still access, but disable the user account and remove it from OWA.
    In this case, I've disabled the AD object, disabled OWA from the features, and set the mailbox to only receive emails from a dummy mailbox (so that no new emails are accepted).
    This all works fine and senders receive an NDR that their mail was rejected; however, I'd also like to set a friendlier custom NDR telling them to call the office instead when any sender attempts to send email to that recipient.
    What would best practices, suggestions be for this behavior?

    Hi,
    According to your description, the user object in AD has been disabled.
    In this case, the mailbox most likely cannot be accessed, so OOF probably won't help you.
    If I misunderstand your meaning, please feel free to let me know.
    Instead, we can depend on a transport rule:
    Condition: The recipient is ...
    Action: send rejection message to sender with enhanced status code
    http://technet.microsoft.com/en-us/library/bb123506(v=exchg.141).aspx
    Thanks,
    Angela Shi
    TechNet Community Support
