New to ColdFusion - Question regarding best practice

Hello there.
I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
The question I have is around the actual separation of code, and whether there are any best practices preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: front end (JSPs or ASPs) -> business objects -> DAOs -> database. Most of the code that I have written in these three languages has, for the most part, followed this simple structure.
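For what it's worth, the same layering translates directly to ColdFusion using CFCs. Here is a minimal sketch; the file names, table, and column names are my own assumptions, and the script-style queryExecute syntax needs a reasonably recent CFML engine (older code would use <cfquery> tags instead):

```cfml
// model/UserDAO.cfc -- data-access layer: the only place SQL lives
component {
    public query function getUserById( required numeric id ) {
        return queryExecute(
            "SELECT id, name FROM users WHERE id = :id",
            { id = arguments.id }
        );
    }
}

// model/UserService.cfc -- business layer: delegates persistence to the DAO
component {
    public any function init() {
        variables.dao = new model.UserDAO();
        return this;
    }
    public query function getUser( required numeric id ) {
        // business rules / validation would live here
        return variables.dao.getUserById( arguments.id );
    }
}

<!--- userDetail.cfm -- front end: no SQL and no business logic --->
<cfset user = new model.UserService().getUser( url.id )>
<cfoutput>Hello, #user.name#</cfoutput>
```

Each file holds one component, so the .cfm page never touches SQL; swapping the database or adding caching would only touch the DAO layer.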
As I dive deeper into ColdFusion, most of the examples I have seen from veterans of the language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online, where most of the code is written in one file; I've also been able to look at full projects that have been created with this language.
I work with a couple of developers who have been writing in ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this."
I have searched online for any type of best practices or discussions around this and haven't seen much of anything.
I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
Thanks for the help.

Frameworks for web applications can require a lot of overhead, more than you might normally need when programming in ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and then program in it comfortably. I decided not to use Fusebox on other projects for this reason.
For maintainability, sometimes it's better not to use a framework. While there are a number of ColdFusion developers, those who know the Fusebox framework are in the minority. When using a framework, you always have to consider the amount of time needed to learn it and implement it successfully. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is: if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
While working on a website for Electronic Component sourcing, I encountered this dynamic several times.
Michael G. Workman
[email protected]
http://www.usbid.com
http://ic.locate-ic.com

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation

    Hi all,
    I have a quick question regarding best practice when dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc.  I get that it's best practice to try to separate traffic where you can, especially for things like FT, but I just wondered if there was a preferred method of achieving this.  What I mean is ...
    - Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over? (This would sort of give you a backup in situations where you have limited physical NICs.)
    - Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since that to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules; for example, must FT absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen?  Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.
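As one hedged illustration of the first option, the per-portgroup failover order can be set with VMware PowerCLI; the host, port group, and vmnic names below are hypothetical:

```powershell
# Make vmnic2 active and vmnic3 standby for the FT port group only,
# so FT traffic normally has a NIC to itself but can still fail over.
Get-VMHost "esx01.lab.local" |
  Get-VirtualPortGroup -Name "FT-Network" |
  Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic3
```

Other port groups on the same vSwitch would use the reverse active/standby order, giving separation in the normal case and redundancy in the failure case.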

  • Question regarding best practice

    Hello Experts,
    What is the best way to deploy NWGW?
    We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway. We chose the central Gateway hub scenario in a 3-tier setup. Is this all that's required in order to connect this hub Gateway to the business systems, i.e. ECC? Or do we also have to install the Gateway add-on on our business systems in order to expose the development objects to the hub? I'm very interested in understanding how others are doing this and what has been the best way according to your own experiences. I thought creating a trusted connection between the Gateway hub and the business system would suffice to expose the development objects from the business system to the hub, in order to create the Gateway services in the hub out of them. Is this a correct assumption? Happy to receive any feedback, suggestions and thoughts.
    Kind regards,
    Kunal.

    Hi Kunal,
    My understanding is that in the HUB scenario you still need to install an addon in to the backend system (IW_BEP). If your backend system is already a 7.40 system then I believe that addon (or equivalent) should already be there.
    I highly recommend you take a look at SAP Gateway deployment options in a nutshell by Andre Fischer
    Hth,
    Simon

  • Architecture/Design Question with best practices ?

    Should I have a separate web server and WebLogic for the application and for IAM?
    If yes, then how will the two communicate? For example, should I have a WebGate on each server, with the two communicating with each other?
    Are there any references that would help in deciding how to design this? And if I have separate WebLogic instances, one for the application and one for IAM, how will session management occur, etc.?
    How does the general design happen in an IAM project?
    Help appreciated.

    The standard answer: it depends!
    From a technical point of view, it sounds better to use the same "middleware infrastructure", BUT then the challenge is to find the latest WebLogic version that is certified by both the IAM applications and the enterprise applications. This will pull down the version of WebLogic, since the IAM application stack is certified against older versions of WebLogic.
    From a security point of view (access, availability): do you have the same security policy for the enterprise applications and the IAM applications (component of your security architecture)?
    From an organisational point of view: who is the owner of WebLogic, the enterprise applications and the IAM applications? At one of my customers, applications and infrastructure/security were in two different departments; having a common WebLogic domain didn't fit the organization.
    My short answer would be: keep it separated, this will save you a lot of technical and political challenges.
    Didier.

  • ColdFusion AIR Synchronization - Best practices

    I'm a long-time ColdFusion developer who has put off learning Flex because I never had a need (not to mention the bad taste Flash left in my mouth in its early stages).  Now I'm jumping in because of its integration with AIR + SQLite and ColdFusion 9, which allows online/offline support.
    Really cool stuff, but to my knowledge there are only two blogs that have a post about how to do this synchronization (Jayesh's blog and Terrance's blog), both of which use the same sample application.  So I'm hoping there are experts on this subject (which is not entirely CF + AIR sync related) who can shed some light for me.
    My questions are from a best-practice standpoint, primarily: where should all of this code go?  Let me be a little more specific (and I apologize in advance for my misuse of terminology, and my overall ignorance of OO):
    In the example code there are two tables (AS classes): Address.as and Customer.as.  The contents of these files, and the ORM concepts in general, I understand.  However, the files are lodged in a folder called "onetoone" in the main "src" directory.  In a best-practice scenario, should these be in a "model.vo" directory (assuming no other frameworks are being used)?
    Also in the example code, all of the AS code that handles connecting to the back end, fetching, saving to SQLite and back to the server is in the main application mxml file "CFAIROfflineCustomerManagerApp.mxml".  I'm not a Flex developer yet, but I don't think this is good.  I guess it's fine for a simple 2 table scenario, but in real life this will become a huge beast.  How would this best be broken up into separate files?  I'm relatively new to OO, I get the concepts but haven't used it in a pure form in any production applications.  My initial inclination is to separate the code that deals with the CF back-end and the SQLite database into a "services" directory and all the code that pulls that data into the UI (and stuffs it back) into a "controllers" directory.
    Perhaps one of the many frameworks out there may clear things up for me?  I have been reluctant to use a framework at this point because I would like to understand better what they solve before choosing a framework.  Are there any examples of this new sync feature via cfair.swc in any of the popular frameworks?  I've searched, but turned up nothing.
    I think this is enough for now.  I like to keep my code really organized, and before I get too far into writing bad code in an unorganized way - I thought I'd ask for some guidance.  At least this way I can write bad code in an organized way!
    Thanks in advance for any advice, resources, etc...
    --Abram

  • Information regarding Best Practices required

    Dear Friends,
        Happy New Year......
    I'm working as part of a BI Excellence team in a reputed company.
    I want to advise a client to install the BI Best Practices (scenario: SCM). In order to do that, I need to present to him the advantages of, and the difference between, the Best Practices (specifically for BI) and a general implementation.
    When I search help.sap.com, it generally speaks about the time consumption and guidelines of the overall SAP Best Practices.
    Can anyone help me with respect to BI (from blueprint to go-live) and the timeline differences between a SAP BI Best Practices implementation and a general implementation?
    For example, with a specific scenario like SCM: taking a cube for IM and describing the start-to-end implementation process and its timeline, how does this differ when we go with a SAP BI Best Practices installation?
    Please provide your valuable suggestions, as I don't have any implementation experience.
    Requesting your valuable guidance.
    Regards
    Santhosh kumar.N

    Hi,
    http://help.sap.com/saphelp_nw2004s/helpdata/en/f6/7a0c3c40787431e10000000a114084/frameset.htm
    http://help.sap.com/bp_biv370/html/Bw.htm
    Hope it helps........
    Thanks & Regards,
    SD

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed the documentation and posts, I find that there is not much information available regarding best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other redwood objects from say DEV->QAS->PROD. Presentations from the help.sap.com Web Site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage for the product. There is a SAP note (#895253) on making Redwood highly available. I am open to comments inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen in a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock) keeping in mind the character length limitation of 30 characters. I also have an associated issue with Event naming given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, then we need to include the environment in the event name. The downside here is that we lose transportability for the job stream. We need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least two: one for development and quality, and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks, etc. It is also very easy and less time-consuming to create them afresh in each system; only the creation of complicated job chains can be time-consuming.
    In normal cases, the testing of background jobs mostly happens only in the SAP quality instance, followed by the final scheduling in production. So it is very much possible to just export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is really recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server and licensing information in a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM, and hence you have only one process server.
    Regarding naming conventions, it is recommended to create a centrally accessible naming-convention document and then follow it. For example, in my company we use a naming convention for jobs such as Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management, and ZCHGSTA2_AU01_LSV is free text as provided by the batch-job requester.
    For other Redwood Cronacle-specific objects, you can also derive naming conventions based on SAP instances; for example, if you want all related scripts/job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, it is highly recommended to agree on such naming conventions centrally and document them.
    Also, the integration of SAP Solution Manager with Redwood is there to receive monitoring and alerting data, and to pass Redwood Cronacle information to SAP Solution Manager to create a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Regarding Best Practices Documents

    Hi All,
    How do I search for and download SAP Best Practices documents?
    Thanks in Advance
    Pavan

    Hi Pavan,
    Please go to this URL: http://help.sap.com/
    At the top centre of the page, you will find the SAP Best Practices tab.
    In there, you have Overview, Baseline packages, Industry packages, and Cross-industry packages.
    Click on the desired option and you can download the Best Practices.
    Given below is the Best Practices URL for the automotive industry package, Dealer Business Management:
    http://help.sap.com/bp_dbmv1600/DBM_DE/html/index.htm
    (This is for your reference only).
    Hope this helps!
    Regards,
    Shilpa

  • Question on best practice/optimization

    So I'm working with the Custom 4 dimension, and I'm going to be reusing the highest member in the dimension under several alternate hierarchies. Is it better to drop the top member under each of the alternate hierarchies, or to create a single new member and copy the value from the top member to the new base one?
    Ex:
    TotC4
    --Financial
    ----EliminationA
    ------EliminationA1
    ------EliminationA2
    ----GL
    ------TrialBalance
    ------Adjustments
    --Alternate
    ----AlternateA
    ------Financial
    ------AdjustmentA
    ----AlternateB
    ------Financial
    ------AdjustmentB
    In total there will be about 8 alternate adjustments (it's for alternate translations, if you're curious).
    So should I repeat the entire Financial hierarchy under each alternate rollup, or just write a rule saying FinancialCopy = Financial? It seems like a trade-off between performance and database size, but I'm not sure whether this is even substantial enough to worry about.

    You are better off having alternate hierarchies in which you repeat the custom member in question (it would become a shared member). HFM is very fast at aggregating rollups; this is more efficient than creating entirely new members that use rules to copy the data from the original member.
    --Chris

  • Question on best practice to extend schema

    We have a requirement to extend the directory schema. I wanted to know what is the standard practice adopted
    1) Is it good practice to manually create an LDIF so that this can be run on every deployment machine at every stage?
    2) Or should the schema be created through the console the first time and the LDIF file from this machine copied over to the schema directory of the target server ?
    3) Should the custom schema be appended to the 99user.ldif file or is it better to keep it in a separate LDIF ?
    Any info would be helpful.
    Thanks
    Mamta

    I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g. CVS). The only problem with this is that schema replication will not work - you have to manually copy the file to every server instance.
    If you create the schema through the console, schema replication will occur - schema replication only happens when schema is added over LDAP. The schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
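    For illustration, a minimal custom schema file of the kind described might look like the sketch below, saved as e.g. 60example.ldif in the schema directory. The names are placeholders, and the OIDs under 1.3.6.1.4.1.99999 are dummies; a real deployment would use an OID arc registered to your organization:

```ldif
dn: cn=schema
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'exampleDeptCode'
 DESC 'Example custom attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
 SINGLE-VALUE X-ORIGIN 'user defined' )
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'examplePerson'
 DESC 'Example auxiliary class' SUP top AUXILIARY
 MAY ( exampleDeptCode ) X-ORIGIN 'user defined' )
```

    Keeping the definitions in one small file like this makes them easy to track in change control and to copy to each server instance.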

  • Advice needed regarding best practice

    Hi - I'm curious whether what I have set up now should be changed to best utilize Time Machine. I have an iMac with a 750GB drive (a small chunk is partitioned for Vista); let's assume I have 600GB dedicated to the Mac.
    I have two FireWire external drives: a 160GB and a 300GB.
    Currently, I have my iTunes library on the 300GB drive, as well as a few FCE files. I have made the 160GB drive the Time Machine drive. Would I be better off moving my iTunes library to the internal HD and then using the 300GB drive as the Time Machine drive? As I have it now, I don't think my iTunes library is getting backed up. In an ideal situation, is it safe to assume your Time Machine disk should be at least as large as, if not larger than, the internal HD? Thanks.
    Steve

    Steve,
    I would recommend using a drive that is 2x the size of the files you are going to back up. This is specifically because, when you make changes to files, Time Machine starts backing up the new files you have created. It will back up once every hour, and it will only make a backup copy of the files that you have modified. If you are backing up your home folder and you are using FCE, I would say backing up to the 160GB drive would be sufficient. If you were planning on backing up your home folder and your iTunes library, I would recommend the 300GB drive. The only reason you would need a backup drive 2x the size of your HD is if you were backing up your entire drive.

  • Question on best practice for NAT/PAT and client access to firewall IP

    Imagine that I have this scenario:
    Client(IP=192.168.1.1/24)--[CiscoL2 switch]--Router--CiscoL2Switch----F5 Firewall IP=10.10.10.1/24 (only one NIC, there is not outbound and inbound NIC configuration on this F5 firewall)
    One of my users is complaining about the following:
    When clients receive traffic from the F5 firewall (apparently the firewall is doing PAT, not NAT), the client sees the IP address 10.10.10.1.
    Do you see this as a problem? Should I make another IP address range available and do NAT properly so that clients will not see the firewall IP address? I don't see this situation as a problem, but please let me know if I am wrong.

    Hi,
    Static PAT is the same as static NAT, except it lets you specify the protocol (TCP or UDP) and port for the local and global addresses.
    This feature lets you identify the same global address across many different static statements, so long as the port is different for each statement (you CANNOT use the same global address for multiple static NAT statements).
    For example, if you want to provide a single address for global users to access FTP, HTTP, and SMTP, but these are actually all different servers on the local network, you can specify static PAT statements for each server that use the same global IP address but different ports.
    And for PAT you cannot use the same pair of local and global address in multiple static statements between the same two interfaces.
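    To make this concrete, here is a sketch in pre-8.3 PIX/ASA syntax; all addresses are hypothetical, with one global address fronting three internal servers that are distinguished only by port:

```
! Static PAT: same global IP 203.0.113.10, different port per server
static (inside,outside) tcp 203.0.113.10 www  192.168.1.21 www  netmask 255.255.255.255
static (inside,outside) tcp 203.0.113.10 ftp  192.168.1.20 ftp  netmask 255.255.255.255
static (inside,outside) tcp 203.0.113.10 smtp 192.168.1.22 smtp netmask 255.255.255.255
```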
    Regards
    Bjornarsb

  • Buying a new Mac... question regarding moving files

    I have an iMac that is many years old. I also have a PowerBook which is approx. 3 years old. I'm considering buying a new Mac. I'm into photography and need lots of hard drive space, and the newer screens are much more inviting. Anyway, I know when I purchased the laptop I connected it to the iMac and it loaded all the programs, fonts, etc., so I could just start using the laptop with all the things I had on my iMac. Since both of my machines are quite old, I have lots of stuff on them that, quite frankly, I'm not smart enough to know I don't need. I don't want to immediately fill up the hard drive space on my new machine with things that are worthless. Is there a way of getting the things I need and use onto my new computer (like downloaded website fonts, actions, effects) without getting the worthless stuff? Thank you so much.

    Your case must be similar to mine; below is the final version of what I did.
    For the entire history of the problem, see: http://discussions.apple.com/thread.jspa?messageID=6864711&#6864711
    My final reply: I managed to do it using an Ethernet cable!
    I did an erase and install on the MBP while having it connected to the MB via Ethernet. When it came to the screen that asks "do you want to transfer...", I chose "from another Mac". Then comes the option to use FireWire, or a small button: use Ethernet. I pressed it and got this info:
    put the install DVD 1 (MBP) into your other computer and install: DVD or CD sharing setup
    So when you put the MBP install DVD into the MB, you get a window, and among other things there's a folder, "Optional Installs", and in there this DVD or CD setup.
    Then, when Migration Assistant is open on the MB, a dialogue box comes up, and there is an option to transfer to another Mac.
    'Your info has been transferred successfully.'
    I had to do it using an Ethernet cable; at $3.95 a FireWire cable is incredibly cheap in the US, but I don't live in the US, and here this cable costs around $50.
    Thanks for the time and help!

  • New person's question: regarding importing media into Premiere CS4

    Hi folks, please excuse the level of my ignorance, but I really need a bit of clarification.  If I import video from a tape captured from my Canon A1 (HD, 30 frames progressive), or if I import into a project a video that was rendered as 1920x1080 progressive F4V, is there any difference in quality?
    In other words, is digital information captured in the native format via the Adobe capture window (set to HD) of better "quality" than a movie that was first rendered from a different project in the HD F4V format?
    I am embarrassed to ask this question, as I keep thinking that zeros and ones captured in an HD 1920x1080 Adobe Flash movie cannot degrade when imported into a project, versus digital info from a tape, but maybe I am missing something that is essential in the workflow process.
    It is easy for me to output sections of my training films as F4V and store them on my hard drive. Later I can import "blocks" of edited material into another training project, but I am afraid that I might be degrading the material with this workflow.
    Sorry to ask such a newbie question but I really don't want to make a fundamental mistake that will have a negative effect down the road.
    Thanks again,
    Ken Araujo
    www.kenaraujo.com
    [email protected]

    Capturing is just a transfer of 0s and 1s; nothing gets changed, so there is no quality loss. What was recorded is transferred.
    Rendering, on the other hand, means a change from format A to format B. That always implies a loss of quality; how severe depends on the formats used.

  • Question on best practice....

    Friends,
    Final Cut Studio Pro 5/Soundtrack Pro 1.0.3
    Powerbook G4, 2GB Ram
    I have a DV session recorded over 6 hours that I need some assistance with. The audio for the session was recorded in two instances: via a conference "mic" plugged into a Marantz PDM-671 audio recorder onto CompactFlash (located at the front of the room by the presenter(s)) AND via the built-in mics on our Sony HDR-FX1 video camera. Needless to say, the audio recording on the DV tape is not very good (the presenters' voices are distant, with lots of "noise" in the foreground), while the Marantz recording is also not great... but better.
    Since these two were not linked together or started recording at the same time, the amount/time of recording doesn't match. I'm looking for either of the following:
    (a) Ways to clean up or enhance the audio recording on the DV tape so that the "background" voices of the presenters are moved to the foreground and able to be amplified properly.
    OR
    (b) A software/resource that would allow me to easily match my separate audio recording from the Marantz to the DV tape video, so I could clean up the "better" of the two audio sources, but match the audio and video without having our speakers look like they're in a badly dubbed film.
    Any advice or assistance you could give would be great. Thanks.
    -Steve
    Steven Dunn
    Director of Information Technology
    Illinois State Bar Association
    Powerbook G4   Mac OS X (10.4.6)   2GB RAM

    Hello Steven,
    What I would do in your case, since you have 6 hours, is edit the show with the audio off the DV camera. Then, as painful as this will be, get the better audio from the recorder and sync it back up until it "phases" with the audio from the DV camera. One audio track will have the DV camera audio on it. Create another audio track, import the audio from the recorder, and place it on the second audio track. Find the exact "bite" of audio and match it to the start of the DV camera audio clip. Now slip/slide the recorder audio until the sound starts to "phase". This will take a while, but in the end it works when the original camera audio was recorded from across the room. Good luck.
