What is best practice for dealing with Engineering Spare Parts?

Hello All,
I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
Our current process is as follows:
All materials are set up as HIBEs
Each material is batch managed
The Batch field is used for the Bin location
We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
Regards
Chris

Hi,
I hope all 50,000 spare parts are maintained as stock items.
1. Based on the usage of those spare parts, try to define safety stock & define the MRP type as "Reorder Point Planning". This way you can avoid petty cash purchases.
2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
3. By doing goods issue (GI) against the reservation, the quantity can be tracked against the order & the equipment.
As this question is MM & WM related, those forums can give better clarity on this.
Regards,
Maheswaran.

Similar Messages

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what is best practice for dealing with data retrieved via JDBC as Recordsets without involving third party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables (a rough sketch follows below).
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
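    To Erik's original question about what to convert ResultSets into: a common approach is to map each row inside the DAO and return a plain collection, closing the JDBC resources before the method returns. A minimal sketch of such a readPermissions method follows; the table and column names, and the DataSource wiring, are assumptions for illustration, not from the post.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    public class UserDAO {
        private final DataSource dataSource;

        public UserDAO(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        /** Returns the permission names for one user, joined across the link tables. */
        public List<String> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT p.name FROM users u " +
                "JOIN user_roles ur ON ur.user_id = u.id " +
                "JOIN role_permissions rp ON rp.role_id = ur.role_id " +
                "JOIN permissions p ON p.id = rp.permission_id " +
                "WHERE u.id = ?";
            List<String> permissions = new ArrayList<String>();
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        permissions.add(rs.getString("name"));
                    }
                }
            }
            // the ResultSet is closed by now; callers only ever see the List
            return permissions;
        }
    }
    The caller (a servlet or service class) just works with the returned List, so no JDBC resources leak out of the persistence layer.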

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it is difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites: do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and do not have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of its bin and put it "away". That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
    catch (Exception e) {
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
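    One general pattern, sketched here only as an illustration and not as Coherence's prescribed approach, is to keep custom Filters defensive so an unexpected failure in evaluate() is logged and treated as a non-match instead of propagating into the service thread. The class name, field names and reflective getter lookup below are hypothetical, not the original EdmundsEqualsFilter:
    import com.tangosol.util.Filter;
    import java.io.Serializable;

    public class SafeEqualsFilter implements Filter, Serializable {
        private final String propertyName;
        private final Object expectedValue;

        public SafeEqualsFilter(String propertyName, Object expectedValue) {
            this.propertyName = propertyName;
            this.expectedValue = expectedValue;
        }

        public boolean evaluate(Object target) {
            try {
                // reflective getter lookup, purely for illustration
                Object actual = target.getClass()
                        .getMethod("get" + propertyName)
                        .invoke(target);
                return expectedValue == null ? actual == null : expectedValue.equals(actual);
            } catch (Exception e) {
                // log and treat as a non-match rather than letting the exception
                // reach the DistributedCache service thread
                System.err.println("SafeEqualsFilter failed for " + propertyName + ": " + e);
                return false;
            }
        }
    }
    Note that in the trace above the failure is a ClassNotFoundException raised while the request message is being deserialized, before any filter logic runs, so defensive code inside the filter would not have caught this particular failure; the com.edmunds.vehicle.Style$PublicationState class still needs to be on the classpath of every storage-enabled member.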

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them as Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one bean instance per Recordset row, and then close the Recordset
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display? Yes.
    And use logic to avoid printing the redundant part of the data? No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than Strings, i.e. one object with a collection attribute to hold the related "many" records (see the sketch below).
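    A minimal sketch of that idea, mapping the rows of a single one-to-many join into one parent bean with a child collection; the Order/OrderLine names and columns are made up for illustration, not from the original post:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Parent bean holding a collection of child beans instead of flat Strings
    class Order {
        long id;
        String customer;
        List<OrderLine> lines = new ArrayList<OrderLine>();
    }

    class OrderLine {
        long id;
        String product;
        int quantity;
    }

    class OrderDAO {
        /** One query for the one-to-many join; the redundant parent columns are
         *  collapsed while building the object graph, not in the JSP. */
        List<Order> readOrders(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, l.id AS line_id, l.product, l.quantity "
                       + "FROM orders o JOIN order_lines l ON l.order_id = o.id "
                       + "ORDER BY o.id";
            Map<Long, Order> byId = new LinkedHashMap<Long, Order>();
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long orderId = rs.getLong("id");
                    Order order = byId.get(orderId);
                    if (order == null) {                 // first row for this parent
                        order = new Order();
                        order.id = orderId;
                        order.customer = rs.getString("customer");
                        byId.put(orderId, order);
                    }
                    OrderLine line = new OrderLine();    // every row carries one child
                    line.id = rs.getLong("line_id");
                    line.product = rs.getString("product");
                    line.quantity = rs.getInt("quantity");
                    order.lines.add(line);
                }
            }
            return new ArrayList<Order>(byId.values());
        }
    }
    The JSP then just iterates the parent list and, inside each parent, its child collection, so no "skip the redundant columns" logic is needed in the view.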
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining how backwards the work they have done is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error-prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information and they will send back shipment status and date information, including some additional information like the house bill. Sending the shipment data from our SAP (ECC 6) system is not an issue; we have done that before. What is new to us is receiving the status updates back from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table) and which standard function module or BADI to use for this?
    We are using ECC 6.0 sales and distribution, but no transportation management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package which is also integrated with R/3; this means a separate server is required if SAP TM is to be implemented, and it will take care of your expectations.  For more information on this, search in Google and you will find a couple of documents on this topic.
    G. Lakshmipathi

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
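    For the programmatic route mentioned above, here is a rough sketch only, assuming the standard oracle.jbo.client.Configuration helper; the AM definition name, configuration names and the "am.config" system property are hypothetical choices, not something from this thread:
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class AmConfigExample {
        public static void main(String[] args) {
            // pick "AppModuleTest" or "AppModuleProd" here, e.g. from a system
            // property or a properties file, instead of hard-coding it
            String configName = System.getProperty("am.config", "AppModuleTest");

            ApplicationModule am = Configuration.createRootApplicationModule(
                    "model.AppModule",   // hypothetical AM definition name
                    configName);
            try {
                // ... use the AM: view objects, transactions, etc.
            } finally {
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }
    That keeps the Test/Prod switch in one externalized property, which is also easy for an ANT target to set when building the EAR.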
    John

  • Best practice to deal with computer or departed employee

    Dear All,
    I would like to inquire about the best practice for dealing with the computer and computer account of a departed employee: should they be disabled, reset, deleted, or just kept as they are until needed by another user?
    Regards
    Hiam

    Ultimately your needs for their identities and equipment after they leave are what dictate how you should design this policy.
    First off, I recommend disabling the account immediately following the employee's departure. This prevents the user from using their credentials to log on again. Personally I have a "Disabled Users" OU in Active Directory; when I disable accounts I move them there for easy future retrieval.
    It is possible the user may return, or if they have access to certain systems you may need the account again. I would keep the accounts for a specific amount of time (e.g. 6 months, but this depends on your needs) and then delete them after this period of time.
    If the employee knows the passwords to any shared accounts (not a good idea, though many organizations have these) or has accounts in other systems that do not use Active Directory authentication, change the passwords to these accounts immediately following the employee's departure.
    If the employee had administrative access to their computer (not a good idea, though this is the reality in most cases) you should disable the computer account and remove it from the network. This will prevent the employee from remotely accessing the machine until you are able to rebuild or inspect it for unapproved changes.
    Ask the user's manager, team members, and subordinates if there are any files that the employee would have stored on their computer. Back these up as necessary.
    Most likely you will reuse the computer for another employee. For best results you should use an image so you can re-image their machine and not have to worry about whether they had installed any unwanted software (backdoors, viruses, illegal software, etc.).
    Hope this helps.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • What is best practice for installing Yosemite

    I am currently on OS X Mavericks version 10.9.5 on a MacBook Pro 13": 2.6 GHz Intel Core i5, 8 GB 1600 MHz DDR3.
    I am now downloading Yosemite 10.10.1, but since I've been reading all this negative feedback, I am having second thoughts about whether I should continue with the upgrade or not.
    Any suggestion on what is best practice for installing Yosemite? Or is it not yet time to upgrade since the platform is still immature?
    Thanks in advance.

    Check your apps are compatible with 10.10 - roaringapps.com
    http://www.etresoft.com/etrecheck can show what is running & installed - look for updates on the developer own sites.
    If you have many kernel extensions or startup items, look for updates to them too.
    Take a full bootable backup to another disk via Carbon Copy Cloner, SuperDuper! or Disk Utility.
    Disconnect the backup before you begin any install (ideally set it aside & leave it untouched in case you need to go back to 10.9).
    Personally I prefer a clean install when there are signs of multiple migrations (if you have upgraded several OS for a period of years). Setup Assistant/ Migration Assistant can import user data from a backup, but consider that Apps & 'other data' should be manually reinstalled from the latest versions.
    If you clean install (erase the HD before installation) then make sure you deauthorise iTunes & any other apps that are associated online (like find my Mac).
    Basically the steps you would take before selling a Mac…
    What to do before selling or giving away your Mac - Apple Support

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1 TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • What is best practice for...

    What is best practice for deploying applications through the IPCU to 10 ipads?
    I'm looking for a complete step-by-step of the best way to do this.
    Thanks in advance!

    Just place a modem into any console port. Ideally you use a terminal server, but is not always really needed.

  • What's best practice for logging messages in pageflow?

    What's best practice for logging messages in pageflow?
    Workshop complains when I try to use a Log4J logger by saying it's not serializable. Is there a context similar to JWSContext that you can get a logger from?
    There seems to be a big hole in the documentation on debug logging in workflows and JSP pages.
    thanks,
    Rodger...

    Make the configuration change in setDomainEnv.cmd. Find where the following variable is set:
    LOG4J_CONFIG_FILE
    and change it to your desired path.
    In your Global.app class, instantiate a static Logger like this:
    transient static Logger logger = Logger.getLogger(Global.class);
    You should be logging now as long as you have the categories and appenders configured properly in your log4j.xml file.
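    A minimal sketch of that setup as a plain Java class (the class body and example method are illustrative; in Workshop this code would live in Global.app, and the log4j.xml categories/appenders still have to be configured as described above):
    import org.apache.log4j.Logger;

    public class Global implements java.io.Serializable {
        // transient + static keeps Workshop's serializability check happy
        // while sharing one logger across instances
        transient static Logger logger = Logger.getLogger(Global.class);

        // hypothetical method just to show usage; call the logger the same
        // way from pageflow actions
        public void doSomething() {
            logger.debug("entering doSomething");
        }
    }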

  • What is best practice for conditional rendering?

    I have a set of radio buttons that conditionally render another set, which in turn conditionally renders a 3rd set.
    I am doing this by using a valueChangeListener on an h:selectOneRadio.
    When I click yes on the 1st set, it renders the 2nd set.
    When I click yes on the 2nd set, it renders the 3rd set.
    When I click no on the 1st set, it correctly removes the 2nd & 3rd sets, and the method also nulls the values.
    When I click yes on the 1st set again, it renders the 2nd set with the old values even though they were nulled, and I can see in the debugger they are still null.
    So there seems to be an issue updating the model, but I don't know what that issue is. I'm sure it just has something to do with the way I am trying this conditional render. Please see below for my first field and my valueChangeListener method.
    Any help with what is a best practice for this scenario would be greatly appreciated.
    <h:panelGrid id="grid20a" columns="1">
                             <h:outputText value="Does this consult pertain to a specific "/>
                             <h:outputText value="planned or ongoing research project?: *"/>
                        </h:panelGrid>
                        <h:selectOneRadio id="specificResearch1" value="#{ethicsConsultBacking.bean.specificResearch}"
                             layout="lineDirection" required="true"
                             valueChangeListener="#{ethicsConsultBacking.processValueChangeSpecificResearch}">
                                  <f:selectItems value="#{ethicsConsultBacking.yesNoMap}" />
                                  <a4j:support event="onclick" reRender="form"/>
                        </h:selectOneRadio>
                        <h:message for="specificResearch1" styleClass="redText" />
    public void processValueChangeSpecificResearch(ValueChangeEvent e) throws AbortProcessingException {
        String newValue = e.getNewValue().toString();
        // reinitialize
        this.setRenderHumanSubjectResearch(false);
        this.setRenderIrbSection(false);
        this.setRenderIrbProtocolNumber(false);
        this.getBean().setHumanSubjectResearch(null);
        this.getBean().setPrimaryIrb(null);
        this.getBean().setIrbStatus(null);
        this.getBean().setIrbProtocolNumber(null);
        // check condition
        if (newValue.equalsIgnoreCase(this.YES)) {
            this.setRenderHumanSubjectResearch(true);
        }
        /*
         * Clearing validation messages.
         * This will get around the issue with having a field
         * on the form that is both required & immediate.
         */
        Iterator it = this.getFacesContext().getMessages();
        while (it.hasNext()) {
            it.next();
            it.remove();
        }
        this.getFacesContext().renderResponse();
    }
    I also tried doing this another way by just using an action method attached to the a4j:support tag, but that introduced a different set of issues, so I'll leave that out for now unless that is the direction someone would like to direct my issue in.
    Thanks in advance

    Similar issue is covered and explained here: [http://balusc.blogspot.com/2007/10/populate-child-menus.html]. Not sure if it solves your problem as you're using ajax4jsf whereas I don't, but it might give new insights. To the point you might need to bind the component(s) and use setValue(null) or setSubmittedValue(null).
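    A rough sketch of that binding idea, using standard JSF UIInput methods; the property name and wiring below are made up for illustration and would need to be adapted to the backing bean in the question (the component would be bound from the page with binding="#{ethicsConsultBacking.humanSubjectResearchInput}"):
    import javax.faces.component.UIInput;
    import javax.faces.event.ValueChangeEvent;

    public class EthicsConsultBacking {
        // bound from the page via the binding attribute
        private UIInput humanSubjectResearchInput;

        public UIInput getHumanSubjectResearchInput() {
            return humanSubjectResearchInput;
        }

        public void setHumanSubjectResearchInput(UIInput input) {
            this.humanSubjectResearchInput = input;
        }

        public void processValueChangeSpecificResearch(ValueChangeEvent e) {
            if (!"yes".equalsIgnoreCase(String.valueOf(e.getNewValue()))) {
                // clear both the submitted and the local value so the old state
                // is not redisplayed when the component is rendered again
                humanSubjectResearchInput.setSubmittedValue(null);
                humanSubjectResearchInput.setValue(null);
                humanSubjectResearchInput.setLocalValueSet(false);
            }
        }
    }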

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently however we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails 1st time but may work 2nd time). Also We've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering old application then reregistering new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    Thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef, and finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSP's however benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may avoid the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some addition workload on the LDAP server which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack but unfortunately you wont see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • New mac - what is best practice for accounts?

    I am about to get a new Mac (iMac G5), and would like it to work well (i.e. file transfer and backup to/from) with my existing PowerBook.
    Is there a best practice for account setup? Should I use the same accounts between the two, or can I set up a new account on the new Mac?
    Related to this: what will keychain sync give me? Does that only work with the same accounts on two Macs?
    thanks
    John

    With Tiger there is a Migration Assistant that will move everything over from your PowerBook to your new iMac G5. All you need is a FireWire cable; when prompted during your first start-up, select Migration Assistant and connect the two computers. You will need to boot up your PowerBook holding down the 'T' key before you connect the two together. Good luck, Jack.
