Best Practices Methodologies and Implementation training courses

Does anybody know of any Best Practices Methodologies and Implementation training courses that are available?
Kind regards,
Adi
[email protected]

Hi Adi,
Please go through these PDFs:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/519d369b-0401-0010-0186-ff7a2b5d2bc0
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e5b7bb90-0201-0010-ee89-fc008080b21e
Hope this helps you. Please don't forget to give points.
With regards,
Vinoth

Similar Messages

  • Best Practices Methodologies and Implementation

    Does anybody know of any Best Practices Methodologies and Implementation training courses that are available?

    Hi dear,
    please don't post the same question several times...
    Look at Best Practice Course
    Bye,
    Roberto

  • SAP SCM and SAP APO: Best practices, tips and recommendations

    Hi,
    I have been gathering useful information about SAP SCM and SAP APO (e.g., advanced supply chain planning, master data and transaction data for advanced planning, demand planning, cross-plant planning, production planning and detailed scheduling, deployment, global available-to-promise (global ATP), CIF (core interface), SAP APO DP planning tools (macros, statistical forecasting, lifecycle planning, data realignment, data upload into the planning area, mass processing – background jobs, process chains, aggregation and disaggregation), and PP/DS heuristics for production planning).
    I am especially interested in best practices, tips and recommendations for using and developing SAP SCM and SAP APO. For example, [CIF Tips and Tricks Version 3.1.1|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700006480652001E] and [CIF Tips and Tricks Version 4.0|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000596412005E] contain pretty useful knowledge about CIF.
    If you know any useful best practices, tips and recommendations for using and developing SAP SCM and SAP APO, I would appreciate it if you could share those assets with me.
    Thanks in advance for your help.
    Regards,
    Jarmo Tuominen

    Hi Jarmo,
    Apart from what DB has suggested, you should give the following a good read:
    -Consulting Notes (use the application component filters in search notes)
    -Collective Notes (similar to the one above)
    -Release Notes
    -Release Restrictions
    -If $$ permit, subscribe to www.scmexpertonline.com. Good perspective on concepts around SAP SCM.
    -There are a couple of blogs (e.g. www.apolemia.com), but all lack breadth; some cover a few topics in depth.
    -"Articles" section on this site (not all are classified well; look under ECC Ops, Mfg, SCM, Logistics, etc.)
    -service.sap.com: check the solution details overview in the Knowledge Exchange tab. There are product presentations and collateral for every release. Good breadth but no depth.
    -Building Blocks: available for all application areas. These are limited to vanilla configuration that just makes a process work, and nothing more.
    -Get the book "Sales and Operations Planning with SAP APO" by SAP Press. It's got plenty of easy-to-follow material, good perspective and lots of screenshots to make life easier.
    -help.sap.com: the last thing most people refer to after all the "handy" options (incl. this forum) are exhausted. Nevertheless, it is the superset of all "secondary" documents. But the maze of hyperlinks that starts at APO might lead you to something like an XML schema.
    Key Tip: Appreciate that SAP SCM is largely driven by the connected execution systems (SAP ECC/ERP), so the best place to start is a good overview of the ERP OPS solution, at a significant level of depth. Check the document "ERP ops architecture overview" on the SDN wiki.
    I have a good collection of documents, though many I haven't read myself. If you need them, let me know.
    Regards,
    Loknath

  • Is this a best practice of BAM implementation?

    Hello everyone:
    Currently we have done an Oracle BAM implementation. To explain briefly our implementation:
    We have an Oracle Database 8.1.7, where all transactions are recorded. We tried using JMS to import the data into data objects in the Oracle BAM repository. We did this by using a database link to an Oracle Database 10g and then through Advanced Queueing. This did not work due to performance issues: the AQ messages were not consumed as fast as they were produced, so there was no real-time data.
    Then we developed a Java component to read the table in the Oracle Database 10g and started using batch upserts into Oracle BAM through the provided web services API. This solved the performance issue mentioned above.
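    (For illustration, a rough sketch of such a polling and batch-upsert component. BamBatchClient is a hypothetical wrapper around the BAM web services API, and the JDBC URL, credentials, table and column names are placeholders only, not the actual schema.)
        import java.sql.*;
        import java.util.*;

        public class TransactionPoller {

            /** Hypothetical wrapper around the BAM web services API. */
            public interface BamBatchClient {
                void upsert(String dataObject, List<Map<String, Object>> rows);
            }

            private final String jdbcUrl;     // e.g. jdbc:oracle:thin:@host:1521:orcl (placeholder)
            private final BamBatchClient bam;
            private long lastSeenId = 0;      // high-water mark of rows already sent

            public TransactionPoller(String jdbcUrl, BamBatchClient bam) {
                this.jdbcUrl = jdbcUrl;
                this.bam = bam;
            }

            /** Reads new rows since the last poll and sends them to BAM in fixed-size batches. */
            public void pollOnce() throws SQLException {
                String sql = "SELECT txn_id, txn_data FROM txn_summary WHERE txn_id > ? ORDER BY txn_id";
                try (Connection con = DriverManager.getConnection(jdbcUrl, "user", "password");
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, lastSeenId);
                    List<Map<String, Object>> batch = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            Map<String, Object> row = new HashMap<>();
                            row.put("TXN_ID", rs.getLong("txn_id"));
                            row.put("TXN_DATA", rs.getString("txn_data"));
                            batch.add(row);
                            lastSeenId = rs.getLong("txn_id");
                            if (batch.size() == 500) {          // flush in small batches to keep each call cheap
                                bam.upsert("TxnDataObject", batch);
                                batch = new ArrayList<>();
                            }
                        }
                    }
                    if (!batch.isEmpty()) {
                        bam.upsert("TxnDataObject", batch);     // flush the remainder
                    }
                }
            }
        }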
    Currently all the data processing is done in the Oracle 10g database through PL/SQL stored procedures; data mining is applied to the transactions and the summary information is collected into several tables. These tables are updated and then imported into the Oracle BAM data objects.
    We have noticed that Oracle BAM has some performance issues when trying to view a report based on a data object with a large number of records. Is this really an issue in Oracle BAM? The average number of transactions is 200,000 records. How can we solve this issue?
    Another issue we want to raise: when viewing reports through the browser, the browser sometimes hangs or suddenly closes, and sometimes the Active Data Cached Feed window hangs or doesn't close. When this happens and we try to open another report, the report never displays. Is this a browser-side issue or a server-side issue?
    Oracle BAM is installed on a blade with 2x2 Xeon processors (4 CPUs), 16 GB RAM and Windows Server 2003 Enterprise Edition with SP2.
    How can we get a tuning guide based on best practices?
    Where can we get suggestions about our implementation?
    Thanks to anyone who can help us.

    I am facing a similar issue as well. Any pointers would be appreciated.
    Thanks.

  • Best Practice for UPK implementation

    We will start using UPK tool in our Oracle E-Business Suite (11.5.10) environment soon.
    We are in the process of configuring the tool and creating a standard template for training documents.
    For example: which screen resolution, which font size and color, bubble icon, pointer position, task bar settings, and so on.
    If anyone has a best practice document to share, I would appreciate it.

    Hi,
    Some of the standards will depend on your end-user capabilities/environments, but I have a good standards document we use when developing UPK content for the eBus suite which might help.
    Email me at [email protected] and I'll send it over.
    Jon

  • Best Practice for Launching Internal Training?

    Hello.  I have a series of 12-20 internal trainings that employees are required to take when they start with our company, and some of the trainings are required every year for different groups around the company. So I have these 20 pieces of content built, and currently have just one Training set up for each, launching that same link to folks every time I need to. More and more, the start/stop times on these are all over the place. I may have Sally and Tom starting the trainings this week with 3 weeks to complete, but a week later I could need Harry to take only 2 of the trainings, with a 1-week window. Using just that one training link negates my ability to set open/close dates and set reminders within the system...
    So, my question is for those of you that run this type of program: do you set up a new training each time you launch a piece of content out to folks? I feel like this could get messy quickly and force me to run a ton of reports. Am I missing an easier way to do this?
    Thank you.

    Hello!
    Apologies for the late reply on this. Have you considered setting up a new training course for each piece of content? To me this would seem much easier to manage. You could then set up new versions of the course for each year (month, etc.) they are required. A little bit of extra work at the start, but easier in the long run. Keeping the same course for multiple pieces of content is a tricky practice and is generally not recommended. For example, what if Sally needed to take your course with Content A at the start of the year and then Content B in the latter half, using the same course? You would have to reset Sally's training transcript in order for her to retake the course, and she would lose that transcript data.
    If you want a single point of reference for all your training courses, you could consider using the Connect Training Catalog, or developing your own using the Connect APIs.
    Hope this helps!
    Lauren

  • Best Practice for Apex Implementation

    Hello,
    I'm looking for some guidance on best practices for implementing APEX across our enterprise. Do we install it on many databases, based on whether an application gets most of its data from that database? And if so, could we use one 10gAS web server to serve up all of the instances?
    We currently have APEX installed on RAC databases in each environment (Dev, Int, QA, Prod), and then use dblinks to connect to the many remaining databases. Each RAC environment then uses a dedicated 10gAS web server (one web server per APEX installation). I'm wondering whether this is a good approach or not. Any suggestions are appreciated.
    Julie


  • Toy Store Best Practice: How to implement 'cancel' for register user page

    Let's say a user wants to register or (even) edit his/her account, but when forwarded to the register (edit) account page, he/she decides not to do it. I would like to implement a 'cancel' button and return the user to wherever he/she was before coming to this page.
    What is the best practice?
    Even worse, if the user gets to this page and never saves the entry, the model would be dirty. Then, if the user wants to commit something else to the DB, it may commit an incorrect (blank) entry for the created row. What I am getting at is: what is the best practice for keeping track of whether the model gets dirty, and for deleting invalid rows in general?
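    (For illustration, a minimal sketch of one common way to keep the model clean on cancel, assuming ADF Business Components and the oracle.jbo API; the helper class and method names are made up.)
        import oracle.jbo.ApplicationModule;

        public class RegistrationHelper {

            /**
             * Called from the Cancel action of the register/edit page.
             * Rolling back the pending changes discards the blank/edited row,
             * so a later commit cannot pick up an invalid entry.
             */
            public static void cancel(ApplicationModule am) {
                if (am.getTransaction().isDirty()) {   // only roll back if the model really is dirty
                    am.getTransaction().rollback();
                }
                // then navigate the user back to wherever he/she came from
                // (e.g. a return URL stored before forwarding to this page)
            }
        }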

    You might want to read this thread:
    Cancel operation followed by refresh raises JBO-33035
    (very similar discussion I was having with Steve)

  • Best practices for Indirection implementation?

    Hi, I'm about to start trying out several aspects of indirection in TopLink. My question is: what are the best practices for implementing this feature?
    To me, proxy indirection looks like the cleanest way to do this; I'm not sure, however, whether there are any restrictions to it besides the need for an interface for each domain class. My goal is to keep the amount of TopLink code 'clutter' in my domain model as low as possible.
    Thanks.

    Although proxy indirection is nice for 1:1 as it reduces the 'clutter', there may be a performance hit. It really depends upon your JDK.
    Personally I prefer to use ValueHolder for 1:1 and transparent collection indirection for my collection mappings. The attribute that you make a ValueHolder is private, and the API of your class does not need to expose its existence in any way. I find this best as I do not need to manage an additional interface and keep its API in sync.
    In the very near future you will be able to use our EJB 3.0 implementation, which leverages AOP-style weaving to dynamically enhance your mapped classes during loading. This will allow you to have 1:1 indirection without the interface or ValueHolder. Since changing the type of the attribute and the implementation of the get/set methods is probably less intrusive for your future migration than factoring out an unnecessary interface, I would stick with the ValueHolder for now.
    Doug
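    (For illustration, a minimal sketch of the ValueHolder approach described above, assuming TopLink's oracle.toplink.indirection package; Order and Customer are made-up domain classes, and the corresponding 1:1 mapping would be configured to use basic indirection.)
        import oracle.toplink.indirection.ValueHolder;
        import oracle.toplink.indirection.ValueHolderInterface;

        public class Order {
            // The ValueHolder stays private; the public API exposes only Customer,
            // so callers never see the indirection machinery.
            private ValueHolderInterface customer = new ValueHolder();

            public Customer getCustomer() {
                // Triggers the lazy database read the first time it is accessed.
                return (Customer) customer.getValue();
            }

            public void setCustomer(Customer newCustomer) {
                customer.setValue(newCustomer);
            }
        }

        // Customer is assumed to be another mapped domain class.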

  • Search for ABAP Webdynpro Best practice or/and Evaluation grid

    Hi Gurus,
    Managers and team leaders are facing the development of SAP applications on the web, and functional people are proposing web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing complaints about Web Dynpro response times: the business wants a 3-second response time and we have 20 or 25 seconds.
    I want to give functional people a kind of recommendation document explaining that in certain cases the use of Web Dynpro will not be a benefit for the business.
    I know that the amount of data transferred, the complexity of the screen and also the hardware are among the key factors, but I would appreciate some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot; I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communicate with the computer --- opposite Exchange server of the pair of Exchange servers ---  using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    Hi David,
    I recommend you refer to the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can't reach the remote RPCSS service on the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
    The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online. However, an RPC communication issue exists between these two servers, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even though the TCP connection to the remote server has no problem, if RPC authentication runs into a problem, we may get an error status code like 0x80070721, which means "A security package specific error occurred" during RPC authentication; DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    -Ensure that the remote computer is online.
    -This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    -Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    1. Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    2. At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    3. A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    1. Click Start, and then click Run.
    2. Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    3. In the console tree, click Inbound rules.
    4. In the list of firewall exception rules, look for COM+ Network Access (DCOM In).
    5. If the firewall exception rule is not enabled, in the details pane click Enable rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    1. Click Start, and then click Run.
    2. Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    3. Under Console Root, expand Event Viewer (Local).
    4. In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    5. The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Best Practice: DAG and AutoDatabaseMountDial Setting

    Hi,
    I am working with management in my organization with regard to our DAG failover settings. By default, the failover setting is set to 'Good Availability' (missing 6 logs or fewer per DB). My organization did not feel comfortable with data loss, so we changed it to 'Lossless'.
    Of course, we then had a SAN failure, we lost a DAG member, and nothing failed over to the surviving DAG member. Even EventID 2092 reported the same log sequence for many databases, yet the surviving DAG member did not mount. Example:
    Database xxxmdb28\EX-SRV1 won't be mounted because the number of lost logs was greater than the amount specified by the AutoDatabaseMountDial.
    * The log file generated before the switchover or failover was: 311894
    * The log file successfully replicated to this server was: 311894
    Only after the SAN was restored and the failed server came back did another 2092 EventID get logged stating the log sequence was not 311895 (duh! - because the database got mounted again). We opened a support case with Microsoft and they suggested no databases mounted because the surviving DAG member could not communicate with the non-surviving member (which is crazy to me, because isn't that THE POINT of the DAG?). Maybe there is always a log file in memory (?), so AutoDatabaseMountDial set to Lossless will never automatically mount any database? Who knows.
    In any case, we are trying to talk ourselves into setting it back to 'Good Availability'. Here is where we are at now:
         2-member DAG hosting 3000 mailboxes on about 36 databases (~18 active on each node) in the same AD site (different physical buildings), with a witness server
         AutoDatabaseMountDial set to Lossless
         Transport dumpster set to 1.5x maximum message size
    What level of confidence can we have that we will not lose data with a properly configured transport dumpster and a setting of 'Good Availability'? I am also open to other suggestions, such as changing our active-active DAG to active-passive by keeping all active copies on one server.
    Also, has anyone experienced any data loss with 'Good Availability' and a properly configured transport dumpster?
    Thanks for the guidance.

    Personally I have not experienced loss in this scenario and have not changed this setting from "Good Availability". I know that setting the transport dumpster to 1.5x is the recommended setting. Also, there is a shadow queue for each transport server in your environment, which verifies that the message reaches the mailbox before clearing the message.
    To give an example of mail flow (assuming these are multi-role servers; it still applies to split roles): you have Server1 and Server2, with an external user sending mail to a user on Server1. The message will pass through all your external hops, etc., and then get to the transport servers. If it is delivered to Server1, the message has to be sent to Server2 and then back to Server1 to be delivered to the mailbox, so that it hits the shadow queue of Server2. If the message hit Server2 first, then it would be sent to Server1 and then to the mailbox.
    If either of your servers is down for a period of time, the shadow queue will try to resend the messages, and that is why you wouldn't have any data loss.
    Jason Apt, Microsoft Certified Master | Exchange 2010
    My Blog

  • Best practice question for implementing a custom component

    I'm implementing a custom component which renders multiple <input type="text" .../> controls as part of it. The examples I've seen that do something similar use the ResponseWriter to generate the markup "by hand" like:
         writer.startElement("input", component);
         writer.writeAttribute("type", "text", null);
         writer.writeAttribute("id", "foo", null);
         writer.writeAttribute("name", "foo", null);
         writer.writeAttribute("value", "hello", null);
         writer.writeAttribute("size", "20", null);
         writer.endElement("input");
    I don't know about anyone else, but I HATE having to write code that manufactures this stuff - seems to me that there are already classes that do this, so why not just use those? For example, the above could be replaced with:
         HtmlInputText textField = new HtmlInputText();
         textField.setId("foo");
         textField.setValue("hello");
         textField.setSize(20);
         // just to be safe, invoke both encodeBegin() and encodeEnd(),
         // though it seems like encodeEnd() actually does the work in this case,
         // but who knows if they might change it at some point
         textField.encodeBegin(context);
         textField.encodeEnd(context);
    So my question is, why does everyone seem to favor the former over the latter? Why not leverage objects that already do the (encoding) work for you?

    Got it!
    Your JSP should have this:
    <h:panelGroup styleClass="jspPanel" id="jspPanel1"></h:panelGroup>
    And your page code's ValueChangeListener/ActionListener should have this:
          if (findComponent(getForm1(), "myOutputText") == null) {
               FacesContext facesCtx = FacesContext.getCurrentInstance();
               System.out.println("Adding component");
               HtmlOutputText output =
                    (HtmlOutputText) facesCtx.getApplication().createComponent(
                         HtmlOutputText.COMPONENT_TYPE);
               output.setId("myOutputText");
               output.setValue("It works");
               getJspPanel1().getChildren().add(output);
               System.out.println("Done");
               DebugUtil.printTree(FacesContext.getCurrentInstance().getViewRoot(), System.out);
          } else {
               System.out.println("component already added");
          }
    I just have to figure out this IOException on the closed stream - it probably has to do with [immediate="true"].
    Thanks.
    [9/15/04 13:05:53:505 EDT] 6e436e43 SystemErr R java.io.IOException: Stream closed
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.Throwable.<init>(Throwable.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.Throwable.<init>(Throwable.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.ensureOpen(JspWriterImpl.java:294)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:424)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:452)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.faces.component.UIJspPanel$ChildrenListEx.add(UIJspPanel.java:114)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at pagecode.admin.Test.handleListbox1ValueChange(Test.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.AccessibleObject.invokeImpl(Native Method)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.AccessibleObject.invokeV(AccessibleObject.java:199)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.Method.invoke(Method.java:252)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:126)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIInput.broadcast(UIInput.java:492)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:284)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIViewRoot.processDecodes(UIViewRoot.java:342)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:79)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:200)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictServletInstance.doService(StrictServletInstance.java:110)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet._service(StrictLifecycleServlet.java:174)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.IdleServletState.service(StrictLifecycleServlet.java:313)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet.service(StrictLifecycleServlet.java:116)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ServletInstance.service(ServletInstance.java:283)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ValidServletReferenceState.dispatch(ValidServletReferenceState.java:42)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ServletInstanceReference.dispatch(ServletInstanceReference.java:40)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.handleWebAppDispatch(WebAppRequestDispatcher.java:948)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.dispatch(WebAppRequestDispatcher.java:530)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:176)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srt.WebAppInvoker.doForward(WebAppInvoker.java:79)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srt.WebAppInvoker.handleInvocationHook(WebAppInvoker.java:201)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.cache.invocation.CachedInvocation.handleInvocation(CachedInvocation.java:71)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srp.ServletRequestProcessor.dispatchByURI(ServletRequestProcessor.java:182)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.oselistener.OSEListenerDispatcher.service(OSEListener.java:334)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.http.HttpConnection.handleRequest(HttpConnection.java:56)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.http.HttpConnection.readAndHandleRequest(HttpConnection.java:610)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.http.HttpConnection.run(HttpConnection.java:435)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:593)
    [9/15/04 13:05:56:146 EDT] 6e436e43 SystemOut O Done

  • Best Practice for Package Implementation of Data Manipulation

    Hi,
    I would like to ask which is the better implementation for data manipulation (insert, update, delete) stored procedures for a single table:
    to create a single procedure with an input parameter for the action, such as 1 for insert, 2 for update and so on,
    or
    to create separate procedures for each, such as procedure pInsData for insert, pUpdData for update...

    Hi,
    Whenever you create a procedure, it resides as a separate object in the database.
    In my opinion it is better to create a single procedure which takes care of all DML concerning a table, rather than creating a different procedure for each DML operation.
    If your DML operations are numerous and interrelated, then it is better to create a package and put all the related DML procedures concerning one transaction or table in that package. This is because whenever you call a package, the entire package is loaded into memory for that session. If you create separate standalone procedures for DML instead, each procedure has to be called (and loaded) individually every time you want it executed.
    Twinkle

  • Best practice DataGroup and custom layout

    Hello,
    Some time ago, I made a little timeline component in Flex 3. The component was very limited in terms of possibilities, mainly because of performance issues.
    With Flex 4 and layout virtualization, I thought it would be easier to create my component.
    But I'm confronted with some problems.
    I am trying to extend DataGroup with a fixed custom layout, but I have 2 problems:
    -The first one is that the size and position of elements depend on the dataProvider of the component, so my layout needs a direct dependency on the component.
    -And the main one is that the depth of each element depends on the data, and I don't see a way of updating it in my layout.
    Should I stop using DataGroup and implement it directly as a UIComponent and IViewport implementation with a sort of manual virtualization?
    Should I override the dataProvider setter to sort it the way the element depth should be set?
    Should I use BasicLayout and access properties directly from the DataGroup in the itemRenderer to set top, left, width and height?
    I'm a little lost and any advice would be a great help.
    Thanks.

    user8410696,
    Sounds like you need a simple resource loading histogram with a threshold limit displayed for each project. I don't know if you are layout savvy, but layouts can be customized to fit your exact needs. Sometimes P6 cannot display exactly what you desire, but most basic layouts can be tweaked to fit. I believe that you need a combo of resource profiles and columns.
    I'll take some time today and do some tweaking on my end to see if I can modify one of my resource layouts to fit your needs. I'll let you know what I find out.
    Talk to you later,
