Multiple JDK versions on Solaris -- best practices and advice

I am a newcomer to Solaris system administration (not by choice--I am normally just a Java programmer, but am now responsible for testing code on a new Solaris box), so apologies for the newbie questions below.
#1: Is it typical for a brand new Solaris install to have multiple versions of Java on it?
After installation, which left me with this version of Solaris:
     SunOS asm03 5.10 Generic_120011-14 sun4v sparc SUNW,SPARC-Enterprise-T5220
I find from pkginfo that there are 2 old versions of Java installed:
     SUNWj3dev     J2SDK 1.4 development tools
SUNWj3dmo     J2SDK 1.4 demo programs
SUNWj3dvx     J2SDK 1.4 development tools (64-bit)
SUNWj3irt     JDK 1.4 I18N run time environment
SUNWj3jmp     J2SDK 1.4 Japanese man pages
SUNWj3man     J2SDK 1.4 man pages
SUNWj3rt      J2SDK 1.4 runtime environment
SUNWj3rtx     J2SDK 1.4 runtime environment (64-bit)
SUNWj5cfg     JDK 5.0 Host Config. (1.5.0_12)
SUNWj5dev     JDK 5.0 Dev. Tools (1.5.0_12)
SUNWj5dmo     JDK 5.0 Demo Programs (1.5.0_12)
SUNWj5dmx     JDK 5.0 64-bit Demo Programs (1.5.0_12)
SUNWj5dvx     JDK 5.0 64-bit Dev. Tools (1.5.0_12)
SUNWj5jmp     JDK 5.0 Man Pages: Japan (1.5.0_12)
SUNWj5man     JDK 5.0 Man Pages (1.5.0_12)
SUNWj5rt      JDK 5.0 Runtime Env. (1.5.0_12)
SUNWj5rtx     JDK 5.0 64-bit Runtime Env. (1.5.0_12)
Both of these versions are years old; I am surprised that there is not just a single version of JDK 1.6 installed; it only came out, what, going on 2 years ago? I definitely need JDK 1.6 for all of my software to run.
On my Windows and Linux boxes, I never usually have multiple JDKs; I always deinstall the current one before installing a new one. So, I first tried to deinstall JDK 1.4 by executing
     pkgrm SUNWj3dev SUNWj3dmo SUNWj3dvx SUNWj3irt SUNWj3jmp SUNWj3man SUNWj3rt SUNWj3rtx
The package manager detected dependencies like
WARNING:
     The <SUNWmccom> package depends on the package currently being removed.
WARNING:
     The <SUNWmcc> package depends on the package currently being removed.
[+ 8 more]
and I decided to abort the deinstallation because I have no idea what all these other programs are, and I do not want to cripple my system.
If anyone has any idea what programs Sun is shipping that still depend on JDK 1.4, please enlighten me.
#2: Is there any easy way to not only deinstall, say, JDK 1.4 but also deinstall all packages which depend on it?
Maybe this is too dangerous.
#3: Is there at least a way that I can find all the programs which depend on an entire group of packages like
     SUNWj3dev SUNWj3dmo SUNWj3dvx SUNWj3irt SUNWj3jmp SUNWj3man SUNWj3rt SUNWj3rtx?
The above functionality would have come in real handy if I could have done it before doing what I describe next.
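My best guess, after reading the depend(4) man page, at how one might do #3 is to scan each installed package's depend file for "P" (prerequisite) lines naming a JDK package; a minimal, untested sketch:
     #!/bin/sh
     # For each JDK 1.4 package, report installed packages whose depend
     # file lists it as a prerequisite ("P" lines). Untested sketch;
     # /var/sadm/pkg is the standard SVR4 package database on Solaris 10.
     for jdkpkg in SUNWj3dev SUNWj3dmo SUNWj3dvx SUNWj3irt \
                   SUNWj3jmp SUNWj3man SUNWj3rt SUNWj3rtx
     do
         echo "=== packages depending on $jdkpkg ==="
         for depfile in /var/sadm/pkg/*/install/depend
         do
             # crude match; may also catch the name inside a description
             if grep "^P.*$jdkpkg" "$depfile" > /dev/null 2>&1
             then
                 # the package name is the directory above install/depend
                 echo "$depfile" | sed 's|/var/sadm/pkg/\([^/]*\)/.*|\1|'
             fi
         done
     done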
I next decided to try removing JDK 1.5, so I executed
     pkgrm SUNWj5cfg SUNWj5dev SUNWj5dmo SUNWj5dmx SUNWj5dvx SUNWj5jmp SUNWj5man SUNWj5rt SUNWj5rtx
I thought that this command would warn me about dependencies on ANY of the packages listed. It doesn't. Instead, it merely checks the first one, and if no dependencies are found, removes it before marching down the list. In the case above, it happily removed SUNWj5cfg because there were no dependencies on it. Then it stalled on SUNWj5dev because it found dependencies like:
WARNING:
     The <SUNWmctag> package depends on the package currently being removed.
WARNING:
     The <SUNWmcon> package depends on the package currently being removed.
[+ 3 more]
#4: Have I left my JDK 1.5 crippled by removing SUNWj5cfg? Or was this pretty harmless?
#5: Was I fairly stupid to attempt the deinstallations above in the first place? Do Solaris people normally leave old JDKs in place?
#6: Or is it the case that those dependency warnings are harmless: I can go ahead and remove all old JDKs, because Java programs will always find the new JDK and should run just fine with it?
#7: What's the deal with Solaris and having multiple packages for something like the JDK? With Windows, for instance, the entire JDK has a single installer and deinstaller program. It's much easier to work with than the corresponding Solaris stuff. Do Solaris people simply need that much finer-grained control over what gets installed and what doesn't? (Actually, with the Windows JDK, the GUI installer can let you install selected components should you wish; I am just not sure how scriptable this is versus the Solaris stuff, which may be more sysadmin-friendly if you have to administer many machines.)

The easiest thing to do is to just install the latest into a clean directory.
I believe different versions of the JDK install into their own separate directories by default. All one needs to do is recreate the symbolic links that point to the version they want to use. The Java install documentation has the details.
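For example (on Solaris 10 the bundled JDKs live under /usr/jdk, and /usr/java is conventionally a symlink into that tree; the JDK 6 directory name below is illustrative):
     # install/unpack JDK 6 under /usr/jdk (pkgadd or self-extracting sh)
     cd /usr/jdk
     # ... run the JDK 6 installer here ...
     # repoint the system default symlink
     rm /usr/java
     ln -s /usr/jdk/jdk1.6.0_07 /usr/java
     # verify
     /usr/java/bin/java -version
The old 1.4 and 1.5 trees can then be left in place for whatever still depends on them.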

Similar Messages

  • What is the best practice and Microsoft best recommended procedure of placing "FSMO Roles on Primary Domain Controller (PDC) and Additional Domain Controller (ADC)"??

    Hi,
    I have Windows Server 2008 Enterprise and have 2 Domain Controllers in my company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My (PDC) was down due to hardware failure, but somehow I got a chance to get it up, and I transferred the (5) FSMO roles from (PDC) to (ADC).
    Now my (PDC) is rectified and up with the same configuration and settings. (I did not install a new OS or domain controller on the existing PDC server.)
    Finally, I want to move the (FSMO roles) back from (ADC) to (PDC) to get my (PDC) up and operational as primary.
    (Before the disaster my PDC had all 5 FSMO roles.)
    Here I want to know the best practice and Microsoft recommended procedure for the placement of "FSMO roles both on (PDC) and (ADC)"?
    In case the primary (DC) fails, the other additional (DC) should automatically take over without any problem in the live environment.
    For example, what should the FSMO role distribution between both servers be?
    Primary Domain Controller (PDC) should contain:
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) should contain:
    RID Master
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft recommended procedure for the placement of "FSMO roles".
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice and Microsoft best recommended procedure for the placement of "FSMO Roles both on (PDC) and (ADC)"?
    There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend making it simple and have a single DC holding all the FSMO roles.
    In case if Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    No, this is not true. Each FSMO role is unique, and if a DC fails, its FSMO roles will not be automatically transferred.
    There are two approaches that can be followed when an FSMO role holder is down:
    1. If the DC can be recovered quickly, then I would recommend taking no action.
    2. If the DC will be down for a long time or cannot be recovered, then I would recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought up and online again if the FSMO roles were seized. Otherwise, your AD may face huge impacts and side effects.
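    For reference, a transfer with ntdsutil looks roughly like this (the server name is a placeholder; for a seize, substitute "seize" for "transfer"):
         C:\> ntdsutil
         ntdsutil: roles
         fsmo maintenance: connections
         server connections: connect to server PDC01
         server connections: quit
         fsmo maintenance: transfer pdc
         fsmo maintenance: transfer rid master
         fsmo maintenance: transfer infrastructure master
         fsmo maintenance: transfer naming master
         fsmo maintenance: transfer schema master
         fsmo maintenance: quit
         ntdsutil: quit
         C:\> netdom query fsmo
    The last command verifies where the roles now live.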
    This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method to achieving this.  What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over? (This would sort of give you a backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT for example I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Best Practices and Usage of StreamWork?

    Hi All -
    Is this forum a good place to inquire about best practices and use of StreamWork? I am not a developer working with the APIs, but rather have set up a StreamWork activity for my team to collaborate on our activities.
    We are thinking about creating a sort of FAQ on our team activity, and I was thinking of using either a table or a collection for this. I want it to be easy for team members to enter the question and the answer (our team gets a lot of questions from many groups, and over time I would like to build up a sort of knowledge base).
    Does anyone have any suggestions for such a concept in StreamWork? Has anyone done something like this and can share experiences?
    Please let me know if I should post this question in another place.
    Thanks and regards,
    Rob Stevenson

    Activities have a limit of 200 items that can be included. If this is the venue you wish to use, it might be better to use a table rather than individual notes/discussions.

  • Coherence Best Practices and Performance

    I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimization when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even for people who are not that new) one of the best things you can do is read this book http://www.packtpub.com/oracle-coherence-35/book I know it says Coherence 3.5 and we are currently on 3.7 but it is still very relevant.
    You don't need to go through all the documentation but at least try the introductions and try out some of the examples. You need to know the basics otherwise it makes it harder for people to either understand what you want or give you detailed enough answers to questions.
    For performance optimizations it depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance but as with anything there are trade-offs. Coherence on the server-side is a Java process and often when tuning, sorting out issues and performance I spend a lot of time with the usual tools for Java such as VisualVM (or JConsole), tuning GC, looking at thread dumps and stack traces.
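    For example, the stock JDK command-line tools referred to above can get you a long way (the pid is a placeholder):
         jps -l                      # find the pid of the Coherence JVM
         jstack 12345 > threads.txt  # capture a thread dump / stack traces
         jstat -gcutil 12345 5000    # sample GC utilisation every 5 seconds
         jvisualvm                   # attach VisualVM interactively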
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • Subversion best practices and assumptions?

    I am using SQL Developer 3.0.04, accessing Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production. I am taking over a project and setting up version control using Subversion. The project uses 4 schemas for its tables, PL/SQL objects, etc. When saving a PL/SQL object (a package specification, for example) in SQL Developer (using the File->Save As menu option), the default name is PACKAGE_NAME.sql. The schema name is not automatically made part of the file name. In looking at the SQL Developer preferences, I do not see a way to change this.
    In viewing the version control OBE, which uses files from the HR schema, there is an implicit assumption that the files all affect the same schema. Thus the repository directory only contains files from that one schema. Is this the normative/best practice for using Subversion with Oracle and SQL Developer? I want to set up our version-control environment to minimize the likelihood of "user(programmer) error".
    Thus, in our environment, should I :
    1) set up Subversion sub-directories for each schema within my Subversion project (a sketch of what I am picturing follows below), given that each release (we are an Agile project, releasing every 2 weeks) may contain objects from multiple schemas?
    2) rename each object to include the schema name in the object?
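    For option 1, the hypothetical layout I am picturing would be something like:
         trunk/
           schema_a/
             packages/
             tables/
           schema_b/
             packages/
             tables/
         branches/
         tags/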
    Any advice would be gratefully appreciated.
    Vin Steele

    Hi
    It makes sense to have the HCM system in the same system as rest of the components because
    1) We are able to make use of the tight integration between various components, most importantly Payroll - Finance.
    2) We can manage without tiresome ALE/interface development and management.
    3) lesser hardware cost (probably)
    It makes sense to have HCM in different systems because
    1) because of different sequence of HRSP/LCP compared to other systems, we can have a separate strategy for HRSP application independent of other components. We can save a lot of effort in regression testing as only HR needs to be tested after patch application.
    2) In many countries there are strict data protection laws, and having HR in a separate system ensures that people from other functions do not have access to HR data even accidentally, as they will not have user ids in the HR system.
    Hope this is enough to get you started.

  • Real time logging: best practices and questions ?

    I have 4 pairs of DS 5.2p6 servers in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory server access logs in real time.
    At first glance, each directory generates about 1.1 MB of access log per second.
    1) I'd like to know if there are known best practices / experiences in such a case.
    2) Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3) In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size and you receive about 1 MB/sec in writes. Say max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
    Ramdisk on Windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on Solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
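    A minimal sketch of the Solaris ramdisk idea above (names and sizes are illustrative; see ramdiskadm(1M)):
         # create a 2 GB ramdisk, put UFS on it, and mount it for the DS logs
         ramdiskadm -a dslogs 2g
         newfs /dev/rramdisk/dslogs    # answer y to the newfs prompt
         mkdir -p /ds-logs-ram
         mount /dev/ramdisk/dslogs /ds-logs-ram
         # cron entry (every minute): sweep rotated logs off the ramdisk
         # * * * * * cp -p /ds-logs-ram/access.* /var/ds-logs-archive/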
    I should ask, how realtime should this log correlation be?

  • Applying common styles to multiple HNSCs: What is the best practice?

    Hi Community
    Adhering to best practices, we have built a SharePoint 2013 intranet with multiple Host Named Site Collections, all accessible via HTTPS, for example
    https://home.domain.com   -  Landing Page
    https://this.domain.com
    https://that.domain.com
    https://other.domain.com
    We have noticed issues with the home page on each site having an effect on the Metadata Navigation Menu, so thought it was time we reviewed our references.
    OK, we want to have a common master page and CSS, JavaScript, fonts etc. throughout the intranet. So what is the best way of implementing this, and what is a candidate provisioning strategy, say from Dev?
    My thoughts are to copy a common custom master page to each Master Page Gallery, but with options as to how we reference external files.
    Option 1: replicate on each of the HNSCs: local copies of CSS, JS etc. in /SiteAssets/ and/or /Style Library/styles.css
    Or
    Option 2: explicit reference to the styles held on the home site collection. The master page might have a common reference to
    https://home.domain.com/SiteAssets/css/styles.css
    Or
    Option 3: use the _layouts file structure - not my favourite, as it is not accessible in SPD 2013 or via the SP2013 built-in document management. Use the hive and not the content database structure. Hence, all master pages would have references similar to _layouts/15/styles/mystyles.css and _layouts/15/images/client/home.jpg
    Would be interested to hear your thoughts, as clearly there is more than one way to achieve styles nirvana!
    Daniel
    Freelance consultant

    Hi Daniel,
    If you need to use the master page for multiple site collections, then you'd better choose option 2 or option 3, as you do not need to make copies of the CSS or JS files and re-upload them to each site collection.
    And per my knowledge, option 3 is better: because the CSS or JS files are stored on the local file system of the SharePoint server in option 3, it is faster than referring to a file stored in the database as in option 2.
    Generally, though, it depends on your situation, since you don't like option 3 when the CSS or JS files are not accessible in SPD.
    Thanks,
    Victoria
    TechNet Community Support

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain), and Lync Phone Edition devices.
    We want to deploy a custom music on hold file. What's the best way of doing this? I'm thinking of:
    Placing the file on a share on one of the Lync servers. However this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? Site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at a site to use that UNC path? Also, how do clients know what site they are in?
    Alternatively, I was thinking of pushing out the WMA file to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated, particularly around how users are assigned to sites and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    For Lync Phone Edition or other devices that don't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
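    As for the per-site policy syntax asked about above, the usual shape in the Lync Server Management Shell is something like the following sketch (the site name and path are placeholders; verify the parameters against your Lync version):
         New-CsClientPolicy -Identity site:LondonSite `
             -EnableClientMusicOnHold $true `
             -MusicOnHoldAudioFile "\\serverB\MoH\file.wma"
    As far as I know, a site-scoped client policy applies based on the Lync topology site of the user's home pool; note that LPE devices do not use the client policy MoH file.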
    Best regards,
    Eric

  • PI best practice and integration design ...

    I'm currently on a site that has multiple PI instances, one for each region, and the question of inter-region integration has been raised. My initial impression is that each PI will be in charge of integration of communications for its regional landscape, and inter-region communications will be conducted through a PI - PI interface. I haven't come across any best practice in this regard and have never been involved with a multiple-PI landscape ...
    Any thoughts? Or links to best practice for this kind of landscape? ...
    To summarize: I think this is the best way to set it up; although numerous other combinations are possible, this seems to be the best way to avoid any significant system coupling. When talking about ECC - ECC inter-region communications:
         AUS ECC -> AUS PI -> USA PI -> USA ECC

    abhishek salvi wrote:
    I need to get data from my local ECC to USA ECC, do i send the data to their PI/my PI/directly to their ECC, all will work, all are valid
    If LocalECC --> onePI --> USA ECC is valid, then you don't have to go for another PI in between... why increase the processing time... and it seems to be a good option to bet on.
    The issue is:
    1. Which PI system should any given piece of data be routed through, and how do you manage the subsequent spider web of interfaces resulting from PI AUS talking to ECC US, ECC AU, BI US, BI AU, and the reverse for the PI USA system?
    2. Increased processing time: the Integration Engine - Integration Engine hop should be minimal, and it will mean a consistent set of interfaces for support and debugging, not to mention the simplification of the SLD contents in each PI system.
    I tend to think of it like network routing: the PI system is the default gateway for any data not bound for a local system; you send, and let PI figure out what the next step is.
    abhishek salvi wrote:
    But then what about this statement (is it a restriction or business requirement): Presently the directive is that each PI will manage communications with its own landscape only respectively
    When talking about multiple landscapes (dev / test / qa / prod), each landscape generally has its own PI system; this is an extension of the same idea, except that here both systems are productive. From an interface and customisation point of view, given the geographical remoteness of each system, local interface development for local systems and support makes sense. Whilst not limited to this kind of interaction, typically the interfaces for a given business function at a given location (location-specific logic) would be developed in concert with that interface, and as such have no real place on the remote system (PI).
    To answer your question there is no rule, it just makes sense.

  • JSP Best Practices and Oracle Report

    Hello,
    I am writing an application that obtains information from the user via a JSP/HTML form, which is then submitted to a database. The JSP page is set up using JSP Best Practices, with the SQL statements, database connectivity information, and most of the Java source code in a Java bean/Java class. I want to use Oracle Reports to call this bean and generate a JSP page displaying the information the user requested from the database. Would you please offer me guidance for setting this up?
    Thank you,
    Michelle

    JSP Best Practices.
    More JSP Best Practices
    But the most important Best Practice has already been given in this thread: use JSP pages for presentation only.

  • OIM best practice and E-Business

    I have a business requirement to provision different types of users in EBS. There are different applications developed within EBS for which the user provisioning flow may vary slightly.
    What is the best practice with regard to creating resource objects and forms? Should I create a separate RO and set of forms for each set of users?

    EBS and SAP implementations with complex and varying approval workflows are clearly among the most challenging applications of OIM. There are a number of design patterns, but without a lot of detail about your specific implementation it is very hard to say which pattern is the most appropriate.
    (Feel free to contact me on [email protected] if you want to discuss this in more detail but don't want to put all the detail in a public forum.)
    Best regards
    /M

  • SAP best practice and ASAP methodology

    Hi,
    Can anybody please explain to me:
    1. What is SAP best practice?
    2. What is the ASAP methodology?
    Regards
    Deep

    Dear,
    Please refer these links,
    [SAP best practice |http://www12.sap.com/services/bysubject/servsuptech/servicedetail.epx?context=0DFA5A0C701B93893897C14DC7FFA7D62DC24E6E9A4B8FFC77CA0603A1ECCF58A86F0DCC6CCC177ED84EA76F625FC1E9C6DCDA90C9389A397DAB524E480931FB6B96F168ACE1F8BA2AFC61C9F8A28B651682A04F7CEAA0C4%7c0E320720D451E81CDACA9CEB479AA7E5E2B8164BEC98FE2B092F54AF5F9035AABA8D9DDCD87520DB9DA337A831009FFCF6D9C0658A98A195866EC702B63C1173C6972CA72A1F8CB611798A53C885CA23A3C0521D54A19FD1B3FD9FF5BB48CFCC26B9150F09FF3EAD843053088C59B01E24EA8E8F76BF32B1DB712E8E2A007E7F93D85AF466885BBD78A8187490370C3CB3F23FCBC9A1A0D7]
    [ASAP methodology|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/asap%2bfocus%]
    ASAP is one methodology used in implementing SAP.
    The ASAP methodology adheres to a specific road map that addresses the following five general Phases:
    Project Preparation, in which the project team is identified and mobilized, the project Standards are defined, and the project work environment is set up;
    Blueprint, in which the business processes are defined and the business blueprint document is designed;
    Realization, in which the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined;
    Final Preparation, in which final integration testing, stress testing, and conversion testing are conducted, and all end users are trained; and
    Go-Live and Support, in which the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.
    ASAP incorporates standard design templates and accelerators covering every functional area within the system, as well as supporting all implementation processes. Complementing the ASAP accelerators, the project manager can create a comprehensive project plan, covering the overall project, project staffing plan, and each sub-process such as system testing, communication and data migration. Milestones are set for every work path, and progress is carefully tracked by the project management team.
    Hope it will help you.
    Regards,
    R.Brahmankar

  • Great new resources on OTN: best practices and OPM project polishing tips

    Two great new resources are now available on OTN.
    Oracle Policy Modeling Best Practice Guide
    A clearly laid out paper that walks through a series of valuable recommendations. It will help you to design and model rules that maximize the advantages of using Oracle Policy Automation's unique natural language approach. Leverages more than 10 years of practical experience in designing and delivering enterprise policy models using OPA. Highly recommended reading for all skill levels.
    Tips for Polishing a Policy Modeling Project
    This presentation contains dozens of useful tips for delivering rich and natural-feeling interactive interviews and other decision-making experiences with OPA.
    See the links at the top of the New and Featured section on the OPA overview tab, and also at the top of the Learn more section.
    http://www.oracle.com/technetwork/apps-tech/policy-automation/overview/index.html
    Jasmine Lee has digested much of her 10 years experience into these fantastically useful new materials - and they're free!
    Davin Fifield

    Thanks Davin for posting this info!
    Thanks Jasmine, these materials are very nice.

  • Oracle BPEL standard, best practice and naming convention

    Hi, folks,
    Is there any standard or best practice associated with Oracle BPEL regarding development, performance, what to avoid, etc.? And is there any naming convention for process, variable, and partner link names, similar to naming conventions in writing Java code?
    Thanks
    John

    Hi,
    Here is the best practice guide:
    http://download.oracle.com/technology/tech/soa/soa_best_practices_1013x_drop3.pdf
    Thanks & Regards,
    Dharmendra
    http://soa-howto.blogspot.com
