E-Recruitment 6.0 Architecture considerations

I am looking for documentation that covers the points to be considered while designing the E-Recruitment architecture. Target versions:
E-Recruitment 6.0
ERP/HCM: ERP 6.0 EhP4 on NW 7.0 EhP1
ESS/MSS portal (on the intranet): NW 7.0 SP??
Existing SAP portal (with KM): NW 7.0 SP18
Here are our current considerations:
1) Standalone vs. integrated: We are deciding to go for the integrated approach, i.e. E-Recruitment and ERP/HCM on the same instance. With this we are accepting the challenges of version dependencies, and taking advantage of fewer systems and therefore lower administration/configuration (RFC/ALE) cost.
2) Frontend (UI) or without frontend: If the role of the frontend is just a proxy, then we can use ISA or Apache instead of an additional SAP instance in the landscape. With ISA or Apache we can maintain the reverse proxy rules, so the external candidate will not see the hostname and other sensitive details in the URL (a minimal sketch of this idea appears after the follow-up post below).
Can someone please shed some light on this, and point me to the right documentation?
Thanks in advance.
Shrikrishna

Thanks Sunny for your reply.
My question was about the necessity of the E-Recruitment frontend.
My understanding is as follows:
The E-Recruitment frontend is an ABAP+Java (NW 7.0) installation with the E-Recruitment add-on (the same installation as the E-Recruitment backend). The E-Recruitment frontend (UI) contains the user data and the Web Dynpro repositories required by the E-Recruitment application. This means every E-Recruitment implementation needs this UI, the Web Dynpros, and the user data. This frontend piece will be accessed by external candidates and non-registered recruiters (anonymous users) from the internet, and so it MUST be in the DMZ (in other words, accessible from the internet).
The question is: when we use the integrated E-Recruitment option (i.e. install ERP 6.0 with HCM and the E-Recruitment backend/frontend for internal candidates and ALSO the E-RECRUITMENT FRONTEND for EXTERNAL CANDIDATES on the same instance), are we then exposing our ERP/HCM data to the internet? Am I right? Can this be addressed by proxies like Apache (see the sketch below)? Are there any BIG security concerns? Do customers go with such an approach?
Or we could deploy the E-Recruitment frontend functionality for external candidates entirely on the SAP Enterprise Portal, but it does not have any ABAP stack. Any issues?
Thanks,
Krishna
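
To make the reverse-proxy point above concrete, here is a minimal sketch in Java (the hostname, port, and use of the JDK's built-in com.sun.net.httpserver are illustrative assumptions, not part of any SAP delivery; in practice the same effect comes from ISA or Apache mod_proxy rules). External candidates talk only to the proxy host; the internal E-Recruitment hostname never appears in any URL they see:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ErecReverseProxy {
    // Hypothetical internal E-Recruitment frontend; never shown to the browser.
    private static final String BACKEND = "http://erec-internal.example.corp:8000";

    public static void main(String[] args) throws IOException {
        HttpServer proxy = HttpServer.create(new InetSocketAddress(8080), 0);
        proxy.createContext("/", ErecReverseProxy::forward);
        proxy.start();
    }

    static void forward(HttpExchange exchange) throws IOException {
        // Forward the request path to the internal host and relay the answer.
        URL target = new URL(BACKEND + exchange.getRequestURI());
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        conn.setRequestMethod(exchange.getRequestMethod());
        exchange.sendResponseHeaders(conn.getResponseCode(), 0); // 0 = chunked body
        try (InputStream in = conn.getInputStream();
             OutputStream out = exchange.getResponseBody()) {
            in.transferTo(out);
        }
    }
}

A dedicated proxy (ISA, Apache, or a hardened appliance) also gives you a single place to terminate SSL and to restrict which URL paths are reachable from the internet at all.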

Similar Messages

  • Architectural considerations on WLPS JSP tags

    Hello,
    I'm now looking at the WLPS (2.0) examples, more specifically at the
    BuyBeans example.
    I have experience developing web applications using J2EE
    (servlets-jsp-ejb).
    Something in the bean example is very disturbing to me: I've always tried
    to maintain a clear separation of concerns between the different blocks of
    an application, namely business logic, control and view. Those of you
    familiar with it will have recognized the MVC (model-view-controller)
    paradigm (a minimal sketch follows this post).
    And in the bean example, I see the JSP full of tags mixing the view and
    controller roles, and even some business logic at times.
    So my question is: is there a way of using the personalization/commerce
    server while maintaining the separation of roles described above? And if so,
    is it an efficient way of working, or does it create too much overhead from a
    development-time point of view?
    I hope I'm being clear enough. Please ask for clarifications if needed.
    Thanks in advance for sharing your opinion on this subject.
    Nicolas
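
    To illustrate the separation Nicolas is after, here is a minimal sketch in plain Java (the class names are invented for illustration and have nothing to do with WLPS/WLCS): the model knows nothing about rendering, the view only renders what it is handed, and the controller is the only piece that knows about both.

    // Model: pure business logic, no presentation code.
    class BeanCatalog {
        java.util.List<String> beans() {
            return java.util.List.of("Arabica", "Robusta");
        }
    }

    // View: rendering only; it receives data and never fetches it.
    class CatalogView {
        String render(java.util.List<String> beans) {
            return "<ul><li>" + String.join("</li><li>", beans) + "</li></ul>";
        }
    }

    // Controller: the only class that knows about both model and view.
    class CatalogController {
        String handleRequest() {
            return new CatalogView().render(new BeanCatalog().beans());
        }
    }

    public class MvcSketch {
        public static void main(String[] args) {
            System.out.println(new CatalogController().handleRequest());
        }
    }

    The complaint about the BuyBeans JSPs is precisely that all three of these roles end up mixed together in one page.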

    Nicolas,
    I agree that the integration between PS and CS is not as tight as it could
    be, and we are working (hard!) on improving this right now. There are options,
    however, and a limited integration between the two is possible. The
    DestinationDeterminers are pluggable, so it should be possible to write
    custom code to do both. As you say, this may not be an option on a tight
    deadline if you need to deliver immediately.
    Incidentally, the main reason we did not want to replicate the "old"
    BuyBeans demo was that it contained a lot of code that was specific to the
    BuyBeans site and look and feel. We wanted to create a "template" site that
    was a much better starting point for users to customize.
    I appreciate your comments, and I think you will be very pleasantly
    surprised with coming releases which address things like graphical webflow
    definition and move PS (and Portal) onto the Webflow model.
    Sincerely,
    Daniel Selman
    Nicolas Lejeune <[email protected]> wrote:
    Thank you Daniel,
    I actually had a good look at the Commerce Server 3.1. You're right, pipelines
    and webflow are useful things, although the latter is a bit tricky since there
    is no editor yet.
    The big problem with version 3.1 is that the perso. server and the commerce
    server are not compatible anymore. I guess that's why the BuyBeans example is
    out, and the new commerce example (the catalog) is not a portal.
    They are incompatible because with the PS, everything has to be sent to the
    Portal Service Manager, and with the CS, everything has to be sent to the
    webflow manager.
    It might be possible to somehow combine them, but it's certainly not
    straightforward, and we don't have time to play around in my actual project.
    I asked a BEA instructor about it but he had no solution.
    Since the portal look & feel was a priority for us, we decided to drop the
    CS and use only the PS.
    Any comments on this subject?
    Nicolas
    "Daniel Selman" <[email protected]> wrote in message
    news:[email protected]...
    Nicolas,
    MVC considerations were a big part of the rationale behind the Webflow
    and
    Pipeline architecture used in WLCS 3.1. I suggest you take a look at the
    latest release, which does a much better job at handling this (complex)
    topic.
    Sincerely,
    Daniel Selman

  • Architecture Considerations with AD RMS

    Hi,
    I'm looking to implement AD RMS in an organization, and would like to find out more about some architectures that I have come up with, and hopefully get advice on which is better.
    Architecture 1: 2 Physical Servers for AD RMS and MSSQL
    Of course, we know that this is the most ideal architecture, but it is not cost-efficient for the organization, leading to the concepts of the following architectures.
    Architecture 2: 2 Virtual (VMWare) Servers for AD RMS and MSSQL
    What are the implications of using a virtual server for production?
    Architecture 3: 1 Physical Server for AD RMS and MSSQL
    I know it is possible to install AD RMS and MSSQL on a single server (bad practice notwithstanding), but I would like to know the implications and whether it will cause any underlying or prospective problems.
    Architecture 4: 1 Virtual (VMWare) Server for AD RMS and MSSQL
    Most ideal in terms of cost, but what are the implications of doing so?
    Thanks in advance for any advice!

    Hi jeromeee,
    AD RMS is in the end a web service talking to AD and has no problem running on a virtual server. For SQL it might make sense to run it on a physical box, but only for really large environments (don't ask me where "really large" begins for RMS). So for the projects I did, I just used an existing SQL server/cluster provided by the client's SQL team. And you have to check performance as part of your operational tasks anyway, regardless of whether it is virtual or physical. You can then move the SQL database to another server, physical or virtual.
    If you plan on just one RMS server, SQL can be on the same machine. You could even add another RMS machine to the cluster for load balancing - but not for failover, unless you don't care about the RMS log files.
    Regards,
    Lutz
    Hi Lutz,
    Thanks for your reply and input! It makes sense that both AD RMS and SQL Server could run in a shared virtual environment (Architecture 4, as mentioned in my first post), and since it'll only be supporting around 600 users (at most), I feel that this setup is the more favorable option at the moment.
    However, I don't quite understand what you meant by "..., but not for failover unless you don't care about the RMS log files." Does this mean that if I plan to scale up by adding an additional RMS machine to the cluster for load balancing in the future, there is no possibility of failover?
    Once again, thanks for your reply Lutz!

  • How to set up a tenant account on SharePoint Online 2013

    Hi All,
    My client has asked me to set up development and QA environments on SharePoint Online 2013. I have gone through the following link,
    http://blogs.perficient.com/microsoft/2014/08/how-to-develop-and-deploy-for-sharepoint-online-office-365/
    which explains the benefits of setting up development and QA environments. One of the things it highlights is:
    It helps to name tenants consistently. We usually use the convention:
    •https://<production tenant name>.sharepoint.com
    •https://<production tenant name>DEV.sharepoint.com
    •https://<production tenant name>QA.sharepoint.com
    However, when I create a subsite the URL is very different from the above. The subsite has a URL something like https://mydomain.sharepoint.com/sites/dev or https://mydomain.sharepoint.com/sites/QA. I am not sure how I can do the naming in the manner specified
    in the article (i.e. https://<production tenant name>DEV.sharepoint.com).
    Thanks
    Krishna 

    Hi krishnat,
    Yes, you are correct: you will need to create two more tenants for DEV and QA, with the URL format you listed above.
    The FQDN URLs below (tenant.sharepoint.com) are separate tenant domain URLs, e.g. <production tenant name>, <production tenant name>DEV and <production tenant name>QA, so per the article you referenced, two additional tenants need to be created:
    •https://<production tenant name>.sharepoint.com
    •https://<production tenant name>DEV.sharepoint.com
    •https://<production tenant name>QA.sharepoint.com
    Here is a post with more information about SharePoint Online tenants; you can take a look:
    http://blogs.msdn.com/b/richard_dizeregas_blog/archive/2014/09/01/sharepoint-online-information-architecture-considerations.aspx
    Thanks
    Daniel Yang
    TechNet Community Support

  • Why doesn't iMovie use ProRes 422 or native instead of AIC

    Does anybody know why Apple still uses AIC to transcode all captured video streams instead of ProRes 422? And why does it transcode in the first place? Why can't it use the native HDV or AVCHD streams?
    I know that using native HDV, and especially AVCHD, loads the processor with all the decoding, but it should at least be an option for high-end machines. HDV editing in FCP works fine, and the storage requirements drop to between 1/3 and 1/8 of what they are with AIC and ProRes.
    I think the ideal workflow would be to capture in the native format, edit in the native format when no re-compression is necessary and only render to ProRes when effects/titles/filters are applied.
    Is it just too much development work or is there an architectural consideration from the development group to force everything through AIC for some reason?
    Is it a licensing issue? Does Apple pay royalties for ProRes for every FCP sale? Would it be prohibitively expensive to distribute ProRes with iLife?
    Obviously only someone from the iMovie group would be able to answer all of these questions but we may be able to gather some insights from the community to get a better picture.

    I understand that the iMovie and FCP teams at Apple have been, hitherto, completely independent.
    FCP was bought in - under a different original name - and tweaked from its original incarnation before being offered as 'Final Cut Pro' by Apple. See the section marked "History" in this Wikipedia article.
    iMovie, however, was written long ago to Steve Jobs' specifications by Glenn Reid as a simple video editor for amateurs.
    The ProRes codec appears to have been created separately from the Apple Intermediate Codec of iMovie ..probably because of different programmers' responsibilities for the separate programs ..although, under the 'Terms of Use', we're not supposed to speculate here in Apple Discussions.
    HDV and AVCHD, being extremely 'compressed' methods of storing video, similar to the MPEG-2 format used for squeezing long movies onto small DVDs, cannot be edited 'frame-accurately' directly, as most of the video frames rely on data stored in other frames for their content. In other words, the 1st frame of fifteen frames contains a whole frame's worth of data, but the next 14 contain only differences between the first frame of a group and the subsequent frames.
    So there needs to be a method to 'unscramble' or extract the data from the next few frames after the first of each group, in order to reconstitute the rest of the frames for editing them.
    AIC is the method used in iMovie. ProRes is the method chosen for FCP.
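    A toy illustration of the group-of-pictures idea described above (this is not Apple's actual codec logic, just the arithmetic of delta frames), in Java: the keyframe is stored whole, every later frame stores only per-sample differences, so reconstructing frame N means replaying every delta since the keyframe.

    import java.util.Arrays;
    import java.util.List;

    public class GopSketch {
        // Rebuild frame n by applying each stored delta to the keyframe in turn.
        static int[] reconstruct(int[] keyframe, List<int[]> deltas, int n) {
            int[] frame = Arrays.copyOf(keyframe, keyframe.length);
            for (int i = 0; i < n; i++) {
                int[] d = deltas.get(i);
                for (int p = 0; p < frame.length; p++) frame[p] += d[p];
            }
            return frame;
        }

        public static void main(String[] args) {
            int[] key = {10, 20, 30};                         // a whole frame's worth of data
            List<int[]> deltas = List.of(new int[]{1, 0, -2}, // frame 1 = keyframe + delta 1
                                         new int[]{0, 3, 0}); // frame 2 = frame 1 + delta 2
            System.out.println(Arrays.toString(reconstruct(key, deltas, 2))); // [11, 23, 28]
        }
    }

    This replay requirement is why frame-accurate editing of interframe formats is expensive, and why transcoding to an intraframe codec (AIC, ProRes) makes every frame independently editable.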
    It's interesting, though, that Randy Ubillos, now 'Chief Architect - Video Applications' at Apple, and the "onlie true begetter" of what later became Final Cut Pro, was the man who demonstrated iMovie '09 at last month's MacWorld Keynote. So if Randy's on hand to explain how to use iMovie, and created the new-style iMovie, then maybe we'll see some more convergence occur. (..iMovie has already taken on board FCP's "instant rendering", so that we no longer have to wait for transitions to be rendered within iMovie, but can see the results immediately. iMovie's real "behind the scenes" rendering now takes place during export, after editing's finished..)

  • Need to change the background color of a textview in offline PDF

    Hi,
    We are using the PDFDocument API in order to generate a PDF file which is eventually stored in a network folder. We were initially using the Web Dynpro Interactive Form UI element, but moved away from that approach because of some architecture considerations.
    I am able to do everything with the PDFDocument API except two things, which are proving to be much tougher than I thought:
    1) I have a textview in my .XDP template. The background color of this textview must change depending on the value that I display inside it. For example, if the value is between 1 and 20, the background color must be green; if the value is between 21 and 40, it must be yellow; and so on...
    I still have not found a way to specify the background color of a textview using the .XML data file.
    2) Depending on a certain condition, I need to display 5 images in a table row instead of the usual 6. I have been able to do that by simply not providing the 6th image URL, but I also need to resize the 5 images so that they occupy the space initially used by 6.
    Any ideas of how I can go about these requirements?
    Thanks & regards,
    Navneet Nair.

    1) I added an invisible text field inside my .XDP file and populated it with the color value that I want my main text field to display. (This color needs to be specified in R,G,B format, for example: 128,0,0.)
    2) Now, in the 'initialize' JavaScript event of my main text field, I included the following script:
    this.resolveNode("<SubFormName>.<MainTextFieldName>").fillColor = this.resolveNode("<SubFormName>.<InvisibleTextFieldName>").rawValue;
    Hope it helps!
    - Navneet

  • Building Site Collections in SharePoint Online

    Hello all,
    Please forgive me for this very basic question, but I was not able to find the info online. 
    Background info: I have a great deal of experience with SharePoint administration on-prem but not much experience with SharePoint Online. My current organization has O365 and wants to build a collaboration environment and an Intranet. I understand the information
    architecture practices that would have been common for this in an on-prem environment, and I understand that those have changed with 2013 and SharePoint Online due to the new practice of organizing everything under one web app.
    Between reading some articles and using my experience with on-prem, I am trying to design an organized information architecture that will facilitate both environments and the permissions structures they require. My tentative plan was to build one site collection
    for the Intranet under https://mycompany.sharepoint.com/sites (under which subsites would be built for various Intranet divisions), and then a number of individual site collections for the collaboration environment under https://mycompany.sharepoint.com/teams
    (/hr, /finance, etc., so that content owners and users could work in their own site collection without affecting a site collection belonging to someone else). 
    This is the article that initially caused me to believe that this is possible/a good idea: http://blogs.msdn.com/b/richard_dizeregas_blog/archive/2014/09/01/sharepoint-online-information-architecture-considerations.aspx. 
    "For [managed paths] you get the root, /search (explicit managed path), /sites (wildcard managed path), and /teams (wildcard managed path)."
    However this article leads me to believe that I will not be allowed to create any site collections at all, only subsites: https://support.office.com/en-us/article/Plan-sites-and-manage-users-8e568d8d-3d65-42c4-99fa-f7285c9db842. 
    "You cannot create additional site collections in SharePoint Online for Office 365 for Small Business."
    I understand the answer to this may depend on what version of O365 we have, and I am working to find that out. 
    My understanding is that I cannot be made a SharePoint Online administrator without being a global O365 administrator: http://blogs.technet.com/b/lystavlen/archive/2012/06/14/understanding-the-administrator-role-in-sharepoint-online.aspx.
    "You cannot separate the roles of Office 365 global administrator and SharePoint Online Administrator." 
    Thus, I don't currently have the level of access I need to browse around the administration area and find the answers to my questions that way. When the time comes, I will likely be given temporary access or will be working with an O365 global administrator
    to build.
    Main question: Will I be allowed to build site collections under managed paths?
    Larger question: How are others managing the information architecture for an Intranet and a Collaboration environment in SharePoint Online, especially if the answer to the prior question is no?
    Thanks in advance for any insight you can offer. 
    Shae

    Hi,
    According to the error message, the Microsoft.Online.SharePoint.Client.Tenant.dll seems not to have been loaded.
    You can find it in the path below:
    C:\Program Files\SharePoint Client Components\Assemblies
    Here is a code snippet showing how to retrieve the list of site collections in a tenant, for your reference:
    using System;
    using System.Security;
    using Microsoft.SharePoint.Client;
    using Microsoft.Online.SharePoint.TenantAdministration;

    const string username = "[email protected]";
    const string password = "password";
    const string tenantAdminUrl = "https://yourdomain-admin.sharepoint.com/";

    // Build SharePoint Online credentials from a SecureString password.
    var securedPassword = new SecureString();
    foreach (var c in password.ToCharArray()) securedPassword.AppendChar(c);
    var credentials = new SharePointOnlineCredentials(username, securedPassword);

    // Connect to the tenant admin site and enumerate all site collections.
    using (var context = new ClientContext(tenantAdminUrl))
    {
        context.Credentials = credentials;
        var tenant = new Tenant(context);
        SPOSitePropertiesEnumerable spp = tenant.GetSiteProperties(0, true);
        context.Load(spp);
        context.ExecuteQuery();
        foreach (SiteProperties sp in spp)
            Console.WriteLine(sp.Title);
    }
    Best regards,
    Patrick
    Patrick Liang
    TechNet Community Support

  • PI 7.0 Hardware Requirements & Configuration

    Hello,
    We are planning to install PI 7.0 on the AIX 5.3.x(64 bit) with database DB2 UDB 8.2.x.(64 bit) for a Sandbox environment.
    I am new to the XI area. I need some small/detailed information regarding following requirements( from PI BASIS point of view)
    1) Hardware requirements
    2) Memory  requirements
    3)Any hardware or platform constraints for the AIX & DB2.
    4) PI 7.0 Configuration.
    5) Idoc Interfacing.
    6) Architectural considerations for Integration server, Java AS, SLD etc.
    7) Authorizations in XI
    you can mail me on [email protected]
    Thanks,
    Dnyandev

    Installation Guide: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/95d7d490-0301-0010-ce93-c58f9a3cde0b
    User Roles in an XI Landscape: http://help.sap.com/saphelp_nw04/helpdata/en/58/d22940cbf2195de10000000a1550b0/content.htm
    The requirements depend on the operating system and the database used for the installation.
    Go to http://service.sap.com/instguides -> SAP Netweaver -> Release 04 -> Installation -> SAP Web AS -> SAP Web AS 6.40 SR1 and Related Documentation -> Choose your database (Ex: Oracle ) -> Planning and Preparation -> Select SAP Web AS ABAP for Windows.
    Look at Page 37, minimum RAM is 1 GB (because XI is Unicode), ABAP system 25GB.
    You should check the same for Web AS Java also.
    One more small thought...
    You also need to determine how end users will access your system. If an end user connects from outside the LAN (say via WAN or dial-up) using HTTPS, this places additional requirements on your hardware, as you will need compression enabled for acceptable response times over the WAN. Adding compression or SSL may require additional hardware if you based your sizing on LAN access.
    One easy approach is, after you define your test cases, to run some load tests on your QA/staging system. Use the load-test data to determine the hardware requirements. If your requirement is larger than the staging system, you may have to take a staged approach: roll out the system to a certain number of users, monitor the system, and plan for additional users based on the monitored results.
    Also, for RAM and HDD sizing, you can check this document:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/unkown/sizing%20guide%20-%20sap%20xi%203.0.pdf
    Also go through these links...
    /people/susanne.janssen/blog/2005/06/09/efficient-sap-hardware-sizing-quick-sizer
    http://service.sap.com/quicksizing

  • Web Part inside a Web Part

    Hello.
    I am currently redesigning a SharePoint site. The new proposal is 95% complete, but I am having one issue which I am struggling to resolve.
    I need to have a web part within another web part (a web part showing the news and announcements list inside the whole-page web part) inside a table cell.
    To resolve this I created an iFrame and placed a web part inside it. This solved the issue; however, whenever the links in this web part were clicked they opened in the small iFrame window and not the parent window (the one showing the whole page).
    I am trying to find a way to open the links in the iFrame in the parent window, a new tab, or a new browser window - just like when you right-click a link and select Open in New Tab, but automatically. Also, because there is no option to change the hyperlinks in the announcements list, I am unable to use the well-known target="_blank" approach on them directly.
    If you need any more details I will be happy to provide them.
    Thank you.

    SharePoint architectural considerations aside, you can use the target attribute of your href links to escape an iframe. For instance:
    <a href="#" target="_top">Link</a> - will use the full browser window, escaping any iframes
    <a href="#" target="_blank">Link</a> - will open in a new tab/window, depending on user browser settings
    For more information about the target attribute, read this tutorial.
    Danny Jessee
    MCPD - SharePoint Developer 2010
    MCTS - SharePoint 2010, Configuring
    dannyjessee.com/blog

  • Crystal Reports on AWS (Amazon Web Services)

    Hi All,
    Can someone share their experiences or insights on installation/migration of Crystal Reports on AWS (Amazon Web Services Cloud)?
    I have a requirement to install and migrate about 100 Crystal Reports to AWS.
    Thank you in Advance.
    Regards,
    Nachiket

    Thank you for the update, Shreejith and Dell.
    @Dell: My requirement is to install Crystal Reports Server on AWS; my reports will also be running off a database on AWS.
    I wanted to know if there are any specific settings that need to be made with regard to AWS.
    This KBA also helps:
    1588667 - SAP on AWS: Overview of related SAP Notes and Web-Links
    After researching a bit I came across the information below:
    Build your own SAP environment on AWS
    AWS and SAP have collaborated to offer customers options and convenience when deploying SAP applications in the cloud. Customers can now license SAP applications to run in the AWS cloud computing environment from an authorized SAP reseller, or use their existing SAP software licenses on Amazon EC2, with no additional SAP license fees. AWS cloud infrastructure services can be purchased directly from AWS or through one of our Resellers. SAP software licenses are sold directly by SAP or their affiliated channel partners. System integration, deployment, and hosting services are available through SAP partners and AWS Solution Providers.
    The following are the steps required to get started building your own SAP environment on AWS:
    Step 0: Planning
    In order to properly size and configure SAP solutions on Amazon Elastic Compute Cloud (EC2) instances, customers should follow these guidelines established by SAP and AWS:
    Use the SAP Quick Sizer
    Follow the technical guidelines outlined in SAP Note 1588667
    Step 1: New to AWS?
    Sign up for an AWS Account
    Read the Getting Started with AWS Guide
    Step 2: Implement the Required Compute, Storage, and Network Resources
    For detailed information and best practice guidelines on the steps necessary to implement SAP solutions on the required AWS infrastructure for an SAP environment please read the Implementing SAP Solutions on AWS Guide.
    Step 3: How to Install/Deploy SAP Solutions
    The AWS Management Console provides an easy-to-use graphical interface to manage your compute, storage, and other cloud resources. Most AWS products can be used from inside the console, and the console supports the majority of functionality for each service.
    To begin to install SAP software on Amazon EC2 start with a base Windows Server, SUSE Linux Enterprise or Red Hat Enterprise Linux system image and then install the SAP software just as you would on any physical or virtual server.
    Automate your deployment or launch directly into your AWS account using the SAP HANA on the AWS Cloud Quick Start Reference Deployment, which serves as a reference and provides architectural considerations and configuration steps necessary for deploying SAP HANA on AWS utilizing a “Bring Your Own License (BYOL)” scenario.
    Step 4: Run SAP Solutions on AWS
    Now that you have installed the software on the AWS cloud, there are special considerations that need to be taken into account to run SAP solutions on AWS.
    The SAP on AWS Operations Guide
    The SAP on AWS Backup and Recovery Guide: http://d0.awsstatic.com/enterprise-marketing/SAP/sap-on-aws-backup-and-recovery-guide-v2-2.pdf
    Regards,
    Nachiket

  • Problems debugging Database Java remotely using JDeveloper 9i (9.04)

    I am currently working on a project where we are configuring a COTS product built on an Oracle 9i database. The product has been upgraded and has changed its architecture considerably to support Java on the database, as well as PL/SQL, in the form of an API set. My team are writing custom code to meet the client's requirements; they are all proficient PL/SQL developers with some Java experience, but we are having real trouble developing and testing our code the way we used to in PL/SQL using tools such as TOAD and PL/SQL Developer. I have invested some time in working out how to remote-debug our Java code and have had some success with JDeveloper, but have run into the following issues:
    1. I am unable to add Java objects or variables to the Watch, though I can view PL/SQL variables.
    2. The reported execution of the code seems inaccurate, i.e. often when I step into the Java code, I receive an exception that prevents me from continuing but says little about what has gone wrong.
    3. In general, debugging is very hit-or-miss and does not appear to be a worthwhile tool for fixing code.
    Has anyone managed to set up JDeveloper so that it is as proficient as TOAD or the like in debugging Java, and can anyone suggest how to resolve these issues?
    Peter

    Topic closed. The problem was with the database generated by DBCA. Resolved via TARs on Metalink.

  • Modules needed for ATG lock manager isolation

    Hi. Currently I am deploying my complete commerce EAR on the lock server instance. I need to isolate only the components required for the lock server instance. Can anyone please tell me which modules are required when assembling an EAR for the lock server instance?
    Thanks,
    Mathew.

    For a lock manager server you only need the DafEar.Admin and DSS modules.
    Also, you mentioned deploying your commerce EAR on the lock server instance; just in case you want to run both on the same instance, that is not the recommended way. One of the basic ATG deployment architecture considerations is that auxiliary server instances (global scenario server, process editor server, lock manager server, etc.) should not receive user requests; page servers should be the only instances accepting user requests.

  • Java concurrency and inputstream

    Hi,
    I want to write image files from a database to disk using a queue. I can write these images to disk from the result set directly.
    Can someone tell me where I am wrong in the following code? I get: "trying to write to disk: Closed Connection".
    Thank you.
    {code}
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Producer: reads rows from the database and queues a stream reference per row.
    public class ExtractPicture implements Runnable {
        private BlockingQueue<InputStreamMessage> queue;

        public ExtractPicture(BlockingQueue<InputStreamMessage> queue) {
            this.queue = queue;
        }

        @Override
        public void run() {
            try (Connection conn = new DatabaseConnection().getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("select mypicture from testpicture")) {
                while (rs.next()) {
                    InputStreamMessage ism = new InputStreamMessage(rs.getBinaryStream("mypicture"));
                    try {
                        queue.put(ism);
                    } catch (Exception e) {
                        System.out.println(e.getMessage());
                    }
                }
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }

    // Consumer: takes stream references off the queue and writes them to disk.
    class Consumer implements Runnable {
        private BlockingQueue<InputStreamMessage> queue;

        public Consumer(BlockingQueue<InputStreamMessage> queue) {
            this.queue = queue;
        }

        @Override
        public void run() {
            try {
                int z = 0;
                InputStreamMessage is;
                while ((is = queue.take()) != null) {
                    System.out.println("consumer aa" + is.getInputStream());
                    try {
                        int c;
                        OutputStream f = new FileOutputStream(new File("c:\\temp\\p" + z + ".jpeg"));
                        while ((c = is.getInputStream().read()) > -1) {
                            f.write(c);
                        }
                        f.close();
                    } catch (Exception exce) {
                        System.out.println("trying to write to disk: " + exce.getMessage());
                    }
                    z++;
                }
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }

    // Simple wrapper so the queue can carry the stream reference.
    class InputStreamMessage {
        private InputStream is;

        public InputStreamMessage(InputStream is) {
            this.is = is;
        }

        public InputStream getInputStream() {
            return is;
        }
    }

    class RunService {
        public static void main(String[] args) {
            BlockingQueue<InputStreamMessage> queue = new ArrayBlockingQueue<>(10);
            new Thread(new ExtractPicture(queue)).start();
            new Thread(new Consumer(queue)).start();
        }
    }
    {code}

    This is really a JDBC issue: Java Database Connectivity (JDBC).
    Your code is getting a STREAM from the result set and putting a reference to that stream in a queue.
    But then the code executes 'rs.next()', which closes the stream and attempts to move to the next row, if any, of the result set.
    Stream data MUST be read immediately. Any attempt to access columns after the column being streamed, or to move to another row, will close the stream.
    See the JDBC Dev Guide for the details of processing LOBs and using streams.
    http://docs.oracle.com/cd/B28359_01/java.111/b31224/jstreams.htm#i1014109
    Data Streaming and Multiple Columns
    If a query fetches multiple columns and one of the columns contains a data stream, then the contents of the columns following the stream column are not available until the stream has been read, and the stream column is no longer available once any following column is read. Any attempt to read a column beyond a streaming column closes the streaming column.
    Also see the precautions about using streams:
    http://docs.oracle.com/cd/B28359_01/java.111/b31224/jstreams.htm#i1021779
      Use the stream data after you access it. 
    To recover the data from a column containing a data stream, it is not enough to fetch the column. You must immediately process the contents of the column. Otherwise, the contents will be discarded when you fetch the next column.
    It is important that the process consuming the stream has COMPLETE control over the components needed to process it properly. That includes the connection, statement and result set.
    As TPD pointed out, by defining the connection, statement and result set within a method, those instances go OUT OF SCOPE when the method ends.
    Since you also defined the objects within a try block, they go out of scope when the block exits even if the method doesn't end.
    You didn't say why you are trying to use queues to do this, but I assume it is part of some multi-threaded application. If so, you have some additional architectural considerations in terms of keeping things modular while still being able to share certain components.
    For example, how are your connections being handled? Are you using a connection pool? The queue processor needs sole access to the connection being used to stream an object. Your code ONLY puts a reference to a stream on the queue, but then has to WAIT until the stream has been FULLY READ before that code can read the next column or next row, or close the result set.
    That makes those two processes mutually dependent, and you aren't taking that dependency into account.
    Hopefully you are doing the dev and testing using TINY files until things are working properly?
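    A minimal sketch of the "read it immediately" approach described above, reusing the hypothetical DatabaseConnection helper from the original post (and assuming the queue is changed to a BlockingQueue<byte[]>): drain each image into a byte array while the result set is still positioned on its row, and queue the bytes instead of a live stream, so the consumer no longer depends on the producer's connection.
    {code}
    // Producer rewritten to materialize each image before rs.next() is called.
    // Needs java.io.ByteArrayOutputStream in addition to the original imports.
    try (Connection conn = new DatabaseConnection().getConnection();
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("select mypicture from testpicture")) {
        while (rs.next()) {
            try (InputStream in = rs.getBinaryStream("mypicture");
                 ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
                int c;
                while ((c = in.read()) > -1) {
                    buf.write(c);             // drain the stream NOW, on this row
                }
                queue.put(buf.toByteArray()); // hand off fully read bytes
            }
        }
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
    {code}
    Once the queue carries byte arrays, the consumer can write files at its own pace, and closing the connection no longer invalidates anything the consumer holds.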

  • Sequence and sequence files advice

    I am embarking on a project to test two-way radios using around 50 tests (receiver and transmitter).
    It seems each test should be in its own sequence, but should each sequence be in its own sequence file, or should they all be in just a few files?
    I am just not sure about the pros and cons of individual sequence files versus having many sequences in one file, so I'd appreciate any advice on the subject.
    Many thanks,
    Ronnie
    TestStand 4.2.1, LabVIEW 2009, LabWindows/CVI 2009

    Hey Ronnie,
    I would absolutely not put each Sequence in its own Sequence File - this is completely unnecessary and will add a lot of maintenance overhead to your development. That said, the way in which you organize your Sequences on a Sequence File basis is a matter of both personal preference and application architecture. Placing all application Sequences within a single Sequence File will be easier to keep track of, as they'll all be contained within a single file that you need to maintain. However, there are often logical reasons to involve a few different Sequence Files under the umbrella of one single application.
    For example, what if you were creating a "library" of commonly utilized tests/Sequences and you wanted to store those common Sequences in one Sequence File, but put a set of application or UUT-specific Sequences within a separate Sequence File?  This would make sense, as all applications (no matter which UUT) could then utilize your common "library" Sequence File and you'd introduce a nice modularity to your system.  This is just one example in which this might make perfect sense.
    You can expand upon this and ask yourself - does each test really need its own Sequence (it might, depending on how many steps-per-test) or can each test be more simply organized in the form of an individual code module, thereby cutting down on the number of Sequences you need?
    There are more architecture considerations discussed in our TestStand Customer Education courses (both online and regional).  Let me know if you'd be interested in finding out more information about the courses.
    Derrick S.
    Product Manager
    NI DIAdem
    National Instruments

  • PXE with IP Helpers/DHCP Relay

    I'm a sysadmin and I have a question about best practice in regards to PXE servers. We are currently using DHCP options for PXE clients (options 66 and 67). This works for most clients but is not the recommended method from either of the vendors we have used (Microsoft or Symantec). They recommend using IP helpers / DHCP relay to forward the DHCP discover request to the PXE servers, so that the PXE server receives the actual request. This is more of an issue now with UEFI-based machines, where the boot file differs depending on whether the client is UEFI.
    My network team is against using IP helpers and thinks they can cause issues. This doesn't make much sense to me: from what I understand, all that happens is that both the DHCP server and the PXE servers get the DHCP discover and respond with their relevant info. Can someone clarify what issues, if any, there are in using multiple IP helpers / DHCP relay with PXE servers like SCCM & Altiris? Is this not standard practice?

    It's very common to use DHCP relays (IP helpers) to centralize DHCP infrastructure. Larger organizations frequently use this approach to avoid having to manually edit DHCP configurations at the router or switch level. Having a few servers with a central DHCP configuration for all segments is a good management proposition.
    In most environments there isn't a problem with doing this, but it is a major architectural consideration and not something you just turn on without planning. This is largely because DHCP works on a broadcast principle: clients broadcast and take the first DHCP server that answers with an acceptable offer. If you have a mixture of local DHCP servers and relays, the local servers will respond faster and, at best, may not provide the configuration you want to deploy. At worst, you will have a mix of acceptable responses and a lot of potential for conflicting addresses. On any network segment where you're using DHCP relays, the local server needs to be disabled.
    It might be worthwhile going back to your network team and asking what sorts of "issues" they feel the implementation of DHCP relays would cause. There may be something unique to your environment that makes them reluctant to pursue this approach.
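    A toy Java sketch of that broadcast principle (not a real DHCP client - a real DISCOVER is a structured BOOTP packet sent from UDP port 68 to port 67; the payload here is only a stand-in): the client shouts once at the whole segment, and whichever server answers first wins, which is exactly why a stray local DHCP server on a relayed segment causes trouble.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class BroadcastSketch {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);              // allow 255.255.255.255
                byte[] payload = "DISCOVER".getBytes(); // stand-in for a BOOTP packet
                DatagramPacket pkt = new DatagramPacket(
                        payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), 67);
                socket.send(pkt);                       // every host on the segment sees this
            }
        }
    }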
