SNC/SSL related queries

Hello Experts,
We are planning for SNC and SSL enablement on our SAP systems (ECC and Portal) running on the Solaris 9 and Oracle 9.2 platform.
As per my initial analysis:
1. SSL can be implemented with the SAP-provided cryptographic library (free for customers), but we need a certificate key pair - a test certificate is available from SAP for 8 weeks only. Need clarification on the following points:
a. How much does a certificate cost from SAP or from other CAs such as Verisign?
b. Can we use certificates generated from a Windows Enterprise Certificate server - will this work if the portal is accessed from public networks?
c. How much is the performance impact in terms of CPU, memory, and response time (for the end user and the SAP application server) after implementing SSL?
2. For SNC, we need Kerberos library files, which are supplied by third-party solutions (CyberSafe, SECUDE, etc.) or open source (MIT). Please help me by providing answers based on your experience:
a. How successful is it to implement/support SNC with open-source libraries? Can someone share step-by-step details of how to do this on the Solaris platform?
b. How much is the performance impact in terms of CPU, memory, and response time (at the SAP GUI and SAP application server level) after implementing SNC? At what level does encryption/decryption work - the SAP application or the OS layer where the SNC libraries have been placed?
Thanks to all the experts for sparing your valuable time.
Best Regards
D P Singh

Hi,
SNC and Kerberos:
I have very good experience with this documentation from Calin:
http://www.mail-archive.com/kerberos(at)mit.edu/msg06889.html
SSL:
The Windows Enterprise Certification server can sign the certificates, but each browser has to know this certification authority and has to trust this server. Well-known certification authorities such as Verisign are already trusted by every browser.
Regards
Rainer
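Rainer's point about trust can be sketched with a small Python example: a client verifies a server certificate only if it chains to a CA in the client's trust store, so a certificate signed by an internal Windows Enterprise CA verifies only after that CA's root is added (the CA file name below is hypothetical).

```python
import ssl

# Browser-like clients verify the server certificate against a trust store.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # peer cert must chain to a trusted CA
assert ctx.check_hostname is True             # and must match the requested hostname

# A cert issued by an internal enterprise CA fails this check until the CA's
# root certificate is loaded explicitly (file name is illustrative):
# ctx.load_verify_locations(cafile="enterprise_root_ca.pem")
```

Public CAs such as Verisign are pre-installed in the default store, which is why their certificates work for visitors from public networks without any extra client-side configuration.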

Similar Messages

  • SD Related Queries

    Hello all,
    In our project we are using the Automotive Industry module,
    so now I want to take the Automotive module BW SD-related queries from
    [http://help.sap.com/erp2005_ehp_04/helpdata/EN/50/296fe7bf1a474f84d5955cedefa0a3/frameset.htm]
    Please tell me the BW SD queries from the above link.
    Regards.
    Edited by: Ranga123 on Mar 24, 2010 2:28 PM

    No one is answering my question, so admin, please delete this thread. Thanks.

  • AME related queries, where to post ?

    Hi all,
    Can you please tell me where we should post our AME-related queries? Since Oracle treats AME as part of HRMS, do we post those queries in this forum, or is there a separate forum for this purpose? Please provide me the link as well, if you can.
    Thanks a lot in advance.

    You can post it here, I think.

  • Automotive Industry SD Related Queries

    Hello all,
    In our project we are using the Automotive Industry module,
    so now I want to take the Automotive module SD-related queries from
    [http://help.sap.com/erp2005_ehp_04/helpdata/EN/50/296fe7bf1a474f84d5955cedefa0a3/frameset.htm]
    Please tell me the SD queries from the above link.

    Check this link
    [SAP Best Practices for Automotive|http://help.sap.com/content/bestpractices/industry/bestp_industry_automotive.htm]
    thanks
    G. Lakshmipathi

  • Idoc Related queries

    1. How can we view and rectify the errors or warnings caused while we create a new IDoc, which may be an extension of an existing basic IDoc type (at transaction code WE30)?
    2. How can we delete an IDoc we created, if it has already been released (at transaction code WE30) and configured (at transaction code WE82)?
    3. Is it mandatory that the 'Mandatory' checkbox always be checked whenever we create (extend) a new segment for an existing segment (at transaction code WE30)?
    4. On what basis can we identify to which existing segment we can append our needed segment (if any new segment is to be appended)?

    Hi Nagarajan,
      Answers for your questions:
    1) How can we view and rectify the errors or warnings caused while we create a new IDoc, which may be an extension of an existing basic IDoc type (at transaction code WE30)?
    IDoc types are created in WE30. First set a breakpoint in the related user exit, then use WE19 for testing: enter the failed IDoc number in WE19 and press F8. It will display the segments. Then enter /H in the command box and press the inbound function module push button (just beside the inbound push button). The function module opens in debug mode, and we can test it.
    2. How can we delete an IDoc we created, if it has already been released (at transaction code WE30) and configured (at transaction code WE82)?
    Yes, it is possible to delete an IDoc type that has been released from our system, I think through a remote function, but I am not sure.
    3. Is it mandatory that the 'Mandatory' checkbox always be checked whenever we create (extend) a new segment for an existing segment (at transaction code WE30)?
    We select that checkbox based on the requirement. Suppose you upload data for the MM01 transaction: observe which fields are mandatory on that screen, and set the mandatory checkbox for the corresponding fields in the segment. (For example, if the material number is mandatory in MM01, select the mandatory checkbox for MATNR while creating the segment.)
    4. On what basis can we identify to which existing segment we can append our needed segment (if any new segment is to be appended)?
    Based on the basic IDoc type and the information given by the user.
    Hope this helps you; reply with further queries.
    Regards.
    kumar.

  • Adobe create suite 64-bit related queries

    Hi,
    I have following couple of questions related to 64-bit support in Adobe Products.
    1. Do Adobe Illustrator CS3, CS4, and CS5 support 64-bit?
    2. Do Adobe Photoshop CS3, CS4, and CS5 support 64-bit?
    3. I heard that CS5 would support 64-bit. Would all applications in Creative Suite 5 support 64-bit?
    4. Do 32-bit and 64-bit have separate installers, or can the same installer be used on both 32-bit and 64-bit?
    5. On which Windows platforms will CS 64-bit be supported?
    6. On which Mac platforms will CS 64-bit be supported?
    7. Does a separate license have to be purchased for 32-bit and 64-bit, or can the same license be used?
    Please clarify the above queries.
    Regards,
    Avudaiappan

    Find answers inline.
    AvudaiappanSornam wrote:
    Hi,
    I have the following couple of questions related to 64-bit support in Adobe products.
    1. Do Adobe Illustrator CS3, CS4, and CS5 support 64-bit?
    Illustrator CS5 is not 64-bit.
    2. Do Adobe Photoshop CS3, CS4, and CS5 support 64-bit?
    Photoshop CS5 is 64-bit.
    3. I heard that CS5 would support 64-bit. Would all applications in Creative Suite 5 support 64-bit?
    Since the answer to question 1 is no, you know the answer.
    4. Do 32-bit and 64-bit have separate installers, or can the same installer be used on both?
    The same download can install 64-bit if you have a 64-bit OS.
    5. On which Windows platforms will CS 64-bit be supported?
    XP, Vista, Win 7.
    6. On which Mac platforms will CS 64-bit be supported?
    10.5.7 and 10.6.x.
    7. Does a separate license have to be purchased for 32-bit and 64-bit, or can the same license be used?
    I believe no, but you can always cross-check with Adobe or a reseller before purchasing.
    Regards,
    Avudaiappan

  • Relational queries through JDBC with the help of Kodo's metadata for O/R mapping

    Due to JDOQL's limitations (inability to express joins, when relationships
    are not modeled as object references), I find myself needing to drop down to
    expressing some queries in SQL through JDBC. However, I still want my Java
    code to remain independent of the O/R mapping. I would like to be able to
    formulate the SQL without hardcoding any knowledge of the relational table
    and column names, by using Kodo's metadata. After poking around the Kodo
    Javadocs, it appears as though the relevant calls are as follows:
    ClassMetaData cmd = ClassMetaData.getInstance(MyPCObject.class, pm);
    FieldMetaData fmd = cmd.getDeclaredField("myField");
    PersistenceManagerFactory pmf = pm.getPersistenceManagerFactory();
    JDBCConfiguration conf =
        (JDBCConfiguration) ((EEPersistenceManagerFactory) pmf).getConfiguration();
    ClassResolver resolver = pm.getClassResolver(MyPCObject.class);
    Connector connector =
        new PersistenceManagerConnector((PersistenceManagerImpl) pm);
    DBDictionary dict = conf.getDictionary(connector);
    FieldMapping fm = ClassMapping.getFieldMapping(fmd, conf, resolver, dict);
    Column[] cols = fm.getDataColumns();
    Does that look about right?
    Here's what I'm trying to do:
    class Foo {
        String name; // application identity
        String bar;  // foreign key to Bar
    }
    class Bar {
        String name; // application identity
        int weight;
    }
    Let's say I want to query for all Foo instances whose bar.weight >
    100. Clearly this is trivial to do in JDOQL if Foo.bar is an object
    reference to Bar. But there are frequently good reasons for modeling
    relationships as above, for example when Foo and Bar are DTOs exposed by the
    remote interface of an EJB. (Yeah, yeah, I'm lazy, using my
    PersistenceCapable classes as both the DAOs and the DTOs.) But I still want
    to do queries that navigate the relationship; it would be nice to do it in
    JDOQL directly. I will also want to do other weird-ass queries that would
    definitely only be expressible in SQL. Hence, I'll need Kodo's O/R mapping
    metadata.
    Is there anything terribly flawed with this logic?
    Ben
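The join Ben describes, expressed directly in SQL rather than JDOQL, looks like this (a minimal sketch using SQLite; the table and column names are illustrative, not the ones Kodo's O/R mapping would actually generate):

```python
import sqlite3

# All Foo rows whose related Bar has weight > 100, joined on the foreign key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bar (name TEXT PRIMARY KEY, weight INTEGER);
    CREATE TABLE foo (name TEXT PRIMARY KEY, bar TEXT REFERENCES bar(name));
    INSERT INTO bar VALUES ('b1', 50), ('b2', 150);
    INSERT INTO foo VALUES ('f1', 'b1'), ('f2', 'b2');
""")
rows = con.execute(
    "SELECT f.name FROM foo f JOIN bar b ON f.bar = b.name "
    "WHERE b.weight > 100"
).fetchall()
# rows == [('f2',)]
```

The point of reading the mapping metadata at runtime is precisely so the real table and column names can be substituted into such a statement without hardcoding them.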

    I have one point before I get to this:
    There is nothing wrong with using PC instances as both DAO and DTO
    objects. In fact, I strongly recommend this for most J2EE/JDO design.
    However, there should be no need to expose the foreign key values... use
    application identity to quickly reconstitute an object id (which can in
    turn find the persistent version), or like the j2ee tutorial, store the
    object id in some form (Object or String) and use that to re-find the
    matching persistent instance at the EJB tier.
    Otherwise, there is a much easier way of finding ClassMapping instances
    and in turn FieldMapping instances (see ClassMapping.getInstance () in
    the JavaDocs).
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • RFC related queries

    HI Friends,
    I have some queries which are RFC-related. Can you please clarify them:
    1) What does this syntax mean:
    call function 'FM' destination 'dev-60'
    If I use this syntax in a function module in dev-90, will it execute the above remote-enabled FM in dev-60?
    Can I use this syntax in the same remote-enabled FM?
    Thanks and Regards,
    Sakshi

    Hello Sakshi,
    This is a basic question which can be answered by googling. It is really easy; try this [link|http://tinyurl.com/yeqwqfv].
    BR,
    Suhas

  • Oracle 9iDS related queries

    Dear all,
    Please help me by providing the answers or documents for the following queries.
    1. Major difference between Oracle 6iDS and Oracle 9iDS.
    2. Can I Execute application developed in Oracle 6i
    (Client-Server) as is in Oracle 9iDS?
    3. Can I execute forms developed in Oracle 9iDS without
    Application Server?
    4. Equivalent of DFLT.PRT (available in 6i) in Oracle 9iDS
    You can also send me the document (if any) by mail. My mail id is [email protected]
    Thanks

    Hi,
    1. Major difference between Oracle 6iDS and Oracle 9iDS.
    - Listener Servlet architecture
    - Web only
    - 25+ new features (design, deployment and architecture)
    2. Can I Execute application developed in Oracle 6i
    (Client-Server) as is in Oracle 9iDS?
    You need to re-compile them. Also, there are some obsoleted built-ins that you need to migrate if you use them. There is a migration assistant (FMA) contained in the Developer Suite.
    3. Can I execute forms developed in Oracle 9iDS without Application Server?
    Oracle9iDS includes a stand-alone OC4J instance that you use for design-time testing of your application. For production, the only supported application server is Oracle Application Server.
    4. Equivalent of DFLT.PRT (available in 6i) in Oracle 9iDS
    This sounds Reports-related, and I would try this question on their forum.
    See also the 9i collateral on http://otn.oracle.com/products/forms
    Frank

  • 'ADMINISTRATION TOOL' RELATED QUERIES

    Hi All,
    I have a few queries regarding the OBIEE Administration Tool; requesting your help in getting answers for these.
    We are using OBIEE version 10.1.3.4.0; any information, documents, or sites related to these topics would help me.
    1. Suppose I have more than one dimension model in a single RPD, and more than one developer has access to this RPD. Is it possible to restrict access to one dimension model so that a developer cannot access the other models?
    2. Also, when there is more than one RPD in the Administration Tool and many developers have access to them, can security be defined such that 'User A has access only to RPD1' and cannot access any other offline/online RPD?
    3. Must the Administration Tool be installed on the server, or can it be installed on a client system as well? Asking because if more than one developer wants to access the Administration Tool at the same time, how can that be achieved?
    4. My RPD has more than one dimension model; can I import one model from this RPD into another RPD?
    5. What is a multiuser environment? Will it help with any of my above requirements?
    Thanks in advance

    1. No, but you can use MUD to define different projects so that developers "check out" the projects they are working on. See the links provided in the previous response.
    2. The security is defined in each RPD. To open an RPD you need to be an Administrator user defined in that RPD. Online access can be restricted if you block connections to your BI Servers on port 9703 so that they can only come from a local connection or from defined IPs. You will need a firewall for that, though. Offline access cannot be restricted: if I have an RPD locally and I have an admin account, I can open it anywhere I want.
    3. Client-only is fine. You would simply install the client tools on each developer's machine.
    4. Yes; search this forum for "merge repositories", plenty of guidance is already available.
    5. The links provided above give you a good explanation of what MUD is and what it is for. Searching this forum also gives you plenty of information.

  • Not able to get SSL related CGI Environment Variables?

    We are currently using APEX 3.2.0.x, OHS 10.1.3.x, and 11gR1 on linux. The APEX application we've been developing will be accessed via SSL and x509 certificates such that a client certificate is passed from a user's browser to the OHS, the information will be read from the certificate, and if the user's cert information exists in a user table associated with the application, they will have the role they've been assigned as an existing user within the application. Otherwise, the user will be a guest and have a minimum role accessing the application.
    We are certainly not gurus when it comes to setting up and configuring SSL and certs, but we have gotten to the point where we have all of the required certs created and installed, and the client cert passes its information successfully to the OHS to get to the "home" page of the application via the Rewrite statement in httpd.conf/ssl.conf that points to the appropriate https URL. We are now at the point where we need the APEX application page to read the cert information, and this is where we are having problems.
    We have created an "On Load - Before Header" process and temporary item on the "home" page to display CGI environment variables to see what we're getting. It's a PLSQL Anonymous block like this:
    DECLARE
      lUserName VARCHAR2(100);
    BEGIN
      SELECT NVL(owa_util.get_cgi_env('REMOTE_USER'), 'NOT POPULATED')
        INTO lUserName
        FROM DUAL;
      :P1_REMOTE_USERNAME := lUserName;
    END;
    We can grab any of the cgi environment variables that are listed in the OHS mod_plsql User's Guide. We cannot seem to be able to get any of the SSL CGI environment variables though. We are adding the SSL variables to the dads.conf via the PlsqlCGIEnvironmentList parameter (ex: PlsqlCGIEnvironmentList SSL_CLIENT_S_DN_CN) and bouncing the OHS as needed. Unfortunately, we have not been successful in getting any of them to show up in the item on the APEX page.
    As far as we can tell, we have SSL/OHS/certs configured, but maybe there is another SSL directive or some other configuration item that we've missed that needs to be set in order for the SSL CGI environment variables to be available to the owa_util.get_cgi_env function. If anyone can tell us what we may have missed, it would be appreciated.
    thanks
    bob

    Hey John,
    What we found (and were not sure of) is that we need to use Rewrite rules and conditions in ssl.conf to grab the SSL CGI environment variables and "put" them into the request header to hold them, like this:
    RewriteCond %{SSL:SSL_SERVER_S_DN} (.*)
    RewriteRule .* - [E=SSLS_DN:%1]
    RequestHeader add X-SSL-SERVER-S-DN %{SSLS_DN}e
    Then, in dads.conf, reference the request header with the PlsqlCGIEnvironmentList parameter like this:
    PlsqlCGIEnvironmentList HTTP_X_SSL_SERVER_S_DN
    Restart the OHS, and then grab the HTTP_X_SSL_SERVER_S_DN variable(s) via the owa_util.get_cgi_env in the APEX page to pull the value out with an anonymous block in an On Load - Before Header process like this:
    DECLARE
      lUserName VARCHAR2(100);
    BEGIN
      SELECT NVL(owa_util.get_cgi_env('HTTP_X_SSL_SERVER_S_DN'), 'NOTHING HERE')
        INTO lUserName
        FROM DUAL;
    END;
    What we were not totally sure about was the Rewrite rules and putting the variables into the request header: how and where to put the environment variables so they are accessible to the dads.conf PlsqlCGIEnvironmentList parameter, which in turn makes them accessible to APEX. We're still not 100% sure this is the correct method, but it's working. We don't recall reading in any of the APEX docs, APEX forum threads, or other documentation about needing Rewrite rules and conditions to put the CGI environment variables into the request header to make them accessible to APEX. So that seems to have been our missing piece of the puzzle.
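For reference, the name change at work here follows the standard CGI convention: a request header becomes an environment variable by upper-casing it, replacing hyphens with underscores, and prefixing HTTP_. A tiny sketch of that mapping (our own helper function, not an Oracle API):

```python
def to_cgi_name(header: str) -> str:
    """Map a request-header name to its CGI environment-variable name."""
    return "HTTP_" + header.upper().replace("-", "_")

# The header added by the RewriteRule/RequestHeader lines above:
assert to_cgi_name("X-SSL-SERVER-S-DN") == "HTTP_X_SSL_SERVER_S_DN"
```

This is why the dads.conf entry lists HTTP_X_SSL_SERVER_S_DN rather than the header name itself.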
    Anyway, I think we're okay for the moment, and maybe this thread will help someone else in the future. Thanks for your help, John; I will give you a helpful plug on the forum for this thread. BTW, I do have your book, so it was nice to see someone as advanced with APEX as yourself reply to the posting.
    Thanks
    Bob

  • Monitoring Related Queries - BPIM

    Dear All,
    We are planning to implement Business Process & Interface Monitoring - the first phase mainly involving application-specific monitoring and interface monitoring. I have a few queries on the same. It would be great if you could help with them:
    1) Our present DB is about 35 TB. If we implement BPMon in SolMan, how can we make sure that the performance of monitored systems like ECC, SRM, etc. is not impacted while data is collected by the local CCMS and then passed to the SolMan central CCMS? There could be thousands of open sales orders globally at various locations, so the data collection could have some impact on system performance.
    2) What are the best practices and recommendations on the BPMon side, specifically for cross-application monitoring like ABAP dumps, update errors, and batch files? I have links to the SAP standard slides, so please don't share the ones from the marketplace.
    3) Any strategic document from your side showing how this was proposed/implemented in some project/client with real-time examples, as that will give more clarity.
    4) Escalation management / corrective measure procedure - is any standard facility available for escalation management? Also, we are looking for a task-assignment kind of feature where alert actions can be assigned to various project team members by process experts for follow-up, etc.
    Thanks in advance.
    SM

    Hello Suchit,
    1. There is no guarantee that the collectors will not influence performance; however, they are written in a way which should not drastically affect the system. If you are in doubt, I would suggest running a chosen TBI report (in ST13), which more or less illustrates the data collector, and tracing the results (especially a performance trace).
    2. If you have the SAP slides you should be able to find the best practices. I believe that the goal of BPMon is to monitor application/process-specific errors. That's why, for example, the ABAP dumps monitor has a selection on what is dumping, so you can search only among errors belonging to your process. In our case we created a separate business process called cross-process monitoring, and we monitor there things that are critical but not possible to assign to just one process.
    3. The only "strategic" document is the technical documentation of BPMon, as SAP will not identify your critical business processes and tell you how to monitor them (at least not for free).
    4. That depends on what kind of tool you are using for incident management. You can use email notifications if you don't have a specific tool; otherwise, you might consider building your own BAPI to support your IM tool.
    BR
    Jolanta

  • Smart Sync Related Queries

    A. Are there any guidelines for defining parent-child relationships and associations when making BAPI wrappers?
        Any documents/notes/links?
    C. How can we separate the BAPI wrapper interface and the filtering rules? Can I bring my filtering/distribution rules defined in the backend
        to the MI middleware? If yes, how? Any documents?
    D. Which type is suitable for master data, and which type is suitable for transactional data?
        Is it OK to make every BAPI wrapper of type T51 (server-driven)?
        Are there any drawbacks to this approach?
    E. In the server-driven case, which has more load, the server or the middleware?
    regards
    anubhav

    Hi Anubhav,
        T51 is the best type of SyncBO not only for master data but also for transactional data. It is the one with the highest performance and the least load on both the backend and MI.
    In the T51 case, the backend informs MI about changes in the data (add/modify/delete). The backend is in the best position to identify a change, because the data resides there. The task of identifying data changes is quite easy in the backend (depending on the backend implementation), but if there are too many ways (too many applications) the data can be changed in the backend, the effort to catch all these cases will be higher.
    In the T01 case, MI identifies the changes in the data. Since the data primarily resides in the backend and changes happen there, the only way MI can identify the changes is by comparing every record from the backend with the replicated data in the middleware database. This process is very time-consuming and can lead to performance problems if the data volume is huge. The replication time will also be higher.
    For master data, in which changes seldom occur, the T01 replicator runs periodically (as scheduled), comparing the whole data set only to find that there are no changes. With T51, the replicator runs automatically only when the backend reports a change in the data.
    Even for transactional data, T51 is better in terms of performance. The delay before data is updated in the middleware after a change in the backend is very small, and it is even configurable in the middleware, so the latest data will always be there.
    Regards
    Ajith Chandran
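The cost difference between the two replication styles Ajith describes can be sketched in a few lines (plain Python, purely illustrative; this is not MI's actual implementation):

```python
backend = {"M1": 10, "M2": 20, "M3": 30}   # data of record in the backend
replica = {"M1": 10, "M2": 25}             # stale middleware copy

def t01_full_compare(backend, replica):
    """T01 style: scan every record; cost grows with total data volume."""
    changed = {k: v for k, v in backend.items() if replica.get(k) != v}
    deleted = [k for k in replica if k not in backend]
    return changed, deleted

def t51_push(changed_keys, backend):
    """T51 style: backend reports which keys changed; cost grows with the delta."""
    return {k: backend[k] for k in changed_keys}

changed, deleted = t01_full_compare(backend, replica)
assert changed == {"M2": 20, "M3": 30} and deleted == []
assert t51_push(["M2", "M3"], backend) == changed
```

T01 pays the full-scan cost on every scheduled run even when nothing changed, while T51 does work only proportional to the actual changes, which is why it carries the least load on both sides.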

  • Cisco Security Manager related queries

    In one of our projects, we are running CSM 3.2 on VMware ESX 3.5. There is a project in place to upgrade ESX to 4.1.
    This looks like a challenge, as CSM 3.2 is not supported on ESX 4.1. Cisco TAC has suggested upgrading CSM to 4.2.
    Queries:
    1. CSM 3.2 to 4.2 - will it involve additional license cost?
    2. If we upgrade CSM 3.2 to 4.2 and then upgrade ESX from 3.5 to 4.1, will it be back immediately, without anything required to be done on the CSM end, once VMware is upgraded?
    3. Is there any other preferred solution/workaround to manage the situation?
    4. If we have to move CSM from one ESX host to another, what would be the steps involved to retain the same configuration and logs?
    Regards,
    Nitin

    To migrate from 3.2 to 4.2, you will need to acquire a 4.2 license. The part number depends on how many devices your 3.2 installation is licensed for; please refer to Table 2 of the announcement. Your Cisco reseller or partner can provide you a quote, but if you search the Internet for that part number you can see typical costs.
    The upgrade procedure would be to first move to the interim step of CSM 4.0 and then to 4.2; please refer to the upgrade guide. Also see the section in that guide on moving to a new server for how to handle your ESX upgrade/migration.

  • IRecruitment Related Queries

    Hi,
    1. Where is the Post Advert button located on the vacancy details page, and what is its functionality? I am unable to view that button.
    2. Once I fill in the vacancy details, the page moves on to the "Job Posting Details" page. Is it the same in your case?
    3. Is there an option called source type available on the create candidate page? If not, where do I mention the source type?
    4. Where do I configure the various source types?
    5. Is the configuration for "skills" on the "Create Vacancy" page and the "Create Candidate" page the same? Because while creating a vacancy the list of skills that appears is the list of competencies, but when I create a candidate I don't get the list of competencies; the LOV is blank.
    Thanks,
    Angelica.

    Hi Angelica,
    1. The Post Advert button is located on the "Job Posting" page; it is created automatically if you previously define a site where you want to advertise your job.
    Depending on how you defined your relations with the external site, your vacancy will be posted to that site according to the start/end dates.
    2. It depends on your version; in the latest versions, after the vacancy details you get the vacancy skill requirements page.
    3. There is an option called source type when creating a candidate. This is a little tricky: when a candidate is filed by an agency, the agency is populated automatically. If a recruiter updates the candidate details (from version IRC D), they can also fill in the source type (if you can't see the field, look at personalization).
    5. The vacancy skill list does not populate automatically in the candidate details. You need to define for candidates which skills you wish them to populate.
    My advice: please read the iRecruitment implementation guide carefully, since the module implementation is not straightforward.
