About the separator in the database

Hi friends,
I have a question about the Oracle database driver.
Which method can I use to fetch the separator between a schema and its objects? I know it is ".", as in scott.emp.
It seems that the driver doesn't offer a method to get the schema separator.
Which method can I use to fetch the catalog separator, as in "select * from scott.emp@myDBLINK"?
It seems that the catalog separator returned by the method is wrong: it is NULL, but it should actually be "@".
Thanks!

1) What database driver are you talking about? JDBC? ODBC? OLE DB? ODP.Net? Something else?
2) The @ sign signifies the presence of a database link. It's entirely possible that whatever generic API you're discussing doesn't have a method to get the @ sign because different databases have rather different syntax for referring to objects on remote databases.
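If the driver in question is JDBC, java.sql.DatabaseMetaData is where such lookups live; here is a minimal sketch (the connection URL and credentials are placeholders). Note there is no schema-separator method in the interface at all, and getCatalogSeparator() reflects whatever the vendor driver chooses to report:

import java.sql.*;

public class SeparatorInfo {
    public static void main(String[] args) throws Exception {
        // URL and credentials are placeholders for an Oracle thin-driver connection.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger")) {
            DatabaseMetaData md = conn.getMetaData();
            System.out.println("catalog separator: " + md.getCatalogSeparator());
            System.out.println("catalog term:      " + md.getCatalogTerm());
            System.out.println("schema term:       " + md.getSchemaTerm());
            // There is no getSchemaSeparator(): standard SQL fixes it as ".",
            // and the dblink "@" is Oracle-specific syntax, as noted above.
        }
    }
}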
Justin

Similar Messages

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
1 - How exactly does the "On demand" cache group policy work? I know that an online cache group stores no data in the CDB and makes direct requests to the back end from the device, that DCN is based on updates pushed from the back end, and that scheduled is based on a time period, but I don't understand how "on demand" exactly works, and why it has a time period too.
2 - Is it possible to query the cache database tables to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]. Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
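To check the cached data programmatically rather than through the Enterprise Explorer, here is a minimal JDBC sketch; it assumes the jConnect driver is on the classpath and that the profile defaults above (host, port, dba/sql) are unchanged:

import java.sql.*;

public class ListCacheTables {
    public static void main(String[] args) throws Exception {
        // Defaults from the connection profile above; change them if you altered them at install time.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:5200/default", "dba", "sql")) {
            DatabaseMetaData md = conn.getMetaData();
            // MBO cache tables follow the "D1" + package + version + MBO naming convention.
            try (ResultSet rs = md.getTables(null, null, "D1%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}

DatabaseMetaData.getTables() takes a SQL LIKE pattern, so "D1%" picks up the cache tables named above; from there a plain SELECT against any of them shows what SUP has stored.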

  • About the new optional Database related API for CLDC

    Hi friends...
As I know, there are different optional Java ME APIs for the CDC toolkit, but for implementing SQL in Java ME programming using the CLDC toolkit,
is there any JDBC optional API by Sun Microsystems,
such as the SQL support available for CDC? Please tell me.
Please also tell me about any API besides RMS (Record Management System).

Yes, the .Net framework is supported. Basically any programming is still supported, and that doesn't really change. MDW will work exactly the same. The biggest difference between the two databases is that Oracle Lite works well with Oracle SQL, while SQLite is only SQL92 compatible (and can't handle a right outer join, I think, so you would have to modify the SQL for this; not that hard to do, see the sketch below). Berkeley DB/SQLite doesn't support stored procedures, but you can code your own functions and procedures before building the Berkeley DB library and they work the same way. I am playing with this feature now. Synchronization will work exactly the same way, so you don't have to change your knowledge of the Mobile Server. If you currently have a WinCE Oracle Lite deployment, all you have to do for sync is change the platform to Windows Mobile SQLite. I know Oracle Support is working on a document note on how to extend platform support. I have posted an article on working with Berkeley DB.
    http://www.rekounas.org/2010/08/26/getting-started-with-berkeley-db-and-oracle-mobile-server/
I have also ramped up the number of posts I have been creating because of this announcement. BTW, I am not with Oracle; I am an independent consultant who has worked on many Oracle Lite and Mobile Server projects. Overall, this is exciting news and I do like the direction, but I do have a soft spot for Oracle Lite. I thought it was a fantastic product that was very powerful for a small database, but with so many features the footprint was just too large, which is ultimately why the decision was made to go with BDB. I will find out if I can post the presentation that I did last week, as it gives good insight into what Oracle is doing.
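On the right-outer-join point, the usual workaround is to swap the operands and use LEFT OUTER JOIN instead; a hedged sketch against an in-memory database, assuming the sqlite-jdbc driver is on the classpath (table names are illustrative):

import java.sql.*;

public class JoinRewrite {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement st = conn.createStatement()) {
            st.execute("create table dept (id integer primary key, name text)");
            st.execute("create table emp (name text, dept_id integer)");
            st.execute("insert into dept values (1, 'SALES')");
            st.execute("insert into dept values (2, 'HR')");
            st.execute("insert into emp values ('ALICE', 1)");
            // 'emp RIGHT OUTER JOIN dept' is not supported here, so swap the operands:
            try (ResultSet rs = st.executeQuery(
                    "select d.name, e.name from dept d left outer join emp e on e.dept_id = d.id")) {
                while (rs.next())
                    System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
        }
    }
}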

How to get health and performance information about the services running on devices connected to System Center?

    Hi All,
I want to know how to get health and performance information, and information about the services running on devices connected to System Center, into my C# application. I also need to know about the databases connected to System Center.
    I will appreciate your feedback
    Thank you

    Hi,
You can configure a service monitor for the required service on the server.
Refer to the link below for how to configure service monitoring:
http://www.bictt.com/blogs/bictt.php/2011/03/17/scom-monitoring-a-service-part3
You can use the SCOM SDK to connect to the SCOM server from C# and get the required information:
http://msdn.microsoft.com/en-us/library/hh329086.aspx
You can find the database name in the registry path below on the management server:
    HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup\DatabaseName
    Regards
    sridhar v

How to get information about the client machine at database level

How to get information about the client machine at database level, using a 10g database and 10g Application Server
We have developed an application using Oracle Forms 10g with
Oracle Database 10g and Application Server 10g.
The application uses a single Oracle user name to connect to the database,
whereas at the application level there are different users (these are not database users).
Now how can we get information about the user, his machine etc. at database level? Earlier, in 6i/8i, we used to get it by using
USERENV('TERMINAL')
We had written triggers on tables on insert/update, where we used to update a database field "last user terminal" with USERENV('TERMINAL'),
but now this information comes out as the machine name of the application server, whereas we want it to be the machine name of the client. Any way out?
    thanks
    Chaand Kackria

    hi, you can use the sys_context function, like this:
    select sys_context('userenv','current_user'),
         sys_context('userenv','os_user'),
         sys_context('userenv','host'),
         sys_context('userenv','ip_address'),
         sys_context('userenv','instance'),
         sys_context('userenv','sessionid'),
         sys_context('userenv','terminal')
    from dual;
Is this what you're looking for?
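If the application connects through a middle tier, another hedged option (names are illustrative) is to have the app tag each database session with DBMS_SESSION.SET_IDENTIFIER; triggers can then read the tag back with SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER') instead of USERENV('TERMINAL'):

import java.sql.*;

public class TagSession {
    // Called by the middle tier right after it grabs a (possibly pooled) connection.
    static void tagSession(Connection conn, String appUser, String clientHost) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(
                "begin dbms_session.set_identifier(?); end;")) {
            cs.setString(1, appUser + "@" + clientHost);
            cs.execute();
        }
        // Table triggers can now record SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'),
        // which carries the application user/client machine rather than the app server name.
    }
}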

  • About listeners for different databases in the same ORACLE_HOME

Hello experts, I have a doubt.
I have installed 3 different databases on the same server, whose OS is SUSE Enterprise 10. At this moment I have just one listener that registers all the services. The number of connections is big for two of the three databases, so I do not know what the best practices for this environment might be. I have some questions I would like your suggestions on:
DB version: 11.1.0.7
Should I have three listeners, one for each database?
Is it a good practice to have only one listener for multiple instances on the same server?
Are there differences between having only one listener for all the instances and having one listener per instance?
What about the performance?
Thank you in advance; I hope you can guide me.

Best practice is not to have multiple databases on one server, depending on resources.
Should I have three listeners, one for each database?
No. That is pretty undesirable and requires more administration.
Is it a good practice to have only one listener for multiple instances on the same server?
Yes. Remember the listener is only a broker, and there are no permanent connections between client, listener and database.
Are there differences between having only one listener for all the instances and having one listener per instance?
Yes. The latter is an administration nightmare.
    What about the performance?
    Who cares? The only factors are
    listener.log becomes too big --> set up proper cleanup procedures
    The number of semaphores is exceeded (Unix) or no more threads can be created (Microsoft winblows), which will give rise to ora-12500.
    Sybrand Bakker
    Senior Oracle DBA

  • About the Database Solution

Hello experts, I have a question about the Solution Database. I installed Solution Manager 7.0 EHP1 SP24 and the Service Desk is now working, but I don't know how to put the Solution Database into operation. Do I have to install TREX on the NW server and connect it to the Solution Manager, or can I use transaction IS01 to create solutions? I have no clue here, so I would like to know the right steps to get a Solution Database running in my Solution Manager.
    Thanks,
    Paul Hurtado

    Hello Paul,
    Please review this Blog.
    /people/dolores.correa/blog/2007/10/06/service-desk-solution-database
    Regards,
    Paul

Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

Question about the sentence in bold. What is the meaning behind this comment?
How would you separate the role of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which has those as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

My concern is this: they have a 2-3k base of XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which
worries me based on what I've read here:
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large
    depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by
    default Windows clients will timeout after 15 seconds while trying to download a CRL. Additionally,
    CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So,
    aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence, the introduction of Online Certificate
    Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server/client components to OCSP: The OCSP responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status
    requests from OCSP Clients. When the OCSP Responder receives the request from the client it then needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses
    for the same request. If it does, it can then send that response to the client. If there is no cached response, the OCSP Responder then checks to see if it has the CRL issued by the CA cached locally on the OCSP. If it does, it can check the revocation status
    locally, and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP does not have the CRL cached locally, the OCSP Responder
    can retrieve the CRL from the CDP locations listed in the certificate. The OCSP Responder then can parse the CRL to determine the revocation status, and send the appropriate response to the client.
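For what it's worth, application code can steer this trade-off too. Here is a hedged Java sketch (Java 8+; the trust store and certificate file names are placeholders) that validates a chain with the platform's PKIX revocation checker, which by default tries OCSP first and falls back to CRLs:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.*;
import java.util.Collections;
import java.util.EnumSet;

public class RevocationCheck {
    public static void main(String[] args) throws Exception {
        // truststore.jks and leaf.cer are placeholders: the CA trust store and the
        // end-entity certificate to check (its issuer must be a trust anchor here).
        KeyStore trust = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream("truststore.jks")) {
            trust.load(in, "changeit".toCharArray());
        }
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate leaf;
        try (FileInputStream in = new FileInputStream("leaf.cer")) {
            leaf = cf.generateCertificate(in);
        }
        CertPath path = cf.generateCertPath(Collections.singletonList(leaf));

        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker rc = (PKIXRevocationChecker) cpv.getRevocationChecker();
        // Default behaviour is OCSP first with CRL fallback; uncomment to invert it:
        // rc.setOptions(EnumSet.of(PKIXRevocationChecker.Option.PREFER_CRLS));
        PKIXParameters params = new PKIXParameters(trust);
        params.addCertPathChecker(rc);
        cpv.validate(path, params);   // throws CertPathValidatorException if revoked
        System.out.println("chain valid and not revoked");
    }
}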

About the web interface of Oracle Database Express Edition

    Hi
I want to execute an export from the web interface: I click on the "data" link, I choose a table object, and then I click on the "download" link, but the exported file in CSV format contains only the rows I see on the web interface.
Is it possible to export all rows of the table object to a CSV file?
Thanks

    There are many ways to skin that cat, i.e. just using sqlplus and spooling to a file:
    set pagesize 0 verify off feedback off linesize 500 trimspool on
    spool filename.csv
    select <col1> || ',' || <col2>[|| ','||<colN>] as csvout from <table>;
    spool off
If there are date (or any of the timestamp) datatypes, you'll want to be specific about the output format, i.e. use to_char(<col>, 'yyyymmdd hh24:mi:ss')...
You may also want [var]char column values wrapped in single quotes; just add them in, i.e. ...''''||<col>||''''. It takes two single quotes inside a literal to get one in the output.
There will still be some stuff in the spool file needing cleanup, like the 'spool off' command and an echo of the SQL if you're running sqlplus interactively; I don't think there's any way to turn off that echo, as 'set echo off' doesn't do it the way one would think.
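If sqlplus isn't a hard requirement, the full table can also be dumped from a small JDBC program; a hedged sketch (URL, credentials and table name are placeholders, and the Oracle JDBC driver is assumed to be on the classpath):

import java.io.PrintWriter;
import java.sql.*;

public class CsvExport {
    public static void main(String[] args) throws Exception {
        // Connection details and table name are placeholders; adjust for your XE install.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:XE", "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM emp");
             PrintWriter out = new PrintWriter("emp.csv")) {
            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();
            // Header row, then every row of the table, not just the page shown in the web UI.
            for (int i = 1; i <= cols; i++)
                out.print((i > 1 ? "," : "") + md.getColumnName(i));
            out.println();
            while (rs.next()) {
                for (int i = 1; i <= cols; i++) {
                    String v = rs.getString(i);
                    // Quote values so embedded commas/quotes don't break the CSV.
                    out.print((i > 1 ? "," : "")
                            + (v == null ? "" : "\"" + v.replace("\"", "\"\"") + "\""));
                }
                out.println();
            }
        }
    }
}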

A question about the HA installation on ECC 6.0

    Hi Experts,
We are about to implement a project with an HA environment on ECC 6.0 in the near future, involving just the ABAP stack. After reading the Installation Guide, I still have several questions related to the procedure of an HA installation.
In the guide document, I found the following steps for realizing HA on ECC 6.0:
    1. Run SAPinst to install the central services instance (ASCS) using the virtual host name on the primary cluster node, host A.
    2. Prepare the standby node, host B, making sure that it meets the hardware and software requirements and it has all the necessary file systems, mount points, and (if required) Network File System (NFS), as described in Preparing for Switchover .
3. Set up the user environment on the standby node, host B. For more information, see Creating Operating System Users and Groups Manually. Make sure that you use the same user and group IDs as on the primary node. Create the home directories of users and copy all files from the home directory of the primary node.
    4. Configure the switchover software and test that switchover functions correctly.
    5. Install the database instance on the primary node, host A.
    6. Install the central instance with SAPinst on the primary node, host A.
7. If required, install additional dialog instances with SAPinst to replicate the SAP system services that are not a SPOF. These nodes do not need to be part of the cluster.
My question is: does the standby node (host B in the above context) need to have the ASCS, database instance and central instance installed?
If host B does not need the database instance installed, what happens to the whole system when the primary cluster node (host A in the above context) totally crashes, for example from a power failure?

    Hi Rong,
    I would try to explain it in simple words...
My question is: does the standby node (host B in the above context) need to have the ASCS,
database instance and central instance installed?
If host B does not need the database instance installed, what happens to the whole system when
the primary cluster node (host A in the above context) totally crashes, for example from a power failure?
1. You don't need to install ASCS on node B. You are installing it using a VIRTUAL HOSTNAME, which represents the cluster, not an individual node. The VIRTUAL HOSTNAME is assigned to the cluster package, so whichever node owns the package will have the VIRTUAL HOSTNAME (it will switch on cluster switchover).
2. It is actually cluster package configuration magic. When node A is active, the cluster package owner is node A, so all mount points (which are on SAN disk) are mounted on node A. When you switch over the cluster, those packages are mounted on node B.
Sometimes a single cluster package is used (which includes mount points for the SAP instance + database directories). You can also use 2 cluster packages, separating the SAP and database directory structures.
Only OS-related directories should be on a server's local disk. All other application-related mount points should be on SAN disk configured in the "cluster package" (for example /sapmnt, /usr/sap, /oracle etc.).
You only need identical users and their environment settings on both nodes.
In simple words: when the primary node fails or crashes, only the users and their environment settings are lost. On the second node, because of the identical users and profiles, the same settings are available to bring up the SAP system. All your SAP and database data is intact, as it is on the SAN disk.
I hope your confusion is cleared now...
    Regards.
    Rajesh Narkhede

About the PIM (Personal Information Manager)

Hi
Hello friends,
I want to get very important information about PIM (Personal Information Manager),
which manages the personal database on handheld devices such as PDAs, mobiles, cells etc.
Also, I heard that PIM should be downloaded separately. I searched many times on Sun's product web site, but I can't find PIM.
Please tell me where I can download PIM.

Well, directly from the J2ME PIM website (http://developers.sun.com/mobility/apis/articles/pim/index.html) is this statement:
    "Personal information management (PIM) refers to the ability to manage in electronic form the kinds of personal data that broad classes of users want handy, such as appointment books, contact directories, and to-do lists."
    We're talking about accessing the actual PIM applications on the device (i.e. the Calendar, Contact List, etc. native to the specific phone/pda/device).
    the javax.microedition.pim package is an optional package, meaning it is not part of the core MIDP/CLDC APIs. So, yes, you have to download a separate package and place the library in your classpath in order to compile on your system.
Now the next problem you're going to run into is "does your target device support this optional package". There is a reference implementation for the PocketPC OS from IBM that you can find here: http://www-106.ibm.com/developerworks/library/j-pda-op . However, if you're trying to do this on a cellphone (for example), the phone's J2ME implementation has to support JSR 75 for this to work at all. I don't know where there is a definitive list of phones that support JSR 75, but I believe BlackBerrys with version 4.2 or newer (for example) do support this package. I believe some Motorola, Ericsson, Nokia and other phones support this optional package.
I read somewhere that the following line of code should tell you whether a device supports JSR 75 or not:
System.getProperty("microedition.io.file.FileConnection.version");
If this returns null then the system does not; if it returns a non-null string, then it does at some level. (Strictly speaking, that property covers the FileConnection half of JSR 75; the PIM half reports itself through the "microedition.pim.version" property.)
    HTH

  • A question about the impact of SQL*PLUS SERVEROUTPUT option on v$sql

    Hello everybody,
    SQL> SELECT * FROM v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0  Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
OS: Fedora Core 17 (x86_64), kernel 3.6.6-1.fc17.x86_64
I would like to ask a question about the SQL*Plus SET SERVEROUTPUT ON/OFF option and its impact on queries on views such as v$sql and v$session. Here is the problem.
Actually, I define three variables in SQL*Plus to store the sid, serial# and prev_sql_id columns from v$session, so that I can use them later, several times in different queries, while I'm still working in the current session.
    So, here is how I proceed
    SET SERVEROUTPUT ON;  -- I often activate this option as the first line of almost all of my SQL-PL/SQL script files
    SET SQLBLANKLINES ON;
    VARIABLE mysid NUMBER
    VARIABLE myserial# NUMBER;
    VARIABLE saved_sql_id VARCHAR2(13);
    -- So first I store sid and serial# for the current session
    BEGIN
        SELECT sid, serial# INTO :mysid, :myserial#
        FROM v$session
        WHERE audsid = SYS_CONTEXT('UserEnv', 'SessionId');
    END;
    PL/SQL procedure successfully completed.
    -- Just check to see the result
    SQL> SELECT :mysid, :myserial# FROM DUAL;
        :MYSID :MYSERIAL#
           129   1067
SQL>
Now, let's say that I want to run the following query as the last SQL statement within my current session:
SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
According to the Oracle® Database Reference 11g Release 2 (11.2) description of v$session
http://docs.oracle.com/cd/E11882_01/server.112/e25513/dynviews_3016.htm#REFRN30223
the column prev_sql_id contains the sql_id of the last SQL statement executed for the given sid and serial#, which in the case of my example will be the above-mentioned SELECT query on the employees table. As a result, right after the SELECT statement on the employees table I run the following:
    BEGIN
        SELECT prev_sql_id INTO :saved_sql_id
        FROM v$session
        WHERE sid = :mysid AND serial# = :myserial#;
    END;
    PL/SQL procedure successfully completed.
    SQL> SELECT :saved_sql_id FROM DUAL;
    :SAVED_SQL_ID
    9babjv8yq8ru3
SQL>
Having the value of sql_id, I'm supposed to find all information about the cursor(s) for my SELECT statement, and also its sql_text value, in v$sql. Yet here is what I get when I query v$sql with the stored sql_id:
    SELECT child_number, sql_id, sql_text
    FROM v$sql
    WHERE sql_id = :saved_sql_id;
    CHILD_NUMBER   SQL_ID          SQL_TEXT
0              9babjv8yq8ru3    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
Therefore, instead of
SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
for the value of sql_text I get the following:
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
which is of course not what I was expecting to find in v$sql for the given sql_id.
After a bit of googling I found the following thread on the OTN forum, where it had been suggested (well, I think maybe not exactly for the same problem) to turn off SERVEROUTPUT:
Problem with dbms_xplan.display_cursor
This was precisely what I did:
SET SERVEROUTPUT OFF
After that I repeated the whole procedure, and this time everything worked pretty well, as expected. I checked the SQL*Plus documentation for SERVEROUTPUT
and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql or v$session
are queried. I don't really understand the link, in terms of the impact one can have on the other, or rather why there is an impact at all.
Could anyone kindly make some clarification?
    thanks in advance,
    Regards,
    Dariyoosh

    >
and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql or v$session
are queried. I don't really understand the link, in terms of the impact one can have on the other, or rather why there is an impact at all.
    Hi Dariyoosh,
SET SERVEROUTPUT ON has the effect of executing dbms_output.get_lines after each and every statement, not only ones related to system views.
Here is what Tom Kyte explains on this page:
    Now, sqlplus sees this functionality and says "hey, would not it be nice for me to dump this buffer to screen for the user?". So, they added the SQLPlus command "set serveroutput on" which does two things
1) it tells SQLPLUS you would like it to execute dbms_output.get_lines after each and every statement. You would like it to do this network round trip after each call. You would like this extra overhead to take place (think of an install script with hundreds/thousands of statements to be executed -- perhaps, just perhaps, you don't want this extra call after every call)
2) SQLPLUS automatically calls the dbms_output API "enable" to turn on the buffering that happens in the package.
Regards.
    Al
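To make that round trip concrete from application code, here is a hedged JDBC sketch of roughly the extra call SQL*Plus issues after every statement when SERVEROUTPUT is ON (it uses DBMS_OUTPUT.GET_LINE rather than GET_LINES to avoid binding a PL/SQL table type):

import java.sql.*;

public class DrainDbmsOutput {
    // Roughly the extra round trip SQL*Plus makes after each statement
    // when SET SERVEROUTPUT ON is active.
    static void drain(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(
                "begin dbms_output.get_line(?, ?); end;")) {
            cs.registerOutParameter(1, Types.VARCHAR);  // the buffered line
            cs.registerOutParameter(2, Types.INTEGER);  // 0 = got a line, 1 = buffer empty
            while (true) {
                cs.execute();
                if (cs.getInt(2) != 0) break;
                System.out.println(cs.getString(1));
            }
        }
    }
}

That call is itself a statement executed in the session, which is exactly why it shows up as prev_sql_id instead of the SELECT on employees.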

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
I'm a newbie to BIEE. These days I am having a look at the BIEE architecture and components. In my next project there is some work on BIEE development based on an EBS application. I have some questions about the integration:
1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
2) In the BIEE administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
3) If physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
4) During the data transfer phase, if there is a very large volume of data to transfer, how is completeness kept? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data on the reports? Is there some transaction control in the ETL phase?
Could anyone give me some guidance? I would also very much appreciate any other information.
    Thanks in advance.

1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
You should consider OBI Applications here, which uses OBIEE as a reporting tool with different pre-built modules. Both 10g & 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
2) In the BIEE administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
It is independent of any source. This is OBIEE modeling, to create the RPD with all the layers. If you build it from scratch then you will need to create all the layers; if BI Apps is used then you get a pre-built RPD along with the other pre-built components.
3) If physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
BI Apps comes with pre-built ETL mappings, mainly for Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to have only ODI for further releases.
4) During the data transfer phase, if there is a very large volume of data to transfer, how is completeness kept? For example, if 1 million rows need to be transferred and users open a BIEE report when 50% is complete, can they see the new 50% of the data on the reports? Is there some transaction control in the ETL phase?
Users will still see old data, because it is good practice to turn on the cache and purge it after every load.
Refer to http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on google
    Hope this helps

  • About the template FSCM9.1 FP2 Peopletools 8.52.03 (v4 - July 2012)

    Hello,
    Just tested quickly this new template delivered 2 months ago (July 2012).
As far as I understand, it is just a recut of the one delivered in April 2012. At least it solves the main issue I reported in that other thread, About the template FSCM9.1 FP2 Peopletools 8.52.03 (v3), about the missing network prompt.
But I still have remarks/issues on the template FSCM9.1 FP2 Peopletools 8.52.03 (v4) released earlier in July 2012.
1. First of all, there are a lot of errors reported in /var/log/messages
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526918.883:3): avc:  denied  { read } for  pid=92 comm="restorecon" name="libc.so.6" dev=xvda2 ino=21 scontext=system_u:sys
    tem_r:restorecon_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=lnk_file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526918.910:4): avc:  denied  { execute } for  pid=92 comm="restorecon" path="/lib64/libc-2.5.so" dev=xvda2 ino=20 scontext=
    system_u:system_r:restorecon_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526921.489:5): avc:  denied  { read } for  pid=296 comm="pam_console_app" name="ld.so.cache" dev=xvda2 ino=94143 scontext=s
    ystem_u:system_r:pam_console_t:s0-s0:c0.c1023 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526921.489:6): avc:  denied  { getattr } for  pid=290 comm="pam_console_app" path="/etc/ld.so.cache" dev=xvda2 ino=94143 sc
    ontext=system_u:system_r:pam_console_t:s0-s0:c0.c1023 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526921.530:7): avc:  denied  { read } for  pid=293 comm="pam_console_app" name="libc.so.6" dev=xvda2 ino=21 scontext=system
    _u:system_r:pam_console_t:s0-s0:c0.c1023 tcontext=system_u:object_r:file_t:s0 tclass=lnk_file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526921.530:8): avc:  denied  { execute } for  pid=293 comm="pam_console_app" path="/lib64/libc-2.5.so" dev=xvda2 ino=20 sco
    ntext=system_u:system_r:pam_console_t:s0-s0:c0.c1023 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: input: PC Speaker as /class/input/input3
    Sep 13 05:02:55 localhost kernel: Initialising Xen virtual ethernet driver.
    Sep 13 05:02:55 localhost kernel: Error: Driver 'pcspkr' is already registered, aborting...
    Sep 13 05:02:55 localhost kernel: Floppy drive(s): fd0 is unknown type 15 (usb?), fd1 is unknown type 15 (usb?)
    Sep 13 05:02:55 localhost kernel: floppy0: Unable to grab IRQ6 for the floppy driver
    Sep 13 05:02:55 localhost kernel: lp: driver loaded but no devices found
    Sep 13 05:02:55 localhost kernel: md: Autodetecting RAID arrays.
    Sep 13 05:02:55 localhost kernel: md: Scanned 0 and added 0 devices.
    Sep 13 05:02:55 localhost kernel: md: autorun ...
    Sep 13 05:02:55 localhost kernel: md: ... autorun DONE.
    Sep 13 05:02:55 localhost kernel: EXT3 FS on xvda2, internal journal
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526929.896:9): avc:  denied  { execute } for  pid=965 comm="restorecon" path="/lib64/libc-2.5.so" dev=xvda2 ino=20 scontext
    =system_u:system_r:restorecon_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: kjournald starting.  Commit interval 5 seconds
    Sep 13 05:02:55 localhost kernel: EXT3 FS on xvda1, internal journal
    Sep 13 05:02:55 localhost kernel: EXT3-fs: mounted filesystem with ordered data mode.
    Sep 13 05:02:55 localhost kernel: kjournald starting.  Commit interval 5 seconds
    Sep 13 05:02:55 localhost kernel: EXT3 FS on xvdb1, internal journal
    Sep 13 05:02:55 localhost kernel: EXT3-fs: mounted filesystem with ordered data mode.
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526930.647:10): avc:  denied  { execute } for  pid=989 comm="setfiles" path="/lib64/libc-2.5.so" dev=xvda2 ino=20 scontext=
    system_u:system_r:setfiles_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526942.398:11): avc:  denied  { net_admin } for  pid=990 comm="setfiles" capability=12  scontext=system_u:system_r:setfiles
    _t:s0 tcontext=system_u:system_r:setfiles_t:s0 tclass=capability
    Sep 13 05:02:55 localhost kernel: hrtimer: interrupt took 35229469 ns
    Sep 13 05:02:55 localhost kernel: Adding 2104504k swap on /dev/xvda3.  Priority:-1 extents:1 across:2104504k SS
    Sep 13 05:02:55 localhost kernel: warning: process `kudzu' used the deprecated sysctl system call with 1.23.
    Sep 13 05:02:55 localhost kernel: Loading iSCSI transport class v2.0-870.
    Sep 13 05:02:55 localhost kernel: libcxgbi:libcxgbi_init_module: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
    Sep 13 05:02:55 localhost kernel: libcxgbi:ddp_setup_host_page_size: system PAGE 4096, ddp idx 0.
    Sep 13 05:02:55 localhost kernel: Chelsio T3 iSCSI Driver cxgb3i v2.0.0 (Jun. 2010)
    Sep 13 05:02:55 localhost kernel: iscsi: registered transport (cxgb3i)
    Sep 13 05:02:55 localhost kernel: NET: Registered protocol family 10
    Sep 13 05:02:55 localhost kernel: cnic: Broadcom NetXtreme II CNIC Driver cnic v2.2.14 (Mar 30, 2011)
    Sep 13 05:02:55 localhost kernel: Broadcom NetXtreme II iSCSI Driver bnx2i v2.6.2.3 (Jan 06, 2010)
    Sep 13 05:02:55 localhost kernel: iscsi: registered transport (bnx2i)
    Sep 13 05:02:55 localhost kernel: iscsi: registered transport (tcp)
    Sep 13 05:02:55 localhost kernel: iscsi: registered transport (iser)
    Sep 13 05:02:55 localhost kernel: iscsi: registered transport (be2iscsi)
    Sep 13 05:02:55 localhost kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team
    Sep 13 05:02:55 localhost kernel: warning: `mcstransd' uses 32-bit capabilities (legacy support in use)
    Sep 13 05:02:55 localhost kernel: type=1400 audit(1347526970.336:12): avc:  denied  { sys_tty_config } for  pid=1374 comm="consoletype" capability=26  scontext=system_u:system_r
    :consoletype_t:s0 tcontext=system_u:system_r:consoletype_t:s0 tclass=capability
    Sep 13 05:02:55 localhost kernel: RPC: Registered udp transport module.
    Sep 13 05:02:55 localhost kernel: RPC: Registered tcp transport module.
    Sep 13 05:02:55 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
    Sep 13 05:03:00 localhost automount[1769]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
    Sep 13 05:03:59 localhost kernel: type=1400 audit(1347527039.771:13): avc:  denied  { sys_tty_config } for  pid=2029 comm="consoletype" capability=26  scontext=system_u:system_r
    :consoletype_t:s0 tcontext=system_u:system_r:consoletype_t:s0 tclass=capability
    Sep 13 05:04:00 localhost NET[2061]: /sbin/dhclient-script : updated /etc/resolv.conf
    Sep 13 05:04:01 localhost kernel: IPv6 over IPv4 tunneling driver
    Sep 13 05:04:01 localhost NET[2219]: /opt/oracle/psft/vm/oraclevm-template.sh : updated /etc/resolv.conf
    Sep 13 05:04:08 localhost NET[2472]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
    Sep 13 05:06:08 localhost restorecond: Reset file context /etc/resolv.conf: system_u:object_r:etc_runtime_t:s0->system_u:object_r:net_conf_t:s0
    Sep 13 05:08:19 localhost kernel: Slow work thread pool: Starting up
    Sep 13 05:08:19 localhost kernel: Slow work thread pool: Ready
    Sep 13 05:08:19 localhost kernel: FS-Cache: Loaded
    Sep 13 05:08:19 localhost kernel: FS-Cache: Netfs 'nfs' registered for caching
    Sep 13 05:08:19 localhost kernel: svc: failed to register lockdv1 RPC service (errno 97).
...
Well, I don't know if it triggers other problems yet, but the last line could reveal an error within the /etc/hosts file, which has not been properly modified during deployment (especially IPv6, which probably should be removed):
    [root@psovmfscmfp2 /]# more /etc/hosts
    127.0.0.1       localhost.localdomain   localhost
    ::1     localhost6.localdomain6 localhost6
    192.168.1.150   psovmfscmfp2.phoenix.nga psovmfscmfp2
[root@psovmfscmfp2 /]#
2. Now about the COBOL
Although I chose to install Micro Focus, COBOL does not work. Sample COBOL processes such as PTPDBTST and PTPDTTST finish in error.
    The log is empty, here below the output from the file $PS_CFG_HOME/psft/pt/8.52/appserv/prcs/PRCSDOM/LOGS/stdout (psadm2) :
    =================================Error===============================
    Message:     Process 10899 is marked 'Initiated' or 'Processing' but can not detect status of PID
            Process Name: PTPDBTST
            Process Type: COBOL SQL
            Session Id:   9313
    =====================================================================
OprId = VP1
Note that I successfully tested AEs and SQRs.
Here is the command line fired, as seen from Process Monitor > Parameters (nga being my run control id):
PSRUN PTPDBTST ORACLE/F91TMPLT/VP1/OPRPSWD/nga/10899//0
I used the following trace setting on the PTPDBTST process parameters (override) to see what happens:
%%DBTYPE%%/%%DBNAME%%/%%OPRID%%/%%OPRPSWD%%/%%RUNCNTLID%%/%%INSTANCE%%//%%DBFLAG%%
But it does not generate more logs...
    I also use "RCCBL Redirect =1" in psappsrv.cfg (reconfigure and restart appdom), then start the COBOL through menu PeopleTools > Utilities > Debug > PeopleTools Test Utilities, and run a "Remote Call Test".
    I received "COBOL Program PTPNTEST aborted (2,-1) FUNCLIB_UTIL.RC_TEST_PB.FieldChange PCPC:2143 Statement:26", but it generated two empty files (PTPNTEST_VP1_0913064910.out and PTPNTEST_VP1_0913064910.err).
Next step: checking the folder $PS_HOME/cblbin. It is... er... empty... Does this mean the COBOLs have not been compiled? Hmmm, I'm pretty sure I replied 'yes' when prompted (I still have the screenshots)...
And we can see several folders dated today, and the license seems OK in the Micro Focus directories:
    [psadm1@psovmfscmfp2 tools]$ cd /opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    [psadm1@psovmfscmfp2 svrexp-51_wp4-64bit]$ ls -lrt
    total 264
    -r--r--r--  1 root root 10455 Nov 19  2009 ADISCTRL
    dr-xr-xr-x 10 root root  4096 Nov 19  2009 terminfo
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 xdb
    -r--r--r--  1 root root 11949 Nov 19  2009 eslmf-mess
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 include
    dr-xr-xr-x 17 root root  4096 Nov 19  2009 lang
    dr-xr-xr-x  4 root root  4096 Nov 19  2009 es
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 dynload
    drwxrwxrwx  2 root root  4096 Nov 19  2009 deploy
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 dynload64
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 dialog
    dr-xr-xr-x  2 root root  4096 Nov 19  2009 cpylib
    dr-xr-xr-x  8 root root 28672 Nov 19  2009 lib
    dr-xr-xr-x  3 root root  4096 Nov 19  2009 snmp
    dr-xr-xr-x  8 root root  4096 Nov 19  2009 src
    dr-xr-xr-x 28 root root  4096 Nov 19  2009 demo
    dr-xr-xr-x  6 root root  4096 Nov 19  2009 docs
    -rw-r--r--  1 root root    49 Sep 13 05:13 license.txt
    -r-xr-xr-x  1 root root 12719 Sep 13 05:13 install.orig
    -r-xr-xr-x  1 root root 13006 Sep 13 05:13 install
    dr-xr-xr-x  6 root root  4096 Sep 13 05:13 lmf
    dr-xr-xr-x  2 root root  4096 Sep 13 05:13 aslmf
    dr-xr-xr-x  6 root root  4096 Sep 13 05:15 etc
    dr-xr-xr-x  4 root root 12288 Sep 13 05:15 bin
    [psadm1@psovmfscmfp2 svrexp-51_wp4-64bit]$ more license.txt
    I
    ORACLE-30DAYDEV64
01030 A0780 014A6 7980B A17C
So let's assume it has been properly installed, and let's compile the COBOLs. Here we go:
    [psadm1@psovmfscmfp2 svrexp-51_wp4-64bit]$ cd $PS_HOME/setup
    [psadm1@psovmfscmfp2 setup]$ ./pscbl.mak
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Convert all files for Unicode ....
    Conversion Summary for Source Codes in  :
         Source: /opt/oracle/psft/pt/tools/src/cbl/
         Target: /opt/oracle/psft/pt/tools/src/cblunicode/
          Number of Copy Libraries Read: 71
                         Modified:       71
                     Not Modified:       0
          Number of Programs Read:       44
                         Modified:       44
                     Not Modified:       0
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : All COBOL files were converted for Unicode successfully
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPCBLAE.cbl ...
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak: line 249: cob: command not found
    cp: cannot stat `PTPCBLAE.gnt': No such file or directory
    cp: cannot stat `PTPCBLAE.int': No such file or directory
    cp: cannot stat `PTPCBLAE.lst': No such file or directory
...
What about env. variables? COBDIR, COBPATH and COBOL do not appear anywhere in PATH...
    [psadm1@psovmfscmfp2 setup]$ env|grep -i cobol
[psadm1@psovmfscmfp2 setup]$
Let's set the env variables as we would expect them to be (page 27, step 17 of the given doc), and retry compiling the COBOL:
    [psadm1@psovmfscmfp2 setup]$ export COBDIR=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    [psadm1@psovmfscmfp2 setup]$ export LD_LIBRARY_PATH=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit/lib:$LD_LIBRARY_PATH
    [psadm1@psovmfscmfp2 setup]$ export PATH=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit/bin:$PATH
    [psadm1@psovmfscmfp2 setup]$ ./pscbl.mak
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Convert all files for Unicode ....
    Conversion Summary for Source Codes in  :
         Source: /opt/oracle/psft/pt/tools/src/cbl/
         Target: /opt/oracle/psft/pt/tools/src/cblunicode/
          Number of Copy Libraries Read: 71
                         Modified:       71
                     Not Modified:       0
          Number of Programs Read:       44
                         Modified:       44
                     Not Modified:       0
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : All COBOL files were converted for Unicode successfully
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPCBLAE.cbl ...
Micro Focus LMF - 010: Unable to contact license manager.
This product has been unable to contact the License Manager.
Execution of this product has been terminated.
This product cannot execute without the License Manager. Contact your license administrator or refer to the 'Information Messages' chapter of the License Management Facility Administrator's Guide.
    cob64: error(s) in compilation: PTPCBLAE.cbl
    cp: cannot stat `PTPCBLAE.gnt': No such file or directory
    cp: cannot stat `PTPCBLAE.int': No such file or directory
    cp: cannot stat `PTPCBLAE.lst': No such file or directory
...
OK, maybe a bit better: at least it is trying to contact the LMF. Probably the LMF is not started. Let's try to start it:
    [root@psovmfscmfp2 microfocus]# ./mflmman
    MF-LMF:Thu Sep 13 07:19:37 2012: LMF Starting
[root@psovmfscmfp2 microfocus]#
Good, it is starting now, which means it wasn't before (sic). Now retry the compile:
    [psadm1@psovmfscmfp2 setup]$ export COBDIR=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    [psadm1@psovmfscmfp2 setup]$ export LD_LIBRARY_PATH=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit/lib:$LD_LIBRARY_PATH
    [psadm1@psovmfscmfp2 setup]$ export PATH=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit/bin:$PATH
    [psadm1@psovmfscmfp2 setup]$ ./pscbl.mak
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Convert all files for Unicode ....
    Conversion Summary for Source Codes in  :
         Source: /opt/oracle/psft/pt/tools/src/cbl/
         Target: /opt/oracle/psft/pt/tools/src/cblunicode/
          Number of Copy Libraries Read: 71
                         Modified:       71
                     Not Modified:       0
          Number of Programs Read:       44
                         Modified:       44
                     Not Modified:       0
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : All COBOL files were converted for Unicode successfully
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPCBLAE.cbl ...
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPCURND.cbl ...
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPDBTST.cbl ...
    <snipped>
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : Compiling PTPWLGEN.cbl ...
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : All COBOL programs have been successfully compiled.
    /opt/oracle/psft/pt/tools/setup/pscbl_mf.mak : The COBOL executables were copied to /opt/oracle/psft/pt/tools/cblbin
    rm: cannot remove `/opt/oracle/psft/pt/apptools/src/cblunicode/CECCRLP1.cbl': Permission denied
rm: cannot remove `/opt/oracle/psft/pt/apptools/src/cblunicode/CECCRLUP.cbl': Permission denied
It looks better; I think the last lines marked "Permission denied" can safely be ignored.
Those files are owned by psadm3 and are read-only for other users (sic). But of more concern, I'm wondering why it looks into apptools (???) whereas I'm using psadm1 (tools only, COBPATH=/opt/oracle/psft/pt/tools/cblbin).
Anyway, it seems the *.gnt files required to run the COBOL programs are now in cblbin:
    [psadm1@psovmfscmfp2 setup]$ ls /opt/oracle/psft/pt/tools/cblbin
    PTPCBLAE.gnt  PTPDTTST.gnt  PTPECOBL.gnt  PTPLOGMS.gnt  PTPRATES.gnt  PTPSQLGS.gnt  PTPTESTU.gnt  PTPTSCNT.gnt  PTPTSLOG.gnt  PTPTSTBL.gnt  PTPTSWHR.gnt
    PTPCURND.gnt  PTPDTWRK.gnt  PTPEFCNV.gnt  PTPMETAS.gnt  PTPRUNID.gnt  PTPSQLRT.gnt  PTPTESTV.gnt  PTPTSEDS.gnt  PTPTSREQ.gnt  PTPTSUPD.gnt  PTPUPPER.gnt
    PTPDBTST.gnt  PTPDYSQL.gnt  PTPERCUR.gnt  PTPNETRT.gnt  PTPSETAD.gnt  PTPSTRFN.gnt  PTPTFLDW.gnt  PTPTSEDT.gnt  PTPTSSET.gnt  PTPTSUSE.gnt  PTPUSTAT.gnt
    PTPDEC31.gnt  PTPECACH.gnt  PTPESLCT.gnt  PTPNTEST.gnt  PTPSHARE.gnt  PTPTEDIT.gnt  PTPTLREC.gnt  PTPTSFLD.gnt  PTPTSTAE.gnt  PTPTSWHE.gnt  PTPWLGEN.gnt
[psadm1@psovmfscmfp2 setup]$
Now let's try to link the COBOLs:
    [psadm1@psovmfscmfp2 setup]$ ./psrun.mak
    ./psrun.mak - linking PSRUN for oel-5-x86_64, Version 2.6.32-200.13.1.el5uek ...
    ./psrun.mak - Successfully created PSRUN in directory: /opt/oracle/psft/pt/tools/bin
    ./psrun.mak - linking PSRUNRMT for oel-5-x86_64, Version 2.6.32-200.13.1.el5uek ...
    ./psrun.mak - Successfully created PSRUNRMT in directory: /opt/oracle/psft/pt/tools/bin
[psadm1@psovmfscmfp2 setup]$
The .err files are empty:
    -rw-r--r-- 1 psadm1 oracle     0 Sep 13 07:26 psrun.err
-rw-r--r-- 1 psadm1 oracle     0 Sep 13 07:26 psrunrmt.err
So far, so good. We are able to test the sample COBOL again... until the next failure.
Yes, unfortunately, it fails again. But the good thing is, the log is not empty now:
PSRUN: error while loading shared libraries: libcobrts64.so: cannot open shared object file: No such file or directory
That's probably coming from libraries missing from the psprcs.cfg configuration. Let's use the same env. variable settings as for psadm1 when compiling the COBOLs.
    [psadm2@psovmfscmfp2 appserv]$ export COBDIR=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    [psadm2@psovmfscmfp2 appserv]$ export LD_LIBRARY_PATH=$COBDIR/lib:$LD_LIBRARY_PATH
    [psadm2@psovmfscmfp2 appserv]$ export PATH=$COBDIR/bin:$PATH
[psadm2@psovmfscmfp2 appserv]$ ./psadmin
Reconfigure, restart prcs and re-test... SUCCESSFULLY!
    Log from PTPDBTST process shows :
    SUCCESSFUL DATABASE CONNECTION
SUCCESSFUL DATABASE DISCONNECT
What a pain!
    I did not go further, but we could expect the same issue within the Application COBOLs, since the cblbin directory is also empty out there.
According to psprcsrv.env, there are two values in COBPATH, and the one for the application COBOLs is empty:
    [psadm2@psovmfscmfp2 PRCSDOM]$ more psprcsrv.env
    INFORMIXSERVER=192.168.1.149
    COBPATH=/opt/oracle/psft/pt/apptools/cblbin:/opt/oracle/psft/pt/tools/cblbin
    PATH=/opt/oracle/psft/pt/apptools/bin:/opt/oracle/psft/pt/apptools/bin/interfacedrivers::/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit/bin:/opt/oracle/psft/pt/tools/appserv:/opt
    /oracle/psft/pt/tools/setup:/opt/oracle/psft/pt/tools/jre/bin:/opt/oracle/psft/pt/bea/tuxedo/bin:.:/opt/oracle/psft/pt/oracle-client/11.2.0.x/bin:/opt/oracle/psft/pt/oracle-clie
    nt/11.2.0.x/perl/bin:/usr/local/bin:/bin:/usr/bin:/opt/oracle/psft/pt/tools/bin:/opt/oracle/psft/pt/tools/bin/sqr/ORA/bin:/opt/oracle/psft/pt/tools/verity/linux/_ilnx21/bin:/hom
    e/psadm2/bin:.
    [psadm2@psovmfscmfp2 PRCSDOM]$ ls /opt/oracle/psft/pt/apptools/cblbin
    [psadm2@psovmfscmfp2 PRCSDOM]$ ls /opt/oracle/psft/pt/tools/cblbin
    PTPCBLAE.gnt  PTPDTTST.gnt  PTPECOBL.gnt  PTPLOGMS.gnt  PTPRATES.gnt  PTPSQLGS.gnt  PTPTESTU.gnt  PTPTSCNT.gnt  PTPTSLOG.gnt  PTPTSTBL.gnt  PTPTSWHR.gnt
    PTPCURND.gnt  PTPDTWRK.gnt  PTPEFCNV.gnt  PTPMETAS.gnt  PTPRUNID.gnt  PTPSQLRT.gnt  PTPTESTV.gnt  PTPTSEDS.gnt  PTPTSREQ.gnt  PTPTSUPD.gnt  PTPUPPER.gnt
    PTPDBTST.gnt  PTPDYSQL.gnt  PTPERCUR.gnt  PTPNETRT.gnt  PTPSETAD.gnt  PTPSTRFN.gnt  PTPTFLDW.gnt  PTPTSEDT.gnt  PTPTSSET.gnt  PTPTSUSE.gnt  PTPUSTAT.gnt
    PTPDEC31.gnt  PTPECACH.gnt  PTPESLCT.gnt  PTPNTEST.gnt  PTPSHARE.gnt  PTPTEDIT.gnt  PTPTLREC.gnt  PTPTSFLD.gnt  PTPTSTAE.gnt  PTPTSWHE.gnt  PTPWLGEN.gnt
[psadm2@psovmfscmfp2 PRCSDOM]$
The directory "/opt/oracle/psft/pt/apptools/cblbin" is owned by psadm3 and hosted on the database server (NFS mounted), so I assume we also need to set proper env variable values and compile the COBOLs before being able to use them.
To summarize what I did to make the COBOLs work on this PSOVM:
    1. As root, start LMF (this has to be done only once)
    cd /opt/oracle/psft/pt/cobol/microfocus
    ./mflmman
2. As psadm1, set the proper env. variables and compile (setting the env variables has to be done each time you want to compile COBOLs)
    export COBDIR=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    export LD_LIBRARY_PATH=$COBDIR/lib:$LD_LIBRARY_PATH
    export PATH=$COBDIR/bin:$PATH
    cd $PS_HOME/setup
    ./pscbl.mak
    ./psrun.mak
3. As psadm2, set the proper env. variables, reconfigure psprcs.cfg, and restart (setting the env variables has to be done each time you want to start the process scheduler, so it is probably better to append these to .bash_profile)
    export COBDIR=/opt/oracle/psft/pt/cobol/svrexp-51_wp4-64bit
    export LD_LIBRARY_PATH=$COBDIR/lib:$LD_LIBRARY_PATH
    export PATH=$COBDIR/bin:$PATH
    cd $PS_HOME/appserv
    ./psadmin
    4. Same as step 2, but with user psadm3.
    HTH,
    Nicolas.
PS: will it be the same issue on the HCM template delivered at the same time? To be tested as well.
PS2: and yes, I tested it twice before posting; the result is the same.

Fortunately, the COBOL issue does not exist on the PSOVM HCM9.1 FP2 PT8.52.06 delivered in July 2012 (v3). The COBOLs are properly compiled (tools and app COBOLs), cblbin is not empty, and they run successfully on the first try.
    Nicolas.

  • How to calculate the size of database (a different one)

    Hello Friends,
I have been told to move the data from one server to another machine (cloning is one very good option, I agree, but I expect a different answer regarding exports and imports). So how should I go about the task? The destination machine has unlimited space, so that's not a problem. My questions are:
1) How should I start the task? (I generally start by studying the structures and their sizes. Is that OK?)
2) If I am using a Unix machine and there is a limitation that my server will not support file sizes of more than 2 GB, what should I do?
3) Should I go for a full database backup or fragment the task? If I fragment it, there are many schemas, so it will become tedious. But a full backup will exceed the OS size limitation. What should be done?
4) Is there any way I can go through a dump file to find out the database objects present inside it and note the related dependencies?
    Please respond.
    Regards,
    Ravi Duvvuri

1) They are Unix machines. So will the size problem occur (if there is a limitation)? How to overcome that?
R.- If the OSes are the same version you won't have any problem. Regarding storage, you only need disk space to store the datafiles, redo logs and controlfiles, nothing else.
2) I am trying the export/import approach. Though there are other good methods, I just want to explore the full capabilities of exp/imp.
R.- Recreating the controlfiles is more effective and faster if the OSes are the same version.
3) And the Oracle version is 9iR2. (If I have to perform this on 8i (both source and destination), will the methods differ?)
R.- The method is the same for 8i/9i/10g.
How should I go about doing this export/import?
R.- To use this method you have to have the original database started.
To recreate the controlfile without a datafile that you mentioned, you just leave it out of the CREATE CONTROLFILE statement, and that's it.
For example: if your database has 8 datafiles and you have only 7, you include only 7 in the CREATE CONTROLFILE statement.
    Joel Pérez
    http://otn.oracle.com/experts
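As a planning aid for the 2 GB question, here is a hedged JDBC sketch (connection details are placeholders; the account needs SELECT on DBA_SEGMENTS) that sizes each schema so you can decide which ones must be split across multiple dump files:

import java.sql.*;

public class SchemaSizes {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; run as a DBA-privileged user.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "system", "manager");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT owner, ROUND(SUM(bytes)/1024/1024) mb " +
                     "FROM dba_segments GROUP BY owner ORDER BY 2 DESC");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                long mb = rs.getLong("mb");
                // Schemas near or over 2000 MB won't fit in a single 2 GB dump file,
                // so they are candidates for table-level or multi-file exports.
                System.out.printf("%-30s %8d MB%s%n", rs.getString("owner"), mb,
                        mb > 2000 ? "  <-- split this one" : "");
            }
        }
    }
}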
