Which LKM and IKM are required?

I need to extract one Oracle table into a .txt file. Can anybody describe the steps involved in this process?
Which IKM and LKM do we need to use?

Hi Asif,
Steps to extract a table to a file:
1) Create the file datastore in the model.
2) Create an interface (int1) with the Oracle table to be extracted as the source and the file datastore as the target.
3) Do the mapping on the target columns.
You can use the IKM SQL to File Append for the interface; a hedged sketch of the kind of statement it issues follows below.
Thanks,
Madha.
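A minimal sketch of the kind of statement IKM SQL to File Append runs against the Oracle source before appending rows to the target file (table and column names here are illustrative assumptions, not from the thread):

    -- Issued on the source data server; the KM then writes the
    -- result set to the target file through the ODI file driver.
    select  EMP_ID    EMP_ID,
            EMP_NAME  EMP_NAME
    from    SCOTT.EMPLOYEES
    where   (1=1)

Because the staging area can sit on the Oracle source for this IKM, no LKM is strictly needed in the simple case.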

Similar Messages

  • Which LKM and IKM to use for Fast data loading b/w MSSQL 2005 and Oracle 11

    Hi,
    Can anybody help us decide which LKMs and IKMs are best for data loading between MSSQL and Oracle?
    The staging area is Oracle. We have to load around 400 million rows from MSSQL to Oracle 11g.
    Best regards,
    Muhammad

    Thanks Ayush,
    You are right, and it dumped the file very quickly; but it is giving an error on the sqlldr call through Jython. I have raised an SR with Oracle to look into it further.
    Thanks again, and have a very nice time.
    Regards,
    Muhammad

  • Inconsistencies in LKM and IKM?

    Hello,
    I am trying to export the CITY table from the demo HSQL_SRC to a file.
    I have created the file datastores, physical and logical schemas, and a model.
    On load, the LKM prefixes all filenames with the directory I have provided, /temp.
    The IKM seems to ignore this prefix.
    Some logs:
    Create work table:
        create table /temp/C$_mfileds2ds
        (
             CITY_ID     STRING,
             CITY        STRING
        )
    which I think is correct.
    Export:
        insert into /temp/C$_mfileds2ds
        (
             CITY_ID,
             CITY
        )
        values
        (
             :CITY_ID,
             :CITY
        )
    This is what I see.
    Integration:
        select distinct
             'city_id'   CITY_ID,
             'city'      CITY
        from "C$_mfileds2ds"
        where (1=1)
    Should this not read: from "/temp/C$_mfileds2ds"?
        insert into mfileds2ds
        (
             city_id,
             city
        )
        values
        (
             :city_id,
             :city
        )
    Same here: prefix with /temp, I think. But this is still only a warning, so no problem there; it is when inserting that the thing crashes.
    Why do I have an automatic /temp prefix during loading and not during integration?
    Kind regards,
    Frans.
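    Frans's own suggestion above looks like the consistent form. A hedged sketch, reusing the names from his log, of what the integration step would need to issue for the work table to be found:

        -- Same SELECT as in the log, but with the work schema (directory)
        -- prefix applied, matching what the LKM used when creating the C$_ table:
        select distinct
             'city_id'   CITY_ID,
             'city'      CITY
        from "/temp/C$_mfileds2ds"
        where (1=1)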

    Thanks for your suggestion.
    I have applied the change but the problem is still there.
    So it seems to be located in the integration module.
        create table /temp/C$_mfileds2ds
        (
             CITY_ID     STRING,
             CITY        STRING,
             POPULATION  NUMERIC,
             REGION_ID   NUMERIC
        )
        insert into mfileds2ds
        (
             city_id,
             city,
             population,
             region_id
        )
        values
        (
             :CITY_ID,
             :CITY,
             :POPULATION,
             :REGION_ID
        )
    I see the file, and the C$_ file is dropped, but I still get the stack trace:
    -22 : S0002 : java.sql.SQLException: Table not found: C$_mfileds2ds in statement [select     DISTINCT       CITY_ID    CITY_ID,      CITY    CITY,      POPULATION    POPULATION,      REGION_ID    REGION_ID  from     "C$_mfileds2ds"   where      (1=1)]
    java.sql.SQLException: Table not found: C$_mfileds2ds in statement [select     DISTINCT       CITY_ID    CITY_ID,      CITY    CITY,      POPULATION    POPULATION,      REGION_ID    REGION_ID  from     "C$_mfileds2ds"   where      (1=1)]
         at org.hsqldb.jdbc.jdbcUtil.throwError(jdbcUtil.java:62)
         at org.hsqldb.jdbc.jdbcPreparedStatement.<init>(jdbcPreparedStatement.java:1804)
         at org.hsqldb.jdbc.jdbcConnection.prepareStatement(jdbcConnection.java:547)
         at com.sunopsis.sql.SnpsQuery.a(SnpsQuery.java)
         at com.sunopsis.sql.SnpsQuery.a(SnpsQuery.java)
         at com.sunopsis.sql.SnpsQuery.updateExecStatement(SnpsQuery.java)
         at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:595)
    kind regards,
    Frans.

  • iTunes Automatically Change EQ Setting Based On Which AirPort Express Is Streaming?

    I have two Airport Expresses running at my house. Both are connected to different types of speakers. Neither speaker set has its own equalizer, and both require a different EQ setting to sound just right. Does anyone know of a way to have iTunes automatically change its EQ setting based on which Airport Express it is streaming to?

    I'm in the same boat. I have a 6-zone amplifier running with 6 AirPort Expresses to different rooms inside and outside the house. Each has its own acoustic characteristics. I'd really love to be able to set each AirPort to equalize based on the SPL frequency sweep I did for each room.
    Setting the EQ in iTunes won't do it, as that is global; plus I have many more sources for audio other than iTunes that use the AEs directly.

  • Do I go with the MacBook Pro or the MacBook Air?

    Do I go with the MacBook Pro or the MacBook Air? I'm going into 8th grade and we're required to have a laptop. I also edit videos for my YouTube channel. Which 13-inch model should I get?

    MacBook Air: thin, light, fast SSD storage. Cons: not upgradable, expensive.
    MacBook Pro: cheaper, upgradable, better graphics card, more storage and RAM for the buck. Cons: not as light.
    You'll get more for the money with the MacBook Pro; the Air is better as a second computer.

  • I am trying to connect to Tyne internet and the computer requires my user and password, which I do not remember

    I am trying to connect to Tyne internet and the computer requires my user and password, which I do not remember.

    I'm guessing that you are referring to Tyne Internet in the UK? Are you forgetting your Tyne username and passcode, or the one for the computer? To recover the Tyne credentials you will need to contact them. If it is your Mac, then from what little I understand about Mac computers, do you have it in your Keychain?

  • Recently purchased an iTunes card and it requires an email, and I entered my old one which I have no access to. What do I do?

    I recently purchased an iTunes gift card and it requires an email, and I accidentally entered the wrong one, which I have no access to. What should I do?

    If you log into your account in iTunes you may be able to change your email address. If that won't work, try setting your old email address to forward to your new one.

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with a large amount of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables are inserted into every few seconds, with hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update', and how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what the role of 'Journalizing CDC' is.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column you could always use that as a filter when scanning for changes, rather than CDC. Log-based CDC (Asynchronous in ODI: LogMiner/Streams or GoldenGate, for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table.
    You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield on the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM setup (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and in the right order, to apply to your target tables; without that you might process the update before the insert, for example, and be out of sync.
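    A minimal sketch of the join described above, with illustrative table and column names (the real JKM-generated view also carries subscriber and window bookkeeping columns, omitted here):

        -- J$ holds the journalized PK values plus change flags; the JV$ view
        -- joins back to the source table to recover the non-PK columns.
        create view JV$CUSTOMER as
        select j.JRN_FLAG,
               j.JRN_DATE,
               s.CUSTOMER_ID,
               s.NAME,
               s.CITY
        from   J$CUSTOMER j
        join   CUSTOMER   s
        on     s.CUSTOMER_ID = j.CUSTOMER_ID;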
    So JKMs provide a mechanism for 'changed data only' to be presented to ODI. CDC is also useful if you want to handle deletes in your source table (otherwise you don't capture the delete with a normal LKM/IKM setup).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target; a sketch follows below.
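    A hedged sketch of the MINUS variant (names are illustrative; the real IKM also manages an update-indicator flag and flow control):

        -- Only rows that differ from what is already in the target
        -- survive the MINUS and land in the I$ table.
        insert into I$_CUSTOMER (CUSTOMER_ID, NAME, CITY)
        select CUSTOMER_ID, NAME, CITY from C$_CUSTOMER
        minus
        select CUSTOMER_ID, NAME, CITY from TARGET_CUSTOMER;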
    user604062 wrote:
         I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how can I use it in my scenario?
    Hopefully I have explained it above. It's the type of thing you really need to play around with, and thoroughly review the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
         In general, what are the relations between 'Journalizing' and 'IKM'?
    A JKM simply presents (only) changed data to ODI. It removes the need for you to decide 'how' to get the updates, and removes the need for costly scans on the source table (full source-to-target table comparisons, scanning for updates based on a last-update date, etc.).
         Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table MINUS source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
         I want to understand what is the role of 'Journalizing CDC'?
    It is the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (Triggers (Synchronous), Streams/LogMiner (Asynchronous), GoldenGate, etc.).
         Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
         Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
         Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the journalized source, and the IKM step changes with CDC as you can use 'Synchronise Journal Deletes').
         Does 'Journalizing' need to have a PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
         What should I do if I can't put a PK (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated, how can you identify which row in the target table to update?
         Thanks in advance, Yael
    A lot to take in. As I advised, I would recommend you get a little test area set up, and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • Not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the Physical tab in ODI 12c

    I am not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the Physical tab in ODI 12c,
    but I am able to see other IKMs. Please help me: how can I see them?

    Nope, it has not been altered.
    COMPONENT NAME: LKM Oracle to Oracle (datapump)
    COMPONENT VERSION: 11.1.2.3
    AUTHOR: Oracle
    COMPATIBILITY: ODI 11.1.2 and above
    Description:
    - Loading Knowledge Module
    - Loads data from an Oracle Server to an Oracle Server using external tables in the datapump format.
    - This module is recommended when developing interfaces between two Oracle servers when DBLINK is not an option.
    - An External table definition is created on the source and target servers.
    - When using this module on a journalized source table, the Journaling table is first updated to flag the records consumed and then cleaned from these records at the end of the interface.
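    A minimal sketch of the style of external table such a KM creates to unload data on the source side (the directory object, file name and table names are illustrative assumptions, not taken from the KM):

        -- Unload the source rows into a datapump-format file that the
        -- target server can then read back as an external table.
        create table C$_EMP_EXT
        organization external (
            type oracle_datapump
            default directory DAT_DIR
            location ('c$_emp.dmp')
        )
        as select EMP_ID, EMP_NAME from EMP;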

  • Attempted to install AirPort Utility 6.0 on my new Mac mini. The install failed and now the previous version 5.5.3 won't open. I downloaded and tried to install 6.0 without Software Update, which failed, and my old 5.5.3 AirPort Utility still won't open. Help.

    Now the previous version won't open. I downloaded and tried to install 6.0 without Software Update, which failed, and my old 5.5.3 AirPort Utility still won't open.
    iTunes no longer recognizes my airport express.
    Error Message:
    None of the selected updates could be installed.
    An unexpected error occurred.
    I can't delete AirPort Utility and reinstall from scratch as it is; “AirPort Utility” can’t be modified or deleted because it’s required by Mac OS X.
    Any suggestions?

    I reinstalled AirPort Utility 5.5.3, which worked; the app still won't launch, but iTunes now recognizes my AirPort Express. I just hope I don't want to change its settings.

  • Which LKM is to be used for data from Oracle to file

    Hi All
    I am new to ODI. Please help me in understanding this tool.
    I want to know which LKM is to be used for moving data from Oracle to a comma-separated file, etc.

    Hi John,
    Thanks a ton... I was able to move ahead. I executed it thinking all the params were correct,
    but it did not go through and gave me this error:
    936 : 42000 : java.sql.SQLException: ORA-00936: missing expression
    java.sql.SQLException: ORA-00936: missing expression
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
         at oracle.jdbc.driver.T4CPreparedStatement.execute_for_describe(T4CPreparedStatement.java:503)
         at oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:965)
         at oracle.jdbc.driver.T4CPreparedStatement.execute_maybe_describe(T4CPreparedStatement.java:535)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1051)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3026)
         at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
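    The thread does not show the offending statement, but a common way (an assumption here, not confirmed above) to hit ORA-00936 in this execCollOrders step is a target column left with an empty mapping expression, which leaves a dangling comma in the generated select list:

        -- Illustrative only: the empty expression before C2_EMP_NAME
        -- makes Oracle raise ORA-00936 (missing expression).
        select
             EMP_ID     C1_EMP_ID,
                        C2_EMP_NAME,
             SALARY     C3_SALARY
        from SCOTT.EMP
        where (1=1)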

  • Best back up software and hardware requirement for the same

    Good morning to all,
    I would like to know about backup solutions. We are a small organization of around 60 employees; we currently have Windows Server 2008 R2, use Symantec Backup Exec, and back up to external USB drives. Management is not happy with Symantec.
    We plan to upgrade our servers, move to VMs, and implement network storage.
    So we want a backup solution which can back up VMs, shared folders and other required data.
    What are the best practices and solutions for backups?

    Nopes :)
    It depends on you. I've been using Symantec for many years now ;)
    Arnav Sharma | http://arnavsharma.net/

  • CONCURRENT MANAGER SETUP AND CONFIGURATION REQUIREMENTS IN AN 11I RAC ENVIR

    Product: AOL
    Date written: 2004-05-13
    PURPOSE
    This document describes the setup required for a RAC-PCP configuration.
    PCP is implemented to distribute CM workload and to provide failover.
    Explanation
    Failure scenarios can be divided into the following three cases.
    1. The database instance that supports the CP, Applications, and Middle-Tier
    processes such as Forms, or iAS can fail.
    2. The Database node server that supports the CP, Applications, and Middle-
    Tier processes such as Forms, or iAS can fail.
    3. The Applications/Middle-Tier server that supports the CP (and Applications)
    base can fail.
    The section below describes the CM/AP configuration and the relationship
    between the CM and GSM (Global Service Management).
    The concurrent processing tier can reside on either the Applications, Middle-
    Tier, or Database Tier nodes. In a single tier configuration, non PCP
    environment, a node failure will impact Concurrent Processing operations due to
    any of these failure conditions. In a multi-node configuration the impact of
    any of these types of failures will be dependent upon what type of failure is
    experienced, and how concurrent processing is distributed among the nodes in
    the configuration. Parallel Concurrent Processing provides seamless failover
    for a Concurrent Processing environment in the event that any of these types of
    failures takes place.
    In an Applications environment where the database tier utilizes Listener (
    server) load balancing, and in a non-load balanced environment,
    there are changes that must be made to the default configuration generated by
    Autoconfig so that CP initialization, processing, and PCP functionality are
    initiated properly on their respective/assigned nodes. These changes are
    described in the next section - Concurrent Manager Setup and Configuration
    Requirements in an 11i RAC Environment.
    The current Concurrent Processing architecture with Global Service Management
    consists of the following processes and communication model, where each process
    is responsible for performing a specific set of routines and communicating with
    parent and dependent processes.
    The following describes the roles of the ICM, FNDSM, IM, and Standard Manager
    in a PCP environment.
    Internal Concurrent Manager (FNDLIBR process) - Communicates with the Service
    Manager.
    The Internal Concurrent Manager (ICM) starts, sets the number of active
    processes, monitors, and terminates all other concurrent processes through
    requests made to the Service Manager, including restarting any failed processes.
    The ICM also starts and stops, and restarts the Service Manager for each node.
    The ICM will perform process migration during an instance or node failure.
    The ICM will be
    active on a single node. This is also true in a PCP environment, where the ICM
    will be active on at least one node at all times.
    Service Manager (FNDSM process) - Communicates with the Internal Concurrent
    Manager, Concurrent Manager, and non-Manager Service processes.
    The Service Manager (SM) spawns, and terminates manager and service processes (
    these could be Forms, or Apache Listeners, Metrics or Reports Server, and any
    other process controlled through Generic Service Management). When the ICM
    terminates the SM that
    resides on the same node with the ICM will also terminate. The SM is 'chained'
    to the ICM. The SM will only reinitialize after termination when there is a
    function it needs to perform (start, or stop a process), so there may be
    periods of time when the SM is not active, and this would be normal. All
    processes initialized by the SM
    inherit the same environment as the SM. The SM environment is set by the
    APPSORA.env file and the gsmstart.sh script. The TWO_TASK used by the SM to connect
    to a RAC instance must match the instance_name from GV$INSTANCE. The apps_<sid>
    listener must be active on each CP node to support the SM connection to the
    local instance. There
    should be a Service Manager active on each node where a Concurrent or non-
    Manager service process will reside.
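    A quick way to check the instance names that each node's TWO_TASK alias must match (a standard dictionary query offered here as a hedged convenience, not quoted from the note):

        -- Run as any user with access to the GV$ views; TWO_TASK on each
        -- CP node should resolve to the local INSTANCE_NAME listed here.
        select inst_id, instance_name, host_name
        from   gv$instance;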
    Internal Monitor (FNDIMON process) - Communicates with the Internal Concurrent
    Manager.
    The Internal Monitor (IM) monitors the Internal Concurrent Manager, and
    restarts any failed ICM on the local node. During a node failure in a PCP
    environment the IM will restart the ICM on a surviving node (multiple ICM's may
    be started on multiple nodes, but only the first ICM started will eventually
    remain active, all others will gracefully terminate). There should be an
    Internal Monitor defined on each node
    where the ICM may migrate.
    Standard Manager (FNDLIBR process) - Communicates with the Service Manager and
    any client application process.
    The Standard Manager is a worker process, that initiates, and executes client
    requests on behalf of Applications batch, and OLTP clients.
    Transaction Manager - Communicates with the Service Manager, and any user
    process initiated on behalf of a Forms, or Standard Manager request. See Note:
    240818.1 regarding Transaction Manager communication and setup requirements for
    RAC.
    Concurrent Manager Setup and Configuration Requirements in an 11i RAC
    Environment
    The basic setup steps for using PCP are described below.
    In order to set up Parallel Concurrent Processing using AutoConfig with GSM,
    follow the instructions in the 11.5.8 Oracle Applications System Administrator's
    Guide under Implementing Parallel Concurrent Processing, using the following steps:
    1. Applications 11.5.8 and higher is configured to use GSM. Verify the
    configuration on each node (see WebIV Note:165041.1).
    2. On each cluster node edit the Applications Context file (<SID>.xml), that
    resides in APPL_TOP/admin, to set the variable <APPLDCP oa_var="s_appldcp">
    ON </APPLDCP>. It is normally set to OFF. This change should be performed
    using the Context Editor.
    3. Prior to regenerating the configuration, copy the existing tnsnames.ora,
    listener.ora and sqlnet.ora files, where they exist, under the 8.0.6 and iAS
    ORACLE_HOME locations on the each node to preserve the files (i.e./<some_
    directory>/<SID>ora/$ORACLE_HOME/network/admin/<SID>/tnsnames.ora). If any of
    the Applications startup scripts that reside in COMMON_TOP/admin/scripts/<SID>
    have been modified also copy these to preserve the files.
    4. Regenerate the configuration by running adautocfg.sh on each cluster node as
    outlined in Note:165195.1.
    5. After regenerating the configuration merge any changes back into the
    tnsnames.ora, listener.ora and sqlnet.ora files in the network directories,
    and the startup scripts in the COMMON_TOP/admin/scripts/<SID> directory.
    Each node's tnsnames.ora file must contain the aliases that exist on all
    other nodes in the cluster. When merging tnsnames.ora files ensure that each
    node contains all other nodes' tnsnames.ora entries. This includes tns
    entries for any Applications tier nodes where a concurrent request could be
    initiated, or request output to be viewed.
    6. In the tnsnames.ora file of each Concurrent Processing node ensure that
    there is an alias that matches the instance name from GV$INSTANCE of each
    Oracle instance on each RAC node in the cluster. This is required in order
    for the SM to establish connectivity to the local node during startup. The
    entry for the local node will be the entry that is used for the TWO_TASK in
    APPSORA.env (also in the APPS<SID>_<HOSTNAME>.env file referenced in the
    Applications Listener [APPS_<SID>] listener.ora file entry "envs='MYAPPSORA=<
    some directory>/APPS<SID>_<HOSTNAME>.env)
    on each node in the cluster (this is modified in step 12).
    7. Verify that the FNDSM_<SID> entry has been added to the listener.ora file
    under the 8.0.6 ORACLE_HOME/network/admin/<SID> directory. See WebiV Note:
    165041.1 for instructions regarding configuring this entry. NOTE: With the
    implementation of GSM the 8.0.6 Applications, and 9.2.0 Database listeners
    must be active on all PCP nodes in the cluster during normal operations.
    8. AutoConfig will update the database profiles and reset them for the node
    from which it was last run. If necessary reset the database profiles back to
    their original settings.
    9. Ensure that the Applications Listener is active on each node in the cluster
    where Concurrent, or Service processes will execute. On each node start the
    database and Forms Server processes as required by the configuration that
    has been implemented.
    10. Navigate to Install > Nodes and ensure that each node is registered. Use
    the node name as it appears when executing 'uname -n' from the Unix prompt on
    the server. GSM will add the appropriate services for each node at startup.
    11. Navigate to Concurrent > Manager > Define, and set up the primary and
    secondary node names for all the concurrent managers according to the
    desired configuration for each node workload. The Internal Concurrent
    Manager should be defined on the primary PCP node only. When defining the
    Internal Monitor for the secondary (target) node(s), make the primary node (
    local node) assignment, and assign a secondary node designation to the
    Internal Monitor, also assign a standard work shift with one process.
    12. Prior to starting the Manager processes it is necessary to edit the
    APPSORA.env file on each node in order to specify a TWO_TASK entry that contains
    the INSTANCE_NAME parameter for the local nodes Oracle instance, in order
    to bind each Manager to the local instance. This should be done regardless
    of whether Listener load balancing is configured, as it will ensure the
    configuration conforms to the required standards of having the TWO_TASK set
    to the instance name of each node as specified in GV$INSTANCE. Start the
    Concurrent Processes on their primary node(s). This is the environment
    that the Service Manager passes on to each process that it initializes on
    behalf of the Internal Concurrent Manager. Also make the same update to
    the file referenced by the Applications Listener APPS_<SID> in the
    listener.ora entry "envs='MYAPPSORA= <some directory>/APPS<SID>_<HOSTNAME>.
    env" on each node.
    13. Navigate to Concurrent > Manager > Administer and verify that the Service
    Manager and Internal Monitor are activated on the secondary node, and any
    other additional nodes in the cluster. The Internal Monitor should not be
    active on the primary cluster node.
    14. Stop and restart the Concurrent Manager processes on their primary node(s),
    and verify that the managers are starting on their appropriate nodes. On
    the target (secondary) node in addition to any defined managers you will
    see an FNDSM process (the Service Manager), along with the FNDIMON process (
    Internal Monitor).
    Reference Documents
    Note 241370.1

    What is your database version? OS?
    We are using the VCP suite for planning purposes. We are using a VCP environment (12.1.3) in a decentralized structure connecting to 3 different source environments (consisting of 11i and R12). As per the Oracle note {RAC Configuration Setup For Running MRP Planning, APS Planning, and Data Collection Processes [ID 279156]} we have implemented RAC in our test environment to get better performance.
    But after doing all the setups and assigning the concurrent programs to different nodes, we are seeing a huge performance issue. The Complete Collection, which generally takes 180 mins on average in Production, is taking more than 6 hours to complete in RAC.
    So I would like to get a suggestion from this forum: has anyone implemented RAC in a pure VCP (decentralized) environment? Will there be any improvement if we make our VCP instance RAC?
    Do you have PCP enabled? Can you reproduce the issue when you stop the CM?
    Have you reviewed these docs?
    Value Chain Planning - VCP - Implementation Notes & White Papers [ID 280052.1]
    Concurrent Processing - How To Ensure Load Balancing Of Concurrent Manager Processes In PCP-RAC Configuration [ID 762024.1]
    How to Setup and Run Data Collections [ID 145419.1]
    12.x - Latest Patches and Installation Requirements for Value Chain Planning (aka APS Advanced Planning & Scheduling) [ID 746824.1]
    APSCHECK.sql Provides Information Needed for Diagnosing VCP and GOP Applications Issues [ID 246150.1]
    Thanks,
    Hussein

  • Oracle 11g Certification and Workshop Requirement

    Hello,
    I have recently cleared below Oracle Exams.
    1Z0-051 (SQL Fundamentals)
    1Z0-052 (Oracle 11g Admin I)
    1Z0-053 (Oracle 11g Admin II) - Cleared in May 2013
    I haven't completed the Workshop requirement, which means I am not an OCP as of now.
    The question is: is there any specific time limit within which I have to complete the Workshop? Specifically, can I wait another 6 months before completing it?
    If I can wait 6 months, is there any chance OU will stop offering these workshops in the next 6 months (by Dec 2013)?
    Thanks

    There is no time limit. But more importantly, you do not need to attend either of the DBA workshop courses; there are many courses which will meet the requirement. As you have the knowledge to have passed the exams, you should attend a more advanced course: perhaps the Database Performance Tuning course, or (if you want a shorter and cheaper one) SQL Statement Tuning.

  • Who created which program and when?

    Hi all,
    Who created which program and when?
    How can I get this info? Thanks.
    Deniz.

    Hi Thomas,
    Yep, true... very true... Thanks for sharing your knowledge.
    But what if the user wants something else on the selection screen than what is available in the standard SE90 or SE84, or any other transaction or program?
    Again, there can be many solutions to one problem, and I provided one of them.
    For example, if the user wants the program type on the selection screen, or fields like:
    Created date
    Created by
    Modified date
    Modified by
    If the user wants these, and maybe some other details, on the selection screen, then what?
    An alternative is to use SE16 directly as well (a hedged example follows below).
    So there are many solutions to one problem; it all depends on the user/customer/client's requirement.
    Correct me if I am wrong.
    Thanks & Regards,
    ilesh 24x7
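    As a hedged illustration of the SE16 route mentioned above: program header data lives in TRDIR, whose CNAM/CDAT and UNAM/UDAT fields carry created-by/created-on and changed-by/changed-on (field names per the standard ABAP dictionary; the selection itself is illustrative):

        -- Browse via SE16 on TRDIR, or conceptually:
        select NAME, CNAM, CDAT, UNAM, UDAT
        from   TRDIR
        where  CNAM = 'DENIZ';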
