Job Management setup using SOLMAN_SETUP

I want to implement Job Management via Solution Manager, but when I go to SPRO or to Job Management via SOLMAN_SETUP, no help or guided procedure is displayed. Both my Solution Manager systems are patched to 7.1 stack 10 with the latest notes applied.
When the action to check prerequisites is performed, it either does nothing or aborts after about 10 seconds.
The same happens when trying to configure in SPRO: when selecting "activate services", no list is provided.
SOLMAN_SETUP -> Job Management:
Any suggestions welcome!

Hi Jan,
I checked in my system and see the same: the initial step 1 does not give any output log; it seems to be a bug. I am not sure what exactly this step does.
But after executing the step, the job *JSM_PRE_CHECK* completed successfully. So check whether the job completed; if it did, proceed to the next step.
Moreover, these are the prerequisites for Job Scheduling Management, so you can refer to the documentation below; they need to be finished for JSM.
Also refer here: Job Scheduling Management - SAP Solution Manager - SAP Library
Thanks
Jansi

Similar Messages

  • Error on Loading Credit Management Setups using iSetup in target instance

    Hi All,
    I am trying to migrate Oracle Credit Management functional setups (scoring model, checklist, automation rules) from a source instance to a target instance. I am trying to achieve this using iSetup. Once the load is done, I get the following error and the setups are not migrated. The error comes up only for Credit Management setups, not for other functionalities. Please help me out with this.
    +---------------------------------------------------------------------------+
    Application Implementation: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    AZR12LOADER module: iSetup R12 Loader
    +---------------------------------------------------------------------------+
    Current system time is 09-OCT-2013 03:15:18
    +---------------------------------------------------------------------------+
    Concurrent Request Parameters
    RELOAD_REQUEST_ID=
    DBC_FILE_NAME=DEVF.dbc
    SNAPSHOT_REQUEST_TYPE=E
    SNAPSHOT_NAME=RAC Credit Management Test
    RELOAD_METHOD=
    REQUEST_TYPE=L
    IS_REMOTE=Y
    JOB_NAME=RAC Credit Management Test1
    USER_NAME=DEPPS
    Downloading the extract from central instance
    Successfully copied the Extract
    Time taken to download Extract and write as zip file = 0 seconds
    Validating Primary Extract...
    Parsing driver.xml
    Time taken to parse the Driver file and construct setup objects:6 milliseconds
    Sorting Apis based on their dependency...
    Time taken to sort the Apis=12 milliseconds
    Printing sorted Apis
      Load Sequence 1
      1)AR_CMGT_Scores
    Oct 9, 2013 3:15:48 AM oracle.adf.share.config.ADFConfigFactory findOrCreateADFConfig
    INFO: oracle.adf.share.config.ADFConfigFactory No META-INF/adf-config.xml found
    Name: AR_CMGT_Scores
    Type: BC4J
    Path: oracle.apps.ar.ispeed.creditmgt.scores.server.ScoresAM
    Time Taken(seconds): 2.0
    Importing rows from xml file, and validating rows ......
    Message not found. Application: AZ, Message Name: AZW_FWK_USER_ROW_EXCEPTION. Tokens: VONAME = Scores; KEY = Score Model Id = 'null'
    ; EXCEPTION = oracle.jbo.RowCreateException: JBO-25017: Error while creating a new entity row for ScoresEO.
    java.lang.Integer incompatible with java.lang.Long
    [the three lines above are repeated seven times in the log]
    Message not found. Application: AZ, Message Name: AZW_FWK_USER_ROW_EXCEPTION. Tokens: VONAME = Scores; KEY = Score Model Id = 'null'
    ; EXCEPTION = Updates are not supported by this API.
    74 rows processed, now posting changes to the database ......
    Transaction committed.
    Processed API:AR_CMGT_Scores
    Status: WARNING
    Concurrent program completed.
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 09-OCT-2013 03:15:50
    +---------------------------------------------------------------------------+
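    For reference, the repeated failure above is a plain Java type mismatch: the loader builds each ScoresEO row with an Integer key where the entity attribute is typed as Long ("java.lang.Integer incompatible with java.lang.Long" matches the wording of a ClassCastException). A minimal Java sketch of the same error (the id value is hypothetical):

    public class TypeMismatchDemo {
        public static void main(String[] args) {
            // The extract supplies the key as an Integer...
            Object scoreModelId = Integer.valueOf(1001); // hypothetical Score Model Id
            // ...but the entity attribute expects a Long, so the cast fails at runtime
            Long id = (Long) scoreModelId; // throws java.lang.ClassCastException
            System.out.println(id);
        }
    }

    So the problem lies in how the extracted Score Model Id is mapped to the ScoresEO attribute type, not in the data itself.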

    Vivek,
    Thanks for the update.
    Regards,
    Hussein

  • ASA5512-X Setup using Management Interface

    I have a brand new ASA5512-X running 8.6.1, and am trying to do an initial setup using the Quick Start Guide that came with it.  However, the Management Interface is not working.  I have a PC connected and set to use DHCP, but the port is not active. 
    I connected a console cable and can see in the config that the interface is shutdown.  So I set it to active, and the port is now active, but is not giving out a DHCP address as the guide says it should.
    I would like to use the ASDM Startup Wizard to configure this device, so how do I get it to work the way the instructions say it should?
    Thanks!

    Hello,
    Try using the following command via the console cable:
    config factory-default
    and, in case you don't have this already configured:
    ssl encryption des-sha1
    That should get your ASDM working.
    Let me know how this works for you.
    Regards
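    For reference, a minimal console sketch of the settings that normally make ASDM reachable on the management port - the interface name, addresses, and ranges below are assumptions for illustration, not values from this thread:

    interface Management0/0
     nameif management
     security-level 100
     ip address 192.168.1.1 255.255.255.0
     no shutdown
    !
    http server enable
    http 192.168.1.0 255.255.255.0 management
    dhcpd address 192.168.1.2-192.168.1.254 management
    dhcpd enable management

    config factory-default should generate an equivalent configuration; the sketch is mainly useful for spotting what is missing when the wizard's defaults did not take.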

  • Cannot open Catalog Manager and Job Manager on Windows 7 (11.1.1.6.2 BP1)

    Hi,
    We recently upgraded to OBIEE version 11.1.1.6.2 BP1 and installed the client tools of the same version (downloaded from Oracle Support). Once installed, ODBC connections were created to connect to the BI Server, and we are able to connect to the BI Server using the Administration Tool. But Catalog Manager and Job Manager do not open.
    Is there any setup missing to make Catalog Manager and Job Manager work?
    Thanks in advance.

    Hi,
    It is a known bug. Please find the solution here.
    http://123obi.com/2013/02/unable-to-open-catalog-manager-obiee-11g/
    Cheers,
    Kalyan Chukkapalli
    http://123obi.com

  • Send Email Report with Publisher? and  Job Manager...

    Hi, I have a big doubt. I am trying to send a report by email. First I tried sending it with Publisher; here are all the configurations.
    Delivery Configuration
    !http://lh3.ggpht.com/_V2lpPpulbm0/Slwm-4TLiJI/AAAAAAAACag/xZ5w57zkRnM/s800/BI%20Publisher.PNG!
    Email Configuration
    !http://lh5.ggpht.com/_V2lpPpulbm0/Slwm_PKsjVI/AAAAAAAACak/baCeaA9kV_s/s800/BI%20Publisher2.PNG!
    Configuration Send Parameters
    !http://lh5.ggpht.com/_V2lpPpulbm0/SlwroVMXh4I/AAAAAAAACao/0-UaglX4N5Y/s512/configuracionparametros.PNG!
    But when I submit, I get an error: "Must issue a STARTTLS command first". Does this error mean that the Gmail server is not supported?
    Do you know of a mail provider I could use to run a test?
    !http://lh3.ggpht.com/_V2lpPpulbm0/Slwm-8g_LJI/AAAAAAAACac/1CeWNWZLL18/s640/errorpublisher.PNG!
    Also, is it necessary to configure the Job Manager? Why is it necessary to configure both Publisher and Job Manager - are they the same, or is each used for a different thing? In any case, I am posting my Job Manager configuration too.
    !http://lh4.ggpht.com/_V2lpPpulbm0/Slwm-X7idrI/AAAAAAAACaU/eq_N_Rloqjs/s576/SchedulerConfiguration.PNG!
    Thanks all, and sorry for spamming images (I think they will help you understand me better).

    "Must issue a STARTTLS command first" - does this error mean the Gmail server is not supported? Yes, Gmail will not let you send emails from this setup, because our accounts are free. Try with your corporate mail IDs; it should work.
    Is it necessary to configure the Job Manager? Why configure both Publisher and Job Manager - are they the same, or is each used for a different thing? BI Publisher does not use the Job Manager mail settings; if it did, why would we need to configure the mail settings in BI Publisher itself?
    The two things are different.
    thanks,
    Saichand.V
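    For reference, "Must issue a STARTTLS command first" is the SMTP server telling the client to switch to TLS before authenticating; Gmail's relay (smtp.gmail.com:587) requires it. In a JavaMail-based mailer this is controlled by a session property, as in the sketch below (requires the JavaMail API on the classpath); whether BI Publisher's delivery manager exposes this property depends on the version, so treat that part as an assumption:

    import java.util.Properties;
    import javax.mail.Session;

    public class SmtpTlsProps {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.gmail.com"); // assumed relay host
            props.put("mail.smtp.port", "587");            // submission port
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true"); // the setting the error is asking for
            Session session = Session.getInstance(props);
            System.out.println("STARTTLS enabled: " + session.getProperty("mail.smtp.starttls.enable"));
        }
    }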

  • Solution Manager Setup & Configuration

    Hi There
    This is a question on Solution Manager setup.
    We are in the process of configuring Solution Manager. We have installed two Solution Manager 7.0 (EhP1) systems: development (SOD) and production (SOP).
    We are interested in configuring EWA and also want CCMS monitoring (diagnostics) for our R/3, SCM, and BI systems; we are not implementing ChaRM, Service Desk, or any other functionality. I would like to know how to proceed with utilising the dev and prod Solution Manager systems to achieve the above, and to get some suggestions on which system should be used to configure EWA and monitoring. These are the two scenarios I am considering:
    Scenario 1:
    Create a Dev Landscape & Solution for the dev systems in SOD and a Prod Landscape & Solution for the production systems in SOP.
    Turn on EWA and diagnostics for the prod systems in SOP.
    Scenario 2:
    Create a Dev Landscape & Solution for the dev systems in SOD, and a master landscape and solutions for all systems (dev & prod) in SOP. Turn on EWA and diagnostics for the prod systems in SOP.
    What are the pros and cons of the above options? Are there any better options?
    NOTE: We do not have PI (or XI) and have no other landscape configuration except for the BI systems, which have been installed with a local SLD. Our R/3 4.7 and SCM systems are on SAP BASIS 620. The idea is to turn on EWA and monitor only the production systems. We are not implementing any business solutions in the Solution Manager system.
    Thanks
    sapkid

    Hi,
    >
    sapkid wrote:
    > Scenario 1:
    > Create a Dev Landscape & Solution for the dev systems in SOD and a Prod Landscape & Solution for the production systems in SOP.
    > Turn on EWA and diagnostics for the prod systems in SOP.
    > sapkid
    This is the SAP-recommended way to create your solutions: all dev systems in one solution and all PRD systems in a separate solution.
    Check note 1257308 for EWA FAQs.
    Check these best practices for your BI monitoring:
    https://websmp209.sap-ag.de/~sapidb/011000358700001121992008E
    http://service.sap.com/~sapidb/011000358700001123042008E
    Hope this helps.
    Feel free to revert back.
    -=-Ragu

  • Project management system - who uses what?

    My dilemma:
    We are a small Design Marketing firm that uses a hodge-podge of different software systems to manage the various aspects of a project lifecycle.
    My wish:
    To find a single system that will:
    1. Generate work estimates
    2. Convert estimates to project tickets
    3. Track all time and resources for a project
    4. Generate an invoice at any interval and/or at the end of the project
    Currently we use Word to generate an estimate, FileMaker Pro to set up an actual project number/ticket and track time, Basecamp (online) to manage files and correspondence with the client, and Word to generate an invoice. None of those communicate with each other, so there are huge inefficiencies. I've been in the business 20 years, and even with the ubiquity of computing, nobody out there has solved the "universal" project management tool.
    Has anyone at least found a system that minimizes the number of software systems needed?

    Print Management Software
    If you go to the InDesign Mac forum and search for "project management" or "job management", something should pop up. We've discussed this several times.

  • Not able to connect with managed server using ssl connection

    Hi Guys,
    My WebLogic server is running on Linux. I have set up an SSL connection using Demo Identity and Demo Trust. In the server logs I can see that the server is listening on the secure port.
    But when I try to connect to the managed server using a client, I get the error below:
    <May 27, 2013 2:55:00 PM IST> <Info> <Security> <BEA-090905> <Disabling CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true>
    <May 27, 2013 2:55:00 PM IST> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true>
    <May 27, 2013 2:55:00 PM IST> <Info> <Security> <BEA-090908> <Using default WebLogic SSL Hostname Verifier implementation.>
    javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3s://host:port: Destination unreachable; nested exception is:
         javax.net.ssl.SSLHandshakeException: General SSLEngine problem; No available router to destination]
         at weblogic.jndi.internal.ExceptionTranslator.toNamingException(ExceptionTranslator.java:40)
         at weblogic.jndi.WLInitialContextFactoryDelegate.toNamingException(WLInitialContextFactoryDelegate.java:767)
         at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:366)
         at weblogic.jndi.Environment.getContext(Environment.java:315)
         at weblogic.jndi.Environment.getContext(Environment.java:285)
         at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
         at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
         at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307)
         at javax.naming.InitialContext.init(InitialContext.java:242)
         at javax.naming.InitialContext.<init>(InitialContext.java:216)
         at com.akt.client.WLCLIENT.makeConnection(WLCLIENT.java:40)
         at com.akt.client.WLCLIENT.main(WLCLIENT.java:60)
    Caused by: java.net.ConnectException: t3s://host:port: Destination unreachable; nested exception is:
         javax.net.ssl.SSLHandshakeException: General SSLEngine problem; No available router to destination
         at weblogic.rjvm.RJVMFinder.findOrCreateInternal(RJVMFinder.java:216)
         at weblogic.rjvm.RJVMFinder.findOrCreate(RJVMFinder.java:170)
         at weblogic.rjvm.ServerURL.findOrCreateRJVM(ServerURL.java:165)
         at weblogic.jndi.WLInitialContextFactoryDelegate$1.run(WLInitialContextFactoryDelegate.java:345)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
         at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:340)
         ... 9 more
    Caused by: java.rmi.ConnectException: Destination unreachable; nested exception is:
         javax.net.ssl.SSLHandshakeException: General SSLEngine problem; No available router to destination
         at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:470)
         at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:321)
         at weblogic.rjvm.RJVMManager.findOrCreateRemoteInternal(RJVMManager.java:260)
         at weblogic.rjvm.RJVMManager.findOrCreate(RJVMManager.java:197)
         at weblogic.rjvm.RJVMFinder.findOrCreateRemoteServer(RJVMFinder.java:238)
         at weblogic.rjvm.RJVMFinder.findOrCreateInternal(RJVMFinder.java:200)
         ... 15 more
    But in the server logs I can see the messages below:
    opt/Oracle/Middleware/wlserver_12.1/server/lib/DemoIdentity.jks.>
    <May 27, 2013 2:47:06 PM IST> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /opt/Oracle/Middleware/wlserver_12.1/server/lib/DemoTrust.jks.>
    <May 27, 2013 2:47:06 PM IST> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /opt/jdk1.7.0_21/jre/lib/security/cacerts.>
    <May 27, 2013 2:47:06 PM IST> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on hostname:port for protocols iiops, t3s, ldaps, https.>
    <May 27, 2013 2:47:06 PM IST> <Notice> <WebLogicServer> <BEA-000332> <Started the WebLogic Server Managed Server "Server-Test" for domain "base_domain" running in development mode.>
    Please suggest
    Edited by: 1008140 on May 27, 2013 2:37 AM

    Welcome to OTN.
    This section is for database questions, not Fusion Middleware. Post your question in:
    Oracle Discussion Forums » Fusion Middleware
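    For what it's worth, a minimal JNDI client over t3s looks like the sketch below. With the demo certificates, "General SSLEngine problem" usually means the client JVM does not trust the WebLogic demo CA; pointing a test client at the demo trust store with -Dweblogic.security.TrustKeyStore=DemoTrust is a common workaround (test systems only). Host, port, and flag usage here are assumptions, and the WebLogic client jar must be on the classpath:

    // run with: java -Dweblogic.security.TrustKeyStore=DemoTrust WlsSslClient
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class WlsSslClient {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3s://managed-host:7002"); // placeholder host:port
            Context ctx = new InitialContext(env);
            System.out.println("Connected: " + ctx.getNameInNamespace());
            ctx.close();
        }
    }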

  • CONCURRENT MANAGER SETUP AND CONFIGURATION REQUIREMENTS IN AN 11I RAC ENVIR

    Product: AOL
    Date written: 2004-05-13
    PURPOSE
    This document describes the setup required for a RAC-PCP configuration.
    PCP is implemented for CM workload distribution, failover, and similar goals.
    Explanation
    Failure scenarios can be divided into the following three cases:
    1. The database instance that supports the CP, Applications, and Middle-Tier
    processes such as Forms, or iAS can fail.
    2. The Database node server that supports the CP, Applications, and Middle-
    Tier processes such as Forms, or iAS can fail.
    3. The Applications/Middle-Tier server that supports the CP (and Applications)
    base can fail.
    The section below describes the CM/AP configuration and the relationship
    between the CM and GSM (Global Service Management).
    The concurrent processing tier can reside on either the Applications, Middle-
    Tier, or Database Tier nodes. In a single tier configuration, non PCP
    environment, a node failure will impact Concurrent Processing operations due to
    any of these failure conditions. In a multi-node configuration the impact of
    any these types of failures will be dependent upon what type of failure is
    experienced, and how concurrent processing is distributed among the nodes in
    the configuration. Parallel Concurrent Processing provides seamless failover
    for a Concurrent Processing environment in the event that any of these types of
    failures takes place.
    In an Applications environment where Listener (server) load balancing is
    implemented on the database tier, as well as in a non-load-balanced environment,
    there are changes that must be made to the default configuration generated by
    Autoconfig so that CP initialization, processing, and PCP functionality are
    initiated properly on their respective/assigned nodes. These changes are
    described in the next section - Concurrent Manager Setup and Configuration
    Requirements in an 11i RAC Environment.
    The current Concurrent Processing architecture with Global Service Management
    consists of the following processes and communication model, where each process
    is responsible for performing a specific set of routines and communicating with
    parent and dependent processes.
    The section below describes the roles of the ICM, FNDSM, IM, and Standard
    Manager in a PCP environment.
    Internal Concurrent Manager (FNDLIBR process) - Communicates with the Service
    Manager.
    The Internal Concurrent Manager (ICM) starts, sets the number of active
    processes, monitors, and terminates all other concurrent processes through
    requests made to the Service Manager, including restarting any failed processes.
    The ICM also starts and stops, and restarts the Service Manager for each node.
    The ICM will perform process migration during an instance or node failure.
    The ICM will be
    active on a single node. This is also true in a PCP environment, where the ICM
    will be active on at least one node at all times.
    Service Manager (FNDSM process) - Communicates with the Internal Concurrent
    Manager, Concurrent Manager, and non-Manager Service processes.
    The Service Manager (SM) spawns, and terminates manager and service processes (
    these could be Forms, or Apache Listeners, Metrics or Reports Server, and any
    other process controlled through Generic Service Management). When the ICM
    terminates the SM that
    resides on the same node as the ICM will also terminate. The SM is 'chained'
    to the ICM. The SM will only reinitialize after termination when there is a
    function it needs to perform (start, or stop a process), so there may be
    periods of time when the SM is not active, and this would be normal. All
    processes initialized by the SM
    inherit the same environment as the SM. The SM environment is set by the
    APPSORA.env file and the gsmstart.sh script. The TWO_TASK used by the SM to connect
    to a RAC instance must match the instance_name from GV$INSTANCE. The apps_<sid>
    listener must be active on each CP node to support the SM connection to the
    local instance. There
    should be a Service Manager active on each node where a Concurrent or non-
    Manager service process will reside.
    Internal Monitor (FNDIMON process) - Communicates with the Internal Concurrent
    Manager.
    The Internal Monitor (IM) monitors the Internal Concurrent Manager, and
    restarts any failed ICM on the local node. During a node failure in a PCP
    environment the IM will restart the ICM on a surviving node (multiple ICM's may
    be started on multiple nodes, but only the first ICM started will eventually
    remain active, all others will gracefully terminate). There should be an
    Internal Monitor defined on each node
    where the ICM may migrate.
    Standard Manager (FNDLIBR process) - Communicates with the Service Manager and
    any client application process.
    The Standard Manager is a worker process that initiates and executes client
    requests on behalf of Applications batch and OLTP clients.
    Transaction Manager - Communicates with the Service Manager, and any user
    process initiated on behalf of a Forms, or Standard Manager request. See Note:
    240818.1 regarding Transaction Manager communication and setup requirements for
    RAC.
    Concurrent Manager Setup and Configuration Requirements in an 11i RAC
    Environment
    The basic setup procedure for using PCP is described below.
    In order to set up Parallel Concurrent Processing using AutoConfig with GSM,
    follow the instructions in the 11.5.8 Oracle Applications System Administrator's
    Guide under Implementing Parallel Concurrent Processing, using the following steps:
    1. Applications 11.5.8 and higher is configured to use GSM. Verify the
    configuration on each node (see WebIV Note:165041.1).
    2. On each cluster node edit the Applications Context file (<SID>.xml), that
    resides in APPL_TOP/admin, to set the variable <APPLDCP oa_var="s_appldcp">
    ON </APPLDCP>. It is normally set to OFF. This change should be performed
    using the Context Editor.
    3. Prior to regenerating the configuration, copy the existing tnsnames.ora,
    listener.ora and sqlnet.ora files, where they exist, under the 8.0.6 and iAS
    ORACLE_HOME locations on each node to preserve the files (i.e. /<some_
    directory>/<SID>ora/$ORACLE_HOME/network/admin/<SID>/tnsnames.ora). If any of
    the Applications startup scripts that reside in COMMON_TOP/admin/scripts/<SID>
    have been modified also copy these to preserve the files.
    4. Regenerate the configuration by running adautocfg.sh on each cluster node as
    outlined in Note:165195.1.
    5. After regenerating the configuration merge any changes back into the
    tnsnames.ora, listener.ora and sqlnet.ora files in the network directories,
    and the startup scripts in the COMMON_TOP/admin/scripts/<SID> directory.
    Each node's tnsnames.ora file must contain the aliases that exist on all
    other nodes in the cluster. When merging tnsnames.ora files, ensure that each
    node contains all other nodes' tnsnames.ora entries. This includes tns
    entries for any Applications tier nodes where a concurrent request could be
    initiated, or request output to be viewed.
    6. In the tnsnames.ora file of each Concurrent Processing node ensure that
    there is an alias that matches the instance name from GV$INSTANCE of each
    Oracle instance on each RAC node in the cluster. This is required in order
    for the SM to establish connectivity to the local node during startup. The
    entry for the local node will be the entry that is used for the TWO_TASK in
    APPSORA.env (also in the APPS<SID>_<HOSTNAME>.env file referenced in the
    Applications Listener [APPS_<SID>] listener.ora file entry "envs='MYAPPSORA=<
    some directory>/APPS<SID>_<HOSTNAME>.env)
    on each node in the cluster (this is modified in step 12).
    7. Verify that the FNDSM_<SID> entry has been added to the listener.ora file
    under the 8.0.6 ORACLE_HOME/network/admin/<SID> directory. See WebiV Note:
    165041.1 for instructions regarding configuring this entry. NOTE: With the
    implementation of GSM the 8.0.6 Applications, and 9.2.0 Database listeners
    must be active on all PCP nodes in the cluster during normal operations.
    8. AutoConfig will update the database profiles and reset them for the node
    from which it was last run. If necessary reset the database profiles back to
    their original settings.
    9. Ensure that the Applications Listener is active on each node in the cluster
    where Concurrent, or Service processes will execute. On each node start the
    database and Forms Server processes as required by the configuration that
    has been implemented.
    10. Navigate to Install > Nodes and ensure that each node is registered. Use
    the node name as it appears when executing a 'nodename' command from the Unix
    prompt on the server. GSM will add the appropriate services for each node at startup.
    11. Navigate to Concurrent > Manager > Define, and set up the primary and
    secondary node names for all the concurrent managers according to the
    desired configuration for each node workload. The Internal Concurrent
    Manager should be defined on the primary PCP node only. When defining the
    Internal Monitor for the secondary (target) node(s), make the primary node (
    local node) assignment, and assign a secondary node designation to the
    Internal Monitor, also assign a standard work shift with one process.
    12. Prior to starting the Manager processes it is necessary to edit the
    APPSORA.env file on each node in order to specify a TWO_TASK entry that contains
    the INSTANCE_NAME parameter for the local node's Oracle instance (a quick
    verification of this binding is sketched after this list), in order
    to bind each Manager to the local instance. This should be done regardless
    of whether Listener load balancing is configured, as it will ensure the
    configuration conforms to the required standards of having the TWO_TASK set
    to the instance name of each node as specified in GV$INSTANCE. Start the
    Concurrent Processes on their primary node(s). This is the environment
    that the Service Manager passes on to each process that it initializes on
    behalf of the Internal Concurrent Manager. Also make the same update to
    the file referenced by the Applications Listener APPS_<SID> in the
    listener.ora entry "envs='MYAPPSORA= <some directory>/APPS<SID>_<HOSTNAME>.
    env" on each node.
    13. Navigate to Concurrent > Manager > Administer and verify that the Service
    Manager and Internal Monitor are activated on the secondary node, and any
    other additional nodes in the cluster. The Internal Monitor should not be
    active on the primary cluster node.
    14. Stop and restart the Concurrent Manager processes on their primary node(s),
    and verify that the managers are starting on their appropriate nodes. On
    the target (secondary) node in addition to any defined managers you will
    see an FNDSM process (the Service Manager), along with the FNDIMON process (
    Internal Monitor).
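    A quick way to verify the TWO_TASK binding described in steps 6 and 12 (the alias PROD1 is a hypothetical example, not a value from this document):

    $ echo $TWO_TASK            # should print the local instance alias, e.g. PROD1
    $ sqlplus apps@$TWO_TASK
    SQL> SELECT instance_name, host_name FROM gv$instance;

    The TWO_TASK alias on each node must match the INSTANCE_NAME that gv$instance reports for that node's local instance.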
    Reference Documents
    Note 241370.1

    What is your database version? OS?
    We are using the VCP suite for planning purposes. We are using a VCP environment (12.1.3) in a decentralized structure connecting to 3 different source environments (consisting of 11i and R12). As per the Oracle note {RAC Configuration Setup For Running MRP Planning, APS Planning, and Data Collection Processes [ID 279156]} we have implemented RAC in our test environment to get better performance.
    But after doing all the setups and assigning the concurrent programs to different nodes, we are seeing a huge performance issue. The Complete Collection, which generally takes 180 mins on average in Production, is taking more than 6 hours to complete in RAC.
    So I would like to get suggestions from this forum: has anyone implemented RAC in a pure VCP (decentralized) environment? Will there be any improvement if we move our VCP instance to RAC?
    Do you have PCP enabled? Can you reproduce the issue when you stop the CM?
    Have you reviewed these docs?
    Value Chain Planning - VCP - Implementation Notes & White Papers [ID 280052.1]
    Concurrent Processing - How To Ensure Load Balancing Of Concurrent Manager Processes In PCP-RAC Configuration [ID 762024.1]
    How to Setup and Run Data Collections [ID 145419.1]
    12.x - Latest Patches and Installation Requirements for Value Chain Planning (aka APS Advanced Planning & Scheduling) [ID 746824.1]
    APSCHECK.sql Provides Information Needed for Diagnosing VCP and GOP Applications Issues [ID 246150.1]
    Thanks,
    Hussein

  • Primary Archivelog Management When Using Logical Standby

    I was wondering if anyone has seen or created a script to do archivelog management when using standby databases. We recently had an experience where our standby database and primary database got out of sync and we had to recreate the standby. I had hoped to avoid this, but it was easier than trying to recall tapes to retrieve the backups of the archivelogs from the primary.
    Management on the logical side is fine. I know of the ability to delete log files on the logical standby after they have been applied to the database. However, this does not address the management of archivelogs on the primary site. Does anyone have a script that deletes archivelogs from the primary site only after they have been applied to the logical standby? Note this is a LOGICAL standby, not a physical standby; RMAN can do this for a physical standby, but I don't know of any commands or setup that would allow it to do the same for a logical standby.
    Regards
    Tim

    Even with 1 TB of space I only have room for about 3 days' worth of archivelogs. Working in the environment that I do, there is another group in charge of the ESAN, which serves many different systems. They wanted detailed reasoning and justification for why I needed 3 days' worth of space when I requested it.
    It is not truly a problem; there are always workarounds. I can restore from the backups, and I can put in scripts to send out email notifications when the standby falls too far behind the OE. However, even with all that, I wanted to write a script that someone could run, without having to check the system, to delete the archivelogs if space becomes an issue on the production side.
    I was just hoping to see if anyone had a script to do it already.
    Thanks for all the input.
    Regards
    Tim
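    For anyone searching later, a hedged sketch of the check such a script would need. Run the first query on the logical standby and feed its result into the second query on the primary (&applied_scn is a SQL*Plus substitution variable); the actual deletion of the listed files can then be handed to RMAN or an OS script:

    -- on the logical standby: highest SCN applied so far
    SELECT applied_scn FROM dba_logstdby_progress;

    -- on the primary: archivelogs wholly below that SCN are safe-to-delete candidates
    SELECT name, sequence#
      FROM v$archived_log
     WHERE next_change# <= &applied_scn
       AND deleted = 'NO';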

  • Attribute Functions in Oracle Project Management Setup

    I have been trying to find the functionality and applicability of the menu option " Attribute Functions" that is available in Oracle Project Management Setup.
    I have used attribute groups and context to create view / placeholders to capture additional information related to the project, but have not been able to figure out the applicability of Attribute Functions.
    Please help me if anyone has used it or knows how to use it.

    Hi,
    Refer Oracle Project Management User Guide. (Section 2: Workplan and Progress Management)
    Oracle Projects archives all submitted task and workplan progress. You can correct
    submitted progress records and create backdated progress records.
    Backdating Progress
    ============
    You can create backdated progress records to fill gaps in the progress history of a task
    or project. You can select any non-cycle progress date prior to the system date and enter
    progress for this past date.
    For example, you can enter backdated progress for a Friday in the past even if a project
    has a progress cycle with Monday as the reporting day.
    You enter an As of Date, progress status, a brief overview, comments, and physical
    percent complete when you enter backdated progress.
    Backdated progress transactions are for informational purposes only.
    ~Sumit

  • Unable to export Hive managed table using Oraloader

    Hi,
    I am using MySQL as the Hive metastore and am trying to export a Hive managed table using Oraloader.
    I get the following exception in the JobTracker:
    2012-09-12 12:23:56,337 INFO org.apache.hadoop.mapred.JobTracker: Job job_201209121205_0007 added successfully for user 'oracle' to queue 'default'
    2012-09-12 12:23:56,338 INFO org.apache.hadoop.mapred.AuditLogger: USER=oracle IP=192.168.1.5 OPERATION=SUBMIT_JOB TARGET=job_201209121205_0007 RESULT=SUCCESS
    2012-09-12 12:23:56,353 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201209121205_0007
    2012-09-12 12:23:56,353 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201209121205_0007
    2012-09-12 12:23:56,594 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with users keys in /opt/ladap/common/hadoop-0.20.2-cdh3u1/hadoop-datastore/mapred/system/job_201209121205_0007/jobToken
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201209121205_0007 = 5812. Number of splits = 2
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201209121205_0007_m_000000 has split on node:/default-rack/hadoop-namenode
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201209121205_0007_m_000001 has split on node:/default-rack/hadoop-namenode
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobInProgress: job_201209121205_0007 LOCALITY_WAIT_FACTOR=1.0
    2012-09-12 12:23:56,607 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
    java.lang.NegativeArraySizeException
    at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:748)
    at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4016)
    at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobTracker: Failing job job_201209121205_0007
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary: jobId=job_201209121205_0007,submitTime=1347467036196,launchTime=1347467036607,,finishTime=1347467036607,numMaps=2,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=oracle,queue=default,status=FAILED,mapSlotSeconds=0,reduceSlotsSeconds=0,clusterMapCapacity=10,clusterReduceCapacity=2
    2012-09-12 12:23:56,639 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/opt/ladap/common/hadoop/logs/history/hadoop-namenode_1347465941865_job_201209121205_0007_oracle_OraLoader to file:/opt/ladap/common/hadoop/logs/history/done
    2012-09-12 12:23:56,648 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/opt/ladap/common/hadoop/logs/history/hadoop-namenode_1347465941865_job_201209121205_0007_conf.xml to file:/opt/ladap/common/hadoop/logs/history/done
    My oraloader console log is below:
    [oracle@rakesh hadoop]$ bin/hadoop jar oraloader.jar oracle.hadoop.loader.OraLoader -conf olh-conf/TestAS/scott/testmanagedtable/conf.xml -fs hdfs://hadoop-namenode:9000/ -jt hadoop-namenode:9001
    Oracle Loader for Hadoop Release 1.1.0.0.1 - Production
    Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
    12/09/12 12:23:42 INFO loader.OraLoader: Oracle Loader for Hadoop Release 1.1.0.0.1 - Production
    Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
    12/09/12 12:23:47 INFO loader.OraLoader: Sampling disabled, table: LDP_TESTMANAGEDTABLE is not partitioned
    12/09/12 12:23:47 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition is disabled, LDP_TESTMANAGEDTABLE is not partitioned
    12/09/12 12:23:47 INFO output.DBOutputFormat: Setting reduce tasks speculative execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
    12/09/12 12:23:47 INFO loader.OraLoader: Submitting OraLoader job OraLoader
    12/09/12 12:23:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    12/09/12 12:23:50 INFO metastore.ObjectStore: ObjectStore, initialize called
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
    12/09/12 12:23:51 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    12/09/12 12:23:51 INFO DataNucleus.Persistence: Property javax.jdo.option.NonTransactionalRead unknown - will be ignored
    12/09/12 12:23:51 INFO DataNucleus.Persistence: ================= Persistence Configuration ===============
    12/09/12 12:23:51 INFO DataNucleus.Persistence: DataNucleus Persistence Factory - Vendor: "DataNucleus" Version: "2.0.3"
    12/09/12 12:23:51 INFO DataNucleus.Persistence: DataNucleus Persistence Factory initialised for datastore URL="jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true" driver="com.mysql.jdbc.Driver" userName="root"
    12/09/12 12:23:51 INFO DataNucleus.Persistence: ===========================================================
    12/09/12 12:23:52 INFO Datastore.Schema: Creating table `DELETEME1347467032448`
    12/09/12 12:23:52 INFO Datastore.Schema: Schema Name could not be determined for this datastore
    12/09/12 12:23:52 INFO Datastore.Schema: Dropping table `DELETEME1347467032448`
    12/09/12 12:23:52 INFO Datastore.Schema: Initialising Catalog "hive", Schema "" using "None" auto-start option
    12/09/12 12:23:52 INFO Datastore.Schema: Catalog "hive", Schema "" initialised - managing 0 classes
    12/09/12 12:23:52 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    12/09/12 12:23:52 INFO DataNucleus.MetaData: Registering listener for metadata initialisation
    12/09/12 12:23:52 INFO metastore.ObjectStore: Initialized ObjectStore
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 11, column 6 : cvc-elt.1: Cannot find the declaration of element 'jdo'. - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    [the same MetaData Parser warning is repeated for lines 312, 359, 381, 416, 453, 494, 535, 576, 621, and 666 of package.jdo, each time reporting that the content of element type "class" does not match the expected JDO model]
    12/09/12 12:23:53 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MDatabase [Table : `DBS`, InheritanceStrategy : new-table]
    12/09/12 12:23:53 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MDatabase.parameters [Table : `DATABASE_PARAMS`]
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 index(es) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 0 foreign key(s) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 unique key(s) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 index(es) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 1 foreign key(s) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 1 unique key(s) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MDatabase
    12/09/12 12:23:54 INFO metastore.HiveMetaStore: 0: get_table : db=db_1 tbl=testmanagedtable
    12/09/12 12:23:54 INFO HiveMetaStore.audit: ugi=oracle     ip=unknown-ip-addr     cmd=get_table : db=db_1 tbl=testmanagedtable     
    12/09/12 12:23:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MSerDeInfo [Table : `SERDES`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MStorageDescriptor [Table : `SDS`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MTable [Table : `TBLS`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MSerDeInfo.parameters [Table : `SERDE_PARAMS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.parameters [Table : `TABLE_PARAMS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.partitionKeys [Table : `PARTITION_KEYS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.bucketCols [Table : `BUCKETING_COLS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.cols [Table : `COLUMNS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.parameters [Table : `SD_PARAMS`]
    12/09/12 12:23:55 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.sortCols [Table : `SORT_COLS`]
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 index(es) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 0 foreign key(s) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 4 index(es) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 foreign key(s) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 unique key(s) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SORT_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SORT_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SORT_COLS`
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MFieldSchema
    12/09/12 12:23:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    12/09/12 12:23:55 WARN snappy.LoadSnappy: Snappy native library not loaded
    12/09/12 12:23:55 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/09/12 12:23:56 INFO mapred.JobClient: Running job: job_201209121205_0007
    12/09/12 12:23:57 INFO mapred.JobClient: map 0% reduce 0%
    12/09/12 12:23:57 INFO mapred.JobClient: Job complete: job_201209121205_0007
    12/09/12 12:23:57 INFO mapred.JobClient: Counters: 0
    [oracle@rakesh hadoop]$
    Please help. Thanks in advance.
    Regards,
    Rakesh Kumar Rakshit

    Hi Rakesh,
    Can you share the conf.xml and map.xml files you are using? I am trying to do the same (export from Hive to an Oracle DB) and I get the following exception: ClassNotFoundException: org.apache.hadoop.hive.metastore.TableType.
    Best regards,
    Bilal
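    In case it helps, a hedged sketch of one common remedy for that ClassNotFoundException: make the Hive jars visible to the OraLoader job, both on the client classpath and via the generic -libjars option (the metastore jar name is taken from the log above; the hive-exec jar and the paths are assumptions):

    $ export HADOOP_CLASSPATH=$HIVE_HOME/lib/hive-metastore-0.7.1-cdh3u1.jar:$HIVE_HOME/lib/hive-exec-0.7.1-cdh3u1.jar:$HADOOP_CLASSPATH
    $ bin/hadoop jar oraloader.jar oracle.hadoop.loader.OraLoader \
        -conf olh-conf/TestAS/scott/testmanagedtable/conf.xml \
        -libjars $HIVE_HOME/lib/hive-metastore-0.7.1-cdh3u1.jar,$HIVE_HOME/lib/hive-exec-0.7.1-cdh3u1.jar \
        -fs hdfs://hadoop-namenode:9000/ -jt hadoop-namenode:9001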

  • Templates Project Setup and Lease Management Setups don't show up during selection set creation

    Hi,
    I am new to iSetup.
    I have just installed iSetup using the iSetup List Of Mandatory Patches For R12 (Doc ID 811040.1).
    I don't find the following templates needed by my users to create selection sets:
    - Projects Setup
    - Lease Management Setups
    Any idea why they don't show up?
    Regards,
    Jean

    It's not a backup restoration.
    I'm a newbie; I installed Oracle on my computer. During the installation, Oracle offers to create a database. I didn't change any of the settings except for the database password.
    Oracle was successfully installed.
    The database setup (wizard) failed.
    The listener was not created.
    When the wizard ran the step "Creation and starting an Oracle instance", I got ORA-03113.
    I don't know whether the log I sent you is the right one; I'm not sure. My database name is "Essai1".
    Are there any files that I could send to you?
    Thanks for your help.
    Jimmy
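    For reference, two quick checks for the missing-listener symptom described above (standard Oracle utilities; output details will vary by install):

    $ lsnrctl status       # shows whether any listener is running and what it services
    $ netca                # Net Configuration Assistant - can (re)create a default LISTENER

    If no listener exists, creating one with netca and retrying the Database Configuration Assistant is a reasonable next step; ORA-03113 during "Creation and starting an Oracle instance" can have other causes too, so also check the alert log for the Essai1 instance.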

  • In need of Oracle Project Costing, Project Billing, and Project Management setup documents - can you please help me?

    I am in need of Project Costing, Project Billing, and Project Management setup documents with screenshots. Can anyone help me?

    I have no clue about recovering the info, but when you save you should always use Save As. That clears unneeded info from the file. With the file size you have, that may be an important need.

  • Job Management

    Hello SolMan users !
    I am playing around with the new Work Centers, and what I'd like to set up first is the Job Management Work Center.
    Currently, when I open this work center, in the Overview I have all the sections (Job Requests, Job Monitoring, ...), but no numbers are displayed - there are '???' everywhere.
    Also, I have activated the Web Dynpro services for Job Management. I tried scheduling a job in a remote system with the BC-XBP scheduler, but I got a "No connection to BC-XBP" error message.
    I am totally at a loss about what I need to set up to get this working, and I cannot find any documentation.
    Could someone point out what I am missing? What is BC-XBP? Is it something that needs to be configured in SolMan/the remote systems?
    Finally, is there any documentation about Work Centers customizing ?
    Thank you.
    Thomas

    Hello,
    please begin by reading SAP Note 1122497 - Process Scheduling for SAP Solution Manager.
    The application area for this case is "SV-SMG-PSM".
    You have to install the add-on SAP CPS by Redwood. It is included in the SAP NetWeaver license.
    The add-on component is ST-PSM.
    Best regards
    Thomas Hiesener
