Configuration of Issue Management tables

Hello -
I am looking to configure the tables in Issue Management but I cannot seem to find them via the SPRO transaction.
Is there a guide on how to do the configuration for Issue Management?
Please let me know
Thanks!
Nadine

Please look at Note 1019583. In transaction SOLMAN_ISSUE_MGMT there are also the usual Web Dynpro ALV customizing possibilities (independent of Issue Management).

Similar Messages

  • Issue Management Configuration?

    Hi,
    Can you please point me to basic configuration for Issue Management?
    My requirement is very basic.
    Thank you
    Somar

    Hi Somar,
    just perform the steps described in the Implementation Guide in SAP Solution Manager.
    Run transaction SPRO, choose SAP Reference IMG, expand the tree for SAP Solution Manager.
    Perform all activities listed under Basic Settings; there you will also find the required steps for Issue Management.
    Best regards,
    Ruediger Stoecker

  • Issue management  configuration docs required

    Hello SAP gurus,
    Can anyone forward me an Issue Management configuration document, and where do we use this in CRM?
    Regards,
    divya


  • RV320 Bug - Service Management Table (Port Forwarding)

    I'm unable to add more than 16 entries for port forwarding.
    It's an RV320 on v1.1.1.06 (the newest to date) and it doesn't accept more than 16 entries in the "Service Management Table" required for port forwarding. As soon as I try to enter number 17 and hit save, the window closes like it always does, but for a short time you can see it say "Critical failure. Please contact support." Everything else works, except for the entry in the Service Management Table. I'm also unable to use it in the port forwarding section; it just doesn't save the entry. I'm unable to add any services to the list unless I delete others, but then it only works again up to number 16.
    Actually the "limit" is 37, because it comes with 21 services entered out of the box.
    I couldn't find any bug-reporting website that I could use without a contract, so I'm seeking help here.
    Anybody else having this issue, or is it just my device?

    10 days ago a post was made in https://supportforums.cisco.com/discussion/12353771/cannot-manage-service-list-all-waited-unacceptably-long-fix indicating there is a new firmware in beta test; I've contacted support to try to get a copy.
    I'm moving off Draytek. I have a 2830 with the latest firmware and various weird issues that they've confirmed are bugs but cannot provide a due date to fix: DHCP randomly handing out wrong DNS server addresses; flaky tagged-VLAN support giving out DHCP details from the wrong VLAN (worked around by using one cable from the switch per VLAN and port-based rather than tagged VLANs); and App Enforcement for IM blocking causing SMTP and Live.com login issues. And that's just what cropped up last week with the unit at work here. We're still using it as our live router, as we can't put the RV320 in place until we can configure all the required ACLs, which needs more than 16 service entries.

  • AS Java (Config Wizard Configuration to Solution Manager)

    Hello.
    While I try to configure "NW: Solution Manager - AS Java (Config Wizard configuration to Solution Manager)" in NWA, I receive the following error:
    Error: Name or password is incorrect (repeat logon)
    Step: J2EE User: SOLMAN with SID and Client extension
    Why does the program use the user "CTC_NULL" (in the attached log) to log on to the destination?
    I never created such a user.
    I am grateful for any tip.
    Log:
    J2EE User: SOLMAN with SID and Client extension
    Description
    No specified detailed description
    State
    START_ACTION_ERROR
    Result
    A user with the name SM2CTCSMR001 exists in the System
    Checking the step
    Please go to the J2EE Engine Useradministration using an user with administrator role (e.g. Administrator)
    Take care that 'user' is selected in the combo box and enter the user name SM2CTCSMR001 into the input field, then press the 'Start' button
    If the user exists, he will be shown in the table, if you select the entry you can see details about him
    Executing the step manually
    Please go to the J2EE Engine Useradministration using an user with administrator role (e.g. Administrator)
    Press the 'Create User' button
    The create user dialog appears
    Please execute following steps in the 'General Info' Tab to create the user:
    In the input field Logon-Id enter the name SM2CTCSMR001
    In the combo box 'Security Policy' select the entry 'System User (Internal RFC and Background Processing)'
    Please enter a password and confirmation for the user
    Press the save button to save the user
    Support Information
    Type of Action
    InvokeServiceDebug Events
    error message
    Error: Name or password is incorrect (repeat logon)
    InvokeService:J2EE User: SOLMAN with SID and Client extension
    Library: sap.com/tclmconfig~content
    SourcePath ./com_sap_sm_ctc_scenario_simulate/com_sap_sm_ctc_scenario_ui_process/ext_1206447991146/proc_1197052628227/proc_1207325025157/proc_1207755933502/maintainUser
    Time 2012/02/29 at 08:46:42
    WSDL content/common/service/usermanagement/maintainUser.wsdl
    Destination parameters
    user : CTC_NULL
    password : % secure content %
    host : host1
    sysnr : 00
    client : 000
    language : EN
    Execute Java Service
    Library: sap.com/tclmctcutilcore_ear
    Class: com.sap.ctc.util.core.services.UserFacade
    Method: void com.sap.ctc.util.core.services.UserFacade.maintainUser(java.lang.String, java.lang.String, java.lang.String)
    Arguments (3)
    userName : SM2CTCSMR001
    password : % secure content %
    userType : password1
    InvokeService- Result: ERROR
    Refresh Env. Messages: false
    Duration: 0.141 sec
    Exception Class: com.sap.mw.jco.JCO$Exception
    Exception Message: Name or password is incorrect (repeat logon)

    Hi,
    As far as we can see, you're facing an authorization issue. To solve it:
    Check that the user executing the task (SOLMAN, I suppose) is assigned the required authorization. You can run an authorization trace using transaction ST01 (check only the authorization box) and reproduce the issue; the resulting trace should show which authorization check failed. Once this is established, the necessary authority can be assigned to the user.
    Thanks,
    Regards
    Vikram

  • Issue Management scenario Troubleshooting dump CX_WDR_ADAPTER_EXCEPTION

    Hi,
    I'm working through the IMG to configure the Service Desk scenario in Solution Manager 7.0 SPS15 (SID=SMG), on Windows Server 2003 / SQL Server 2005.
    The problem is customizing the Issue Management scenario at the step "Troubleshooting".
    When I execute this action, I go to transaction SMSY -> Settings -> Self-Diagnosis, and an error with this title appears:
    Error when processing your request
    What has happened?
    The URL http://solution.grupotec.local:8002/sap/bc/webdynpro/sap/dswp_sd_settings/ was not called due to an error.
    Note
    The following error text was processed in the system SMG : Adapter error in &VIEW_ELEMENT_TYPE& "_08" of view "DSWP_SD_SETTINGS.START": Context binding of property ? cannot be resolved: The Mapping to Node COMPONENTCONTROLLER.1.LAYOUT Has Not Been Completed.
    The error occurred on the application server solution_SMG_02 and in the work process 0 .
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: RAISE_FOR of program CX_WDR_ADAPTER_EXCEPTION======CP
    Method: RAISE_BINDING_EXCEPTION of program CL_WDR_VIEW_ELEMENT_ADAPTER===CP
    Method: GET_BOUND_ELEMENT of program CL_WDR_VIEW_ELEMENT_ADAPTER===CP
    Method: GET_ATTRIBUTE_INTERNAL of program CL_WDR_VIEW_ELEMENT_ADAPTER===CP
    Method: IF_WDR_VIEW_ELEMENT_ADAPTER~SET_CONTENT of program /1WDA/L8STANDARD==============CP
    Method: IF_WDR_VIEW_ELEMENT_ADAPTER~SET_CONTENT of program /1WDA/L7STANDARD==============CP
    Method: CONV_VIEW_INTO_VE_ADAPTER_TREE of program CL_WDR_INTERNAL_WINDOW_ADAPTERCP
    Method: SET_CONTENT_BY_WINDOW of program CL_WDR_INTERNAL_WINDOW_ADAPTERCP
    Method: RENDER_WINDOWS of program CL_WDR_CLIENT_SSR=============CP
    Method: IF_WDR_RESPONSE_RENDERER~RENDER_USER_INTERFACE_UPDATES of program CL_WDR_CLIENT_SSR=============CP
    What can I do?
    If the termination type was RABAX_STATE, then you can find more information on the cause of the termination in the system SMG in transaction ST22.
    If the termination type was ABORT_MESSAGE_STATE, then you can find more information on the cause of the termination on the application server solution_SMG_02 in transaction SM21.
    If the termination type was ERROR_MESSAGE_STATE, then you can search for more information in the trace file for the work process 0 in transaction ST11 on the application server solution_SMG_02 . In some situations, you may also need to analyze the trace files of other work processes.
    If you do not yet have a user ID, contact your system administrator.
    Error code: ICF-IE-http -c: 010 -u: SOLMANADM -l: E -s: SMG -i: solution_SMG_02 -w: 0 -d: 20080618 -t: 115252 -v: RABAX_STATE -e: UNCAUGHT_EXCEPTION
    HTTP 500 - Internal Server Error
    Your SAP Internet Communication Framework Team
    This error creates a dump:
    Runtime Errors         UNCAUGHT_EXCEPTION
    Exception              CX_WDR_ADAPTER_EXCEPTION
    Short text
        An exception occurred that was not caught.
    What happened?
        The exception 'CX_WDR_ADAPTER_EXCEPTION' was raised, but it was not caught
         anywhere along
        the call hierarchy.
        Since exceptions represent error situations and this error was not
        adequately responded to, the running ABAP program
         'CX_WDR_ADAPTER_EXCEPTION======CP' has to be
        terminated.
    Error analysis
        An exception occurred which is explained in detail below.
        The exception, which is assigned to class 'CX_WDR_ADAPTER_EXCEPTION', was not
         caught and
        therefore caused a runtime error.
        The reason for the exception is:
        Adapter error in &VIEW_ELEMENT_TYPE& "_08" of view "DSWP_SD_SETTINGS.START":
        Context binding of property ? cannot be resolved: The Mapping to Node
        COMPONENTCONTROLLER.1.LAYOUT Has Not Been Completed.
        The occurrence of the exception is closely related to the occurrence of
        a previous exception "CX_WD_CONTEXT", which was raised in the program
         "CL_WDR_CONTEXT_NODE_MAP=======CP",
        specifically in line 25 of the (include) program
         "CL_WDR_CONTEXT_NODE_MAP=======CM001".
        The cause of the exception was:
        The Mapping to Node COMPONENTCONTROLLER.1.LAYOUT Has Not Been Completed.
    Information on where terminated
        Termination occurred in the ABAP program "CX_WDR_ADAPTER_EXCEPTION======CP" -
         in "RAISE_FOR".
        The main program was "SAPMHTTP ".
        In the source code you have the termination point in line 45
        of the (Include) program "CX_WDR_ADAPTER_EXCEPTION======CM004".
    I've tested the /sap/bc/webdynpro/sap/dswp_sd_settings/ service in transaction SICF (right click -> Test Service): a logon popup appears, I type user j2ee_admin and its password, and the same error appears.
    I'm logged on with the user used to configure Solution Manager, which has the SAP_ALL and SAP_NEW profiles plus a Z_RFC role (with the S_RFC and S_RFCACL authorization objects).
    Could you please help me?
    Thanks and Regards
    Raul

    I've resolved this problem with SAP Note:
    1157740 - Self Diagnosis dumps when Settings is accessed from Menu
    Thanks and Regards
    Raul

  • Configuration for Transaction Management

    Hi,
    I am working with WebLogic Server SP1. I am facing a problem configuring Transaction Management.
    I have a session EJB, say SEJB, and two entity EJBs, say EEJB1 and EEJB2. EEJB1 is for the parent table and EEJB2 is for the child table.
    I have two records in the database, REC1 and REC2.
    REC2 has dependencies and cannot be deleted, while REC1 can be deleted.
    In weblogic-ejb-jar.xml I have configured the following:
    <weblogic-enterprise-bean>
      <ejb-name>SEJB</ejb-name>
      <stateless-session-descriptor>
        <pool>
          <max-beans-in-free-pool>300</max-beans-in-free-pool>
          <initial-beans-in-free-pool>150</initial-beans-in-free-pool>
        </pool>
      </stateless-session-descriptor>
      <reference-descriptor>
        <ejb-reference-description>
          <ejb-ref-name>EEJB</ejb-ref-name>
          <jndi-name>EEJBean</jndi-name>
        </ejb-reference-description>
      </reference-descriptor>
      <jndi-name>SEJBn</jndi-name>
    </weblogic-enterprise-bean>
    Further, in ejb-jar.xml I have set the <trans-attribute> to RequiresNew for the session bean and Supports for the EEJBs. Something like this:...
    <container-transaction>
      <method>
        <ejb-name>SEJB</ejb-name>
        <method-intf>Remote</method-intf>
        <method-name>*</method-name>
      </method>
      <trans-attribute>RequiresNew</trans-attribute>
    </container-transaction>
    In spite of this setting, when I select the two records REC1 and REC2 through the client at the same time and delete them, REC1 gets deleted while REC2 does not, giving a TransactionRollbackException.
    Ideally, since both are part of a single transaction, both should have been rolled back.
    Please suggest if I am missing some kind of configuration parameter or setting. I'll be more than happy to provide more details to get the problem solved.
    I can also be reached at [email protected]
    Thanks in advance,
    Regards,
    Rishi
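    A hedged sketch of a likely cause: if the client deletes REC1 and REC2 via two separate calls to SEJB, each call starts its own RequiresNew transaction, so REC1's delete commits independently of REC2's rollback. For the two deletes to roll back together, both must be issued inside a single session-bean method call, and the entity beans' methods should enlist in that caller's transaction, e.g. with Required in ejb-jar.xml (bean names EEJB1/EEJB2 taken from the question above):
    <!-- Sketch: Required makes the entity methods always run inside the
         caller's transaction (here SEJB's RequiresNew transaction), so
         both deletes commit or roll back as one unit. -->
    <container-transaction>
      <method>
        <ejb-name>EEJB1</ejb-name>
        <method-intf>Remote</method-intf>
        <method-name>*</method-name>
      </method>
      <method>
        <ejb-name>EEJB2</ejb-name>
        <method-intf>Remote</method-intf>
        <method-name>*</method-name>
      </method>
      <trans-attribute>Required</trans-attribute>
    </container-transaction>
    Note that Supports also joins an existing transaction, so this change alone only matters when the entity beans are invoked outside one; the decisive point is issuing both deletes from a single SEJB method.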
              

    TCode: SWF5
    Enterprise_Extensions:
    -> EA-FS
    Enterprise_Business_Functions:
    -> FIN_TRM*
    Rg
    Lorenz

  • Sol Man Issue Management

    Hi Team,
    I am configuring Issue Management in Solution Manager. I am using SAP EHP 1 for SAP Solution Manager 7.0, ST: SAPKITL433.
    I have created transaction type ZSSI (a copy of SLFI).
    1. When I try to create an issue in DSWP and in SOLAR01, I am only able to create SLFI, not ZSSI. How do I get my transaction type ZSSI into DSWP and SOLAR01?
    2. When I create an issue, I am not able to select the "Project" in the "Context" tab. How do I link a project with an issue?
    3. While creating an issue in DSWP, I need to specify an entry in the "Subject" field. I would like to add custom-defined "Subject" entries. How do I add entries to the "Subject" field apart from the standard dropdown?
    4. In the "Service Session" tab, I am not able to assign any service sessions. How do I assign them? How do I create a service session in DSWP?
    Regards
    PSK
    Edited by: psk.psg on Oct 1, 2010 1:35 PM

    Hi PSK!
    Regarding the customizing, I think that Note 1019583 would be helpful for you, as it provides information on customizing in Issue Management. Check whether you have done it correctly.
    Regards,
    Daniel.

  • Unable to export Hive managed table using Oraloader

    Hi,
    I am using MySQL as the Hive metastore and am trying to export a Hive managed table using Oraloader.
    I get the following exception in the JobTracker:
    2012-09-12 12:23:56,337 INFO org.apache.hadoop.mapred.JobTracker: Job job_201209121205_0007 added successfully for user 'oracle' to queue 'default'
    2012-09-12 12:23:56,338 INFO org.apache.hadoop.mapred.AuditLogger: USER=oracle IP=192.168.1.5 OPERATION=SUBMIT_JOB TARGET=job_201209121205_0007 RESULT=SUCCESS
    2012-09-12 12:23:56,353 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201209121205_0007
    2012-09-12 12:23:56,353 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201209121205_0007
    2012-09-12 12:23:56,594 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with users keys in /opt/ladap/common/hadoop-0.20.2-cdh3u1/hadoop-datastore/mapred/system/job_201209121205_0007/jobToken
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201209121205_0007 = 5812. Number of splits = 2
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201209121205_0007_m_000000 has split on node:/default-rack/hadoop-namenode
    2012-09-12 12:23:56,606 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201209121205_0007_m_000001 has split on node:/default-rack/hadoop-namenode
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobInProgress: job_201209121205_0007 LOCALITY_WAIT_FACTOR=1.0
    2012-09-12 12:23:56,607 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
    java.lang.NegativeArraySizeException
    at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:748)
    at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4016)
    at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobTracker: Failing job job_201209121205_0007
    2012-09-12 12:23:56,607 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary: jobId=job_201209121205_0007,submitTime=1347467036196,launchTime=1347467036607,,finishTime=1347467036607,numMaps=2,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=oracle,queue=default,status=FAILED,mapSlotSeconds=0,reduceSlotsSeconds=0,clusterMapCapacity=10,clusterReduceCapacity=2
    2012-09-12 12:23:56,639 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/opt/ladap/common/hadoop/logs/history/hadoop-namenode_1347465941865_job_201209121205_0007_oracle_OraLoader to file:/opt/ladap/common/hadoop/logs/history/done
    2012-09-12 12:23:56,648 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/opt/ladap/common/hadoop/logs/history/hadoop-namenode_1347465941865_job_201209121205_0007_conf.xml to file:/opt/ladap/common/hadoop/logs/history/done
    My oraloader console log is below:
    [oracle@rakesh hadoop]$ bin/hadoop jar oraloader.jar oracle.hadoop.loader.OraLoader -conf olh-conf/TestAS/scott/testmanagedtable/conf.xml -fs hdfs://hadoop-namenode:9000/ -jt hadoop-namenode:9001
    Oracle Loader for Hadoop Release 1.1.0.0.1 - Production
    Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
    12/09/12 12:23:42 INFO loader.OraLoader: Oracle Loader for Hadoop Release 1.1.0.0.1 - Production
    Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
    12/09/12 12:23:47 INFO loader.OraLoader: Sampling disabled, table: LDP_TESTMANAGEDTABLE is not partitioned
    12/09/12 12:23:47 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition is disabled, LDP_TESTMANAGEDTABLE is not partitioned
    12/09/12 12:23:47 INFO output.DBOutputFormat: Setting reduce tasks speculative execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
    12/09/12 12:23:47 INFO loader.OraLoader: Submitting OraLoader job OraLoader
    12/09/12 12:23:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    12/09/12 12:23:50 INFO metastore.ObjectStore: ObjectStore, initialize called
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
    12/09/12 12:23:51 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
    12/09/12 12:23:51 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    12/09/12 12:23:51 INFO DataNucleus.Persistence: Property javax.jdo.option.NonTransactionalRead unknown - will be ignored
    12/09/12 12:23:51 INFO DataNucleus.Persistence: ================= Persistence Configuration ===============
    12/09/12 12:23:51 INFO DataNucleus.Persistence: DataNucleus Persistence Factory - Vendor: "DataNucleus" Version: "2.0.3"
    12/09/12 12:23:51 INFO DataNucleus.Persistence: DataNucleus Persistence Factory initialised for datastore URL="jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true" driver="com.mysql.jdbc.Driver" userName="root"
    12/09/12 12:23:51 INFO DataNucleus.Persistence: ===========================================================
    12/09/12 12:23:52 INFO Datastore.Schema: Creating table `DELETEME1347467032448`
    12/09/12 12:23:52 INFO Datastore.Schema: Schema Name could not be determined for this datastore
    12/09/12 12:23:52 INFO Datastore.Schema: Dropping table `DELETEME1347467032448`
    12/09/12 12:23:52 INFO Datastore.Schema: Initialising Catalog "hive", Schema "" using "None" auto-start option
    12/09/12 12:23:52 INFO Datastore.Schema: Catalog "hive", Schema "" initialised - managing 0 classes
    12/09/12 12:23:52 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    12/09/12 12:23:52 INFO DataNucleus.MetaData: Registering listener for metadata initialisation
    12/09/12 12:23:52 INFO metastore.ObjectStore: Initialized ObjectStore
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 11, column 6 : cvc-elt.1: Cannot find the declaration of element 'jdo'. - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 312, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 359, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 381, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 416, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 453, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 494, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 535, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 576, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 621, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/opt/ladap/common/hadoop-0.20.2-cdh3u1/lib/hive-metastore-0.7.1-cdh3u1.jar!/package.jdo" at line 666, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
    12/09/12 12:23:53 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MDatabase [Table : `DBS`, InheritanceStrategy : new-table]
    12/09/12 12:23:53 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MDatabase.parameters [Table : `DATABASE_PARAMS`]
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 index(es) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 0 foreign key(s) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 unique key(s) for table `DBS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 2 index(es) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 1 foreign key(s) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO Datastore.Schema: Validating 1 unique key(s) for table `DATABASE_PARAMS`
    12/09/12 12:23:54 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MDatabase
    12/09/12 12:23:54 INFO metastore.HiveMetaStore: 0: get_table : db=db_1 tbl=testmanagedtable
    12/09/12 12:23:54 INFO HiveMetaStore.audit: ugi=oracle     ip=unknown-ip-addr     cmd=get_table : db=db_1 tbl=testmanagedtable     
    12/09/12 12:23:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MSerDeInfo [Table : `SERDES`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MStorageDescriptor [Table : `SDS`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MTable [Table : `TBLS`, InheritanceStrategy : new-table]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MSerDeInfo.parameters [Table : `SERDE_PARAMS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.parameters [Table : `TABLE_PARAMS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.partitionKeys [Table : `PARTITION_KEYS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.bucketCols [Table : `BUCKETING_COLS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.cols [Table : `COLUMNS`]
    12/09/12 12:23:54 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.parameters [Table : `SD_PARAMS`]
    12/09/12 12:23:55 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.sortCols [Table : `SORT_COLS`]
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 index(es) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 0 foreign key(s) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SERDES`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 4 index(es) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 foreign key(s) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 unique key(s) for table `TBLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SDS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SD_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `BUCKETING_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `COLUMNS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `PARTITION_KEYS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `TABLE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SERDE_PARAMS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 2 index(es) for table `SORT_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 foreign key(s) for table `SORT_COLS`
    12/09/12 12:23:55 INFO Datastore.Schema: Validating 1 unique key(s) for table `SORT_COLS`
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
    12/09/12 12:23:55 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MFieldSchema
    12/09/12 12:23:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    12/09/12 12:23:55 WARN snappy.LoadSnappy: Snappy native library not loaded
    12/09/12 12:23:55 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/09/12 12:23:56 INFO mapred.JobClient: Running job: job_201209121205_0007
    12/09/12 12:23:57 INFO mapred.JobClient: map 0% reduce 0%
    12/09/12 12:23:57 INFO mapred.JobClient: Job complete: job_201209121205_0007
    12/09/12 12:23:57 INFO mapred.JobClient: Counters: 0
    [oracle@rakesh hadoop]$
    Please help. Thanks in advance.
    Regards,
    Rakesh Kumar Rakshit

    Hi Rakesh,
    Can you share the conf.xml and map.xml files you are using? I am trying to do the same (export from Hive to an Oracle DB) and I get the following exception: ClassNotFoundException: org.apache.hadoop.hive.metastore.TableType.
    Best regards,
    Bilal

  • INVENTORY & ORDER MANAGEMENT Tables join conditions required

    Hi All,
    I have to create a view that displays item_number, item_description, quantity_in_warehouse and pendingquantity_in_salesorder.
    The query is based on the INVENTORY & ORDER MANAGEMENT tables. I have written the query below, but it is not giving the expected output. Kindly go through it once and correct me if I did anything wrong while writing the join conditions.
    SELECT mtlb.segment1 Item_Number,
    mtlb.description Item_Description,
    mtld.transaction_quantity Quantity_in_Wearhouse,
    oel.ordered_quantity PendingQuantity_in_SalesOrder
    FROM mtl_system_items_b mtlb,
    mtl_onhand_quantities_detail mtld,
    oe_order_headers_all oeh,
    oe_order_lines_all oel
    WHERE mtlb.inventory_item_id = mtld.inventory_item_id
    AND mtlb.organization_id = mtld.organization_id
    AND mtlb.inventory_item_id = oel.inventory_item_id
    AND oeh.header_id = oel.header_id
    AND oeh.org_id = oel.org_id
    AND oeh.ship_from_org_id = mtlb.organization_id
    AND oel.ship_from_org_id = mtlb.organization_id
    AND mtld.subinventory_code LIKE 'Warehouse'
    AND oel.flow_status_code LIKE 'AWAITING_SHIPPING'
    ORDER BY mtlb.segment1;
    It is a bit urgent; please help me resolve this issue.
    Thanks.
    Edited by: user627525 on Jul 19, 2010 4:35 AM

    Hi Jonny,
    Thanks for your reply; it is very useful for me. I need one small clarification.
    I have executed the query that you gave. Please find the following output:
    segment1         order_number  line_number  shipment_number  on_hand
    210020004000011  10005         1            1                139150
    210020004000011  10006         1            1                139150
    210020004000013  10005         2            1                27675
    210020004000013  10006         2            1                27675
    I have modified the query and added the column ordered_quantity from the oe_order_lines_all table.
    SELECT
    mtlb.segment1 as Item_Number,
    mtlb.description as Item_Description,
    sum(mtld.primary_transaction_quantity) as Quantity_in_Whouse,
    oel.ordered_quantity as PQuantity_in_SOrder
    FROM oe_order_headers_all oeh,
    oe_order_lines_all oel,
    mtl_system_items_b mtlb,
    mtl_onhand_quantities_detail mtld
    WHERE oel.header_id = oeh.header_id
    AND oel.inventory_item_id = mtlb.inventory_item_id
    AND oel.ship_from_org_id = mtlb.organization_id
    AND oel.inventory_item_id = mtld.inventory_item_id
    AND oel.ship_from_org_id = mtld.organization_id
    AND oel.flow_status_code = 'AWAITING_SHIPPING'
    AND mtld.subinventory_code = 'Warehouse'
    GROUP By mtlb.segment1,mtlb.description,oel.ordered_quantity
    ORDER BY mtlb.segment1;
    I have executed the above query; Quantity_in_Whouse shows double the actual available quantity.
    item_number      item_description  Quantity_in_Whouse  PQuantity_in_SOrder
    210020004000011  item1             278300              100
    210020004000013  item3             55350               125
    When I executed the query below, Quantity_in_Whouse showed 139050 and 27550 respectively.
    SELECT mtlb.segment1 Item_Number,
    mtlb.description,
    sum(mtld.primary_transaction_quantity) Quantity_in_Whouse
    FROM mtl_system_items_b mtlb,
    mtl_onhand_quantities_detail mtld
    WHERE mtlb.inventory_item_id = mtld.inventory_item_id
    AND mtlb.organization_id = mtld.organization_id
    AND mtld.subinventory_code LIKE 'Warehouse'
    and mtlb.segment1 in ('210020004000011','210020004000013')
    group by mtlb.segment1, mtlb.description
    ORDER BY mtlb.segment1;
    item_number      item_description  Quantity_in_Whouse
    210020004000011  item1             139050
    210020004000013  item3             27550
    Please suggest a solution so that I can display Quantity_in_Whouse as 139050 and 27550.
    If I am doing anything wrong, please correct me.
    Thanks in advance.
    Edited by: user627525 on Jul 19, 2010 7:54 AM
    Edited by: user627525 on Jul 19, 2010 7:56 AM
    Edited by: user627525 on Jul 19, 2010 9:39 AM
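    A sketch of one way to remove the doubling, using the same tables as in this thread (assuming the duplicated on-hand figures come from each item matching two order lines): aggregate mtl_onhand_quantities_detail in an inline view first, so each order line joins exactly one on-hand row per item and organization.
    SELECT mtlb.segment1 AS item_number,
           mtlb.description AS item_description,
           onh.quantity_in_whouse,
           oel.ordered_quantity AS pquantity_in_sorder
    FROM oe_order_headers_all oeh,
         oe_order_lines_all oel,
         mtl_system_items_b mtlb,
         -- pre-aggregate on-hand per item and organization
         (SELECT inventory_item_id,
                 organization_id,
                 SUM(primary_transaction_quantity) AS quantity_in_whouse
          FROM mtl_onhand_quantities_detail
          WHERE subinventory_code = 'Warehouse'
          GROUP BY inventory_item_id, organization_id) onh
    WHERE oel.header_id = oeh.header_id
    AND oel.inventory_item_id = mtlb.inventory_item_id
    AND oel.ship_from_org_id = mtlb.organization_id
    AND oel.inventory_item_id = onh.inventory_item_id
    AND oel.ship_from_org_id = onh.organization_id
    AND oel.flow_status_code = 'AWAITING_SHIPPING'
    ORDER BY mtlb.segment1;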

  • Issue management security

    Hi, does anyone know how to restrict access for certain users to approve a transport in Issue Management?
    We customized our Issue Management status screen for "transport approval" and we would like to give it to certain users only.
    Thanks Mike

    Hi,
    Application visibility to unwanted users can be stopped by using the Authorizations per MCD, i.e. per application.
    You can decide which authorization objects are mandatory to view the data in a particular application; then only those users who have the necessary authorization will be able to see the data in the application.
    You cannot stop the user SAP_C from synchronizing with the server because this user is a valid user in the DOE. However, you can define a configuration that prevents data changed by user SAP_C from being processed on the DOE. This is called the "non-repudiation" scenario, where an invalid user is trying to change the data on the mobile client. This case can be handled on the DOE: depending on the security setting, the message processing is stopped and the admin is notified accordingly.
    Regards,
    Ramanath.

  • Issue Management clarification

    Hi All,
    I have the following doubts related to Issue Management:
    1. How is an issue sent to the processor automatically? Where is the configuration for this done?
    2. Can we configure automatic email notification to the processor?
    I tried to define an email notification but it was not successful: only the issue is triggered, and the smartform is not getting triggered.
    Kindly advise where we can find this configuration.
    Regards,
    Subhashini.

    Hi Fernando,
    When I go through the help, I understand that it activates the automatic notification when technical attributes are changed.
    I am not sure which technical attributes they are referring to: is it the status change, the priority, or something else?
    I have copied the SLFI action profile and made a Z copy of it. Initially, when the issue is created, the automatic notification works fine, but I want this notification for my custom-defined statuses.
    When I look into the action profile there is only one action item, where it logs the change made.
    I want to know where I can control this automatic notification for a set of statuses.
    Kindly advise how I can tackle this problem.
    Regards,
    Subhashini.

  • Issue management during implementation

    Hello!
    We are planning to use Issue Management in the blueprinting and configuration phases of our rollout. We have SolMan 7.0 SPS16 with Work Centers configured. In SOLAR01/02, when I create an issue at a relevant project structure node, I get the "Issue Maintenance" window.
    How or where do I configure the following settings in the "Issue Maintenance" window?
    1. Priority - I want to customize the drop-down menu with my own. (Instead of the default 1. very high...etc.)
    2. Subject - I want to customize the drop-down menu entries in this field too.
    3. I want to hide the tabs "Service sessions" and "Change requests" as we are not using this.
    Appreciate your help and inputs.
    Thanks.
    Mike
    Edited by: Mike Sikorsky on Jul 14, 2008 8:32 PM

    Hi
    1) Use transaction OSC3 or OCS3, I can't remember which one.
    2) You will have to use transaction SPRO, navigate to "Define Catalogs, Codes, Code Groups" and make the changes there. After that you need to adjust/customize the Code Groups, followed by the Code Group Profile, followed by the Subject Profile. Then you will have to make changes to transaction type SLFI. It's quite a lot of work.
    3) I am not sure whether you can hide those tabs.

  • Configurator in order management

    Hi guys,
    I need to write a report which shows the configuration exactly as it appears in Configurator in Order Management. I figured out that although it uses BOM explosion to explode the model, it applies some rules, so what's in the BOM differs from what shows up in Configurator. Can anyone tell me the tables in the CZ schema which can help me in writing the report? I have been looking on Metalink but no luck.
    Thanks guys!!

    Chintu --
    The way I'm reading your question, you're asking how to query the Configurator schema for the sub-Lines that are passed to Order Management after a configuration is saved. If I am not interpreting your question correctly, please provide further clarification. If I am interpreting it correctly, then....
    After a configuration is saved, the top-level Order Line record in ONT.OE_ORDER_LINES_ALL has its CONFIG_HEADER_ID and CONFIG_REV_NBR fields populated. These are the primary keys to access the corresponding saved configuration data in Configurator's "Saved Configuration" subschema (that is, the tables beginning with CZ_CONFIG_...). The CZ.CZ_CONFIG_ITEMS table holds the items that were saved and passed through to the Order sub-Lines. (Other data is saved there as well, so you may want to check the documentation for CZ.CZ_CONFIG_ITEMS in the Electronic Technical Reference Manual at http://etrm.oracle.com/ to help determine how to differentiate the BOM Items from the other data.)
    When the CTO Create Configuration program is run after Order booking, the BOM is created, which, in addition to the Items in the saved configuration, will include all of the Required Components specified in the Model BOM (what I assume you're referring to as "BOM explosion"). Since Configurator is only aware of Optional Components (and their ancestors), the Required Components do not appear in the saved configuration data. If you need to report on those as well, you will need to employ some different mechanism.
    Hope this helps.
    Eogan
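    As a hedged sketch of the join Eogan describes (the CZ-side key column names are assumptions here; verify them against CZ_CONFIG_ITEMS in the eTRM):
    -- List the saved-configuration items behind each configured order line.
    -- cz_config_items.config_hdr_id / config_rev_nbr are assumed names.
    SELECT oel.line_id,
           oel.config_header_id,
           oel.config_rev_nbr,
           cci.*
    FROM oe_order_lines_all oel,
         cz_config_items cci
    WHERE oel.config_header_id IS NOT NULL
    AND cci.config_hdr_id = oel.config_header_id
    AND cci.config_rev_nbr = oel.config_rev_nbr;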

  • Bare minimum configuration for warehouse management

    Hello,
    For one of our projects, there is a requirement for Warehouse Management merely to enable tracking of received material down to the storage bin level, as they manage the inventory in storage bins in the existing warehouses.
    No other major requirement is expected from WM, such as automation of putaway, etc.
    A few of the inventory scenarios are Goods Receipt, Goods Issue, warehouse-to-warehouse transfer, bin-to-bin transfer, Physical Inventory, subcontracting for repairs, and Vendor Consignment; they will also handle project special stock.
    The configurations for Inventory Management are already enabled for all the above processes. I now need to know the bare minimum configurations required to enable the WM-related process following each of the IM processes, like putaway after Goods Receipt, stock removal before Goods Issue, etc.
    The configuration of the enterprise structure to support Warehouse Management has been completed by creating the warehouses and assigning the storage locations to them.
    A detailed step-by-step process would be very helpful for me at this stage.
    And I surely will assign points for all the useful answers.
    Regards,
    Zafar.

    Hi,
    If you want to define new storage types other than the standard ones, please define them; otherwise, you can assign the existing storage types to the newly defined warehouse.
    You also have to define storage bin types and storage bin structures.
    All this you can find under the IMG path:
    SPRO->Logistics Execution->Warehouse Management->Master Data.
    Regards,
    Pavani.
