Automatic data collection evaluation issue

We are currently evaluating several automatic data collection applications that we plan to integrate with our Oracle Applications. We have constructed most of the RFQ using this template: http://www.highjumpsoftware.com/adc/Vertexera062503/
However, we want to make sure that all of the system integration issues have been accounted for.
Does anyone have any success stories or horror stories about integrating automatic data collection products with an ERP system?
Thank you!

cfhttp is a good place to start.

Similar Messages

  • DC520 Check Open Data Collection - XML issues

    Hello,
    I have a problem: if I save DC (data collection) via the XML Shop Floor interface, the activity hook DC520, which is attached as PRE_COMPLETE, does not see the DC and complains that the DC for this operation is missing.
    From the POD it works without any problems.
    Any advice from your side?
    Regards,
    Kai

    Hello Kai,
    As it works when using the POD, some data field(s) must differ in the records saved via XML. Most likely you are missing a tag in your XML that must match for DC520 to find the record, so compare the database and the XML message for:
    DC_GROUP_OBJECT / DC_GROUP_BO (including revision)
    MEASURE_NAME / PARAMETER_NAME
    MEASURE_GROUP / PARAMETER_GROUP
    OPERATION_OBJECT / OPERATION_BO
    TIMES_PROCESSED
    Andrew.
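The comparison above can be partly automated. A minimal sketch, assuming a hypothetical XML layout where each field appears as an element tag (the real Shop Floor message structure may differ), that reports which of the checklist fields are absent from a message:

```python
# Hypothetical sketch: diff the tags present in a Shop Floor XML message
# against the fields DC520 is said to match on. Field names come from the
# checklist above; the XML layout here is an assumption for illustration.
import xml.etree.ElementTree as ET

REQUIRED_FIELDS = {
    "DC_GROUP_BO",
    "PARAMETER_NAME",
    "PARAMETER_GROUP",
    "OPERATION_BO",
    "TIMES_PROCESSED",
}

def missing_dc_fields(xml_text: str) -> set:
    """Return the checklist fields that never appear as an element tag."""
    root = ET.fromstring(xml_text)
    present = {elem.tag for elem in root.iter()}
    return REQUIRED_FIELDS - present

sample = """<DCMessage>
  <DC_GROUP_BO>DCGroupBO:SITE,GROUP,A</DC_GROUP_BO>
  <PARAMETER_NAME>TEMP</PARAMETER_NAME>
  <OPERATION_BO>OperationBO:SITE,OPER,A</OPERATION_BO>
</DCMessage>"""

print(missing_dc_fields(sample))  # the tags still to be added
```

Running the checker over a message that works from the POD and one saved via XML, then comparing the two results, narrows down which tag is the culprit.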

  • Automatic data collection generating multiple logs

    Hi,
    I am working with a PCI-6251 card connected to a DAQmx board, running the newest version of LabVIEW SignalExpress. I would like to create a function that records a signal using a trigger to start the recording, stops recording after a certain amount of time, and then automatically starts recording into a new log when triggered again. Under recording options I was able to find a start and a stop condition, but for whatever reason using a trigger as the start condition is not working for me. Any ideas why?
    Also, I have used the trigger function in the normal step setup and it works fine.
    Thank you 
    Peter

    Hi Peter,
    What kind of signal are you reading?  What exactly are the settings of your start condition?  Can you attach a screen shot?
    These links may help:
    Signal Express 2013 - Trigger
    Signal Express 2013 - Start Conditions Page
    Jeff Munn
    Applications Engineer
    National Instruments

  • Performance Report Manager Data Collection Issues

    I have an agent running and collecting data into a circular file, history.log.
    Data collection failed approximately 6 days ago without warning. I have tried to find the cause but have not had any success.
    If I open the Data Collection window in PRM, I see a yellow triangle beside the hostname:port.
    If you can provide any information as to what to look for, please reply.

    history.log may be corrupted. If you run it through ccat to flatten it, does it show any recent (i.e. newer than 6 days) entries?
    If ccat gives errors, then just move it out of the way (i.e. rename it to history.log.busted) and restart your agent. A fresh log will be created. Give it a couple of hours to make a couple of flushes of data to your SunMC server and see if PRM works.
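The sideline-and-restart step above can be sketched as a small script. This is only a rough illustration under stated assumptions: the path and the 6-day staleness threshold are hypothetical, and the real health check should still use ccat to flatten and inspect the entries rather than relying on file timestamps alone.

```python
# Rough sketch: if history.log has not been written to recently (a proxy
# for "collection stopped"), rename it aside so the agent creates a fresh
# log on restart. Path and threshold are assumptions for illustration.
import os
import time

STALE_AFTER_DAYS = 6

def sideline_if_stale(log_path: str, now: float = None) -> bool:
    """Rename log_path to <log_path>.busted if its last write is too old.

    Returns True if the file was moved aside, False otherwise."""
    now = time.time() if now is None else now
    try:
        age_days = (now - os.path.getmtime(log_path)) / 86400
    except OSError:
        return False  # nothing to do if the file is missing/unreadable
    if age_days > STALE_AFTER_DAYS:
        os.rename(log_path, log_path + ".busted")
        return True
    return False
```

After running something like this, the agent would still need to be restarted by hand (or by the site's service manager) for a fresh log to be created.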
    Regards,
    Mike
    [email protected]

  • LMS 4.2 Topology Data Collection Issue

    Hi there,
    I have an issue with the LMS 4.2 Topology Data Collection. After installation the Topology Data Collection was running normally, but since the first server reload the Topology Data Collection under Inventory > Dashboards > Device Status > Collection Summary is "frozen". It only shows the following content from the first run:
    Topology Data Collection    193    0    Not Available    Running    Schedule
    Is there any option to stop this process elsewhere? I cannot find anything under jobs in a running state. Clicking on Schedule only gives me the option to start data collection, but LMS always returns that the process is already running.
    Thanks for help.

    Hi,
    I have reset the ANI database using NMSROOT\bin\perl.exe NMSROOT\bin\dbRestoreOrig.pl dsn=ani dmprefix=ANI
    Everything is set to 0:
    Topology Data Collection    0    0    Not Available    Running    Schedule
    UT Major Acquisition    0 (0 End hosts, 0 IP Phones)    0    Not Available    Idle    Schedule
    VRF Collection    0    0    Not Available    Idle    Schedule
    But Topology Data Collection is still frozen in the "Running" state.

  • LMS 3.2 Data Collection Issue

    Hi ,
    I have LMS 3.2 and a license for adding 5000 devices. My data collection has been running continuously for the past 2 days. When I stop the daemon manager and restart it, the collection comes to an idle state, but after some time it goes back to the Running state.
    I am not getting the best-practices deviations and discrepancies reports since this issue started.
    I am attaching the ANIServer.properties, ani.log and ANIServer.log files.
    Please help.

    This seems to be bug CSCtd49439:
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?caller=pluginredirector&method=fetchBugDetails&bugId=CSCtd49439
    Also consider installing the patch for bug CSCtg20882:
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?caller=pluginredirector&method=fetchBugDetails&bugId=CSCtg20882
    If you have CM 5.2, you should consider upgrading to CM 5.2.1, which is available here:
    http://www.cisco.com/cisco/software/type.html?mdfid=282641773&flowid=5141&softwareid=280775108
    The patches for bugs CSCtd49439 and CSCtg20882 are available by contacting TAC.

  • BCS - Data collection issue

    Hi,
    I'm using BCS 4.0. I'm now in final testing and I have some questions regarding the data collection process using the load-from-data-stream method. I ran the task in the consolidation monitor for period 007.2007 and for all companies without any errors or warnings, but we have differences in the financial information for that period.
    I reviewed the content of my BCS cube (RSA1) and we don't have any data for those accounts; the only thing I found was that all documents were created on the same date.
    I deleted the request ID in RSA1 in my BCS cube and executed the task again in the consolidation monitor, but the result was the same.
    Looking at the log / source data, the data in the rows listed is not being taken from the InfoProvider.
    Any idea what the problem could be?
    Thanks
    Nayeli

    Hi Nayeli,
    I had to do this kind of job (reconciling data between the source basis cube and the totals cube) many times during final testing, together with an accountant.
    The only way to get a first clue about what is happening is to compare every particular amount in both cubes, comparing and trying to figure out any dependencies in the data.
    The difference might arise for ANY reason. Only you can analyze the data and decide what to do.
    AFAIU, you compared only reported data and did no currency translation or eliminations?
    Then I'm afraid you've made a very big mistake by deleting the request from the totals cube. You have broken the consistency between totals and document data. For a transactional cube the request stays yellow until the number of records in it reaches 50,000; then it is closed and becomes green. As you may understand, those 50,000 records may contain data for different posting periods, for example reported data for the previous closed period, while the documents still contain eliminations for that period. Hence the inconsistency.

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. Urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. Urgent.
    Will reward full points.
    Regards,
    Guru

    BW back end
    Some tips:
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields. This optimizes the tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
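    Tip 11's point about replacing single selects in a loop with buffered array operations is not specific to ABAP. The following is a minimal illustration in Python with an in-memory SQLite table (table name and data are made up for the demo); the ABAP equivalent would be a SELECT ... FOR ALL ENTRIES or a ranged SELECT into an internal table instead of SELECT SINGLE inside a LOOP.

```python
# Illustration of tip 11: fetching rows one-by-one issues one query per
# key, while a single set-based read returns the same data in one pass.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE material (matnr TEXT PRIMARY KEY, price REAL)")
conn.executemany(
    "INSERT INTO material VALUES (?, ?)",
    [("M-%03d" % i, float(i)) for i in range(100)],
)

keys = ["M-%03d" % i for i in range(0, 100, 10)]

# Slow pattern: one SELECT per key (the "single select in a loop").
one_by_one = {
    k: conn.execute("SELECT price FROM material WHERE matnr = ?", (k,)).fetchone()[0]
    for k in keys
}

# Better pattern: one array-style read, then look up in memory (an
# internal table / buffer in ABAP terms).
placeholders = ",".join("?" * len(keys))
buffered = dict(
    conn.execute(
        "SELECT matnr, price FROM material WHERE matnr IN (%s)" % placeholders, keys
    )
)

assert one_by_one == buffered  # same data, far fewer round trips
```

    The saving grows with the number of keys: the looped form pays the per-query overhead once per key, the buffered form pays it once in total.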
    Hope it Helps
    Chetan
    @CP..

  • WJA - Failed Alerts - Data Collection Inaccuracies

    I have been running Web Jetadmin v10.2.7491B on a 2003 server for about 3 years. There are 120 managed printers consisting of 9 different models.
    Issue #1
    Initially I had alerts configured on all printers that would send an email when a consumable dropped below 3%. This worked very reliably for the first year. Over time, an increasing number of printers stopped sending alerts, until eventually only a handful did. I never found the cause of this problem.
    Issue #2
    Since alerts were broken, I set up a daily data collection for all printers and a morning CSV report sent via email. I would then sort the data by remaining consumable percentage and replace those at 1% or below. Recently, 3 of the 9 models have stopped reporting accurate consumable percentages in the daily reports. This problem seems to correspond with a firmware update I pushed out to all printers. The WJA console DOES show accurate information for individual printers when they are selected.
    Steps taken to resolve #2
    Printers have been removed from all Report Data Collection groups and then added back; this did not resolve the issue. Printers were initially added individually to a Data Collection, but I have since implemented a Group Policy that automatically adds a printer to the Data Collection. Six printer models still show accurate percentages, but models 4345, 4730, and 5035 do not.
    Is there a database file I can delete that will force a Data Collection rebuild for all printers? Is it possible the new firmware for the 3 listed models has a bug that prevents the data from being updated?

    This forum is focused on consumer-level products. For this issue you may have better results posting in the Enterprise Print Servers, Network Storage and Web Jetadmin forum.
    Bob Headrick,  HP Expert
    I am not an employee of HP, I am a volunteer posting here on my own time.
    If your problem is solved please click the "Accept as Solution" button.
    If my answer was helpful please click "Thumbs Up" to say "Thank You".

  • SRM MDM: Workflow Unlaunch while performing the Automatic data transfer

    Hi,
    We are trying to import some data from R/3 4.6C by configuring the remote system as ERP and creating a port based on the XML schema in the SRM MDM Catalog. We have created a workflow to validate the accuracy of the pulled data in the Data Manager. Everything goes well up to this point: the data is transferred into the Data Manager automatically. The only issue is with the workflow.
    The workflow set in the Data Manager does not launch automatically, even though the record arrives correctly via automatic transfer. I set the workflow name accurately in the Import Manager while saving the map file for automatic import.
    Please share your input on why the workflow does not launch on automatic data transfer from the specified port.
    Thanks/Pawan

    Hi Pawan,
    please check in the Data Manager that you selected the right trigger actions for the workflow, such as Record Import, and that Autolaunch is also set correctly.
    Regards,
    Tamá

  • Too many BPM data collection jobs on backend system

    Hi all,
    We found about 40,000 data collection jobs running on our ECC6 system, which is far too many.
    We run about 12 solutions, all linked to the same backend ECC6 system. Most probably this is part of the problem. We plan to scale down to one solution rather than the country-based approach.
    But here we are now, and I have these questions:
    1. How can I relate a BPM_DATA_COLLECTION job on ECC6 back to a particular solution? The job log gives me a monitor ID, but I can't relate that back to a solution.
    2. If I deactivate a solution in the solution overview, does that immediately cancel the data collection for that solution?
    3. In the monitoring schedule on a business process step we sometimes have intervals defined as 5 minutes, sometimes 60. The strange thing is that the drop-down of that field does not always give us the same list of values. Even within a solution, in one step I have the choice of a long list of intervals, while in the next step of the same business process I can only choose between blank and 5 minutes.
    How is this defined?
    Thanks in advance,
    Rad.

    Hi,
    How did you manage to get rid of this issue? I am facing the same.
    Thanks,
    Manan

  • IDoc Monitoring: Data collection not getting completed

    Hi,
    I have set up IDoc monitoring in Solution Manager 7.1 SP08 for managed systems on ST-A/PI 01Q. Many IDocs are being generated in the managed system, but I am not getting any alerts for them.
    In the alert inbox I get the message that data collection has not yet taken place since the last activation, and the last data collection time is shown as 2009.
    Also, in the managed system the BPMon background job BPM_DATA_COLLECTION_2 gets cancelled with the runtime error TSV_TNEW_BLOCKS_NO_ROLL_MEMORY.
    What can be the issue?
    Regards
    Vishal

    Hi Vishal,
    Please check and implement the notes below to fix the first issue:
    1570282 - Advance Corr. BPMon ST-A/PI 01N
    1946940 - ST-A/PI 01Q SP2: Advance Correction BPMon (Infrastructure and EBO)
    1869678 - Advanced Corrections for BPmon/Analytics Delivered with 01Q SP1 (framework)
    Thanks
    Vikram

  • OIA-Finalize Data Collection Failed

    Hi,
    We are facing an issue with OIM in an OIA-OIM integration. OIA imports were running fine until we rebooted the OIM server while an OIA import was in progress. Since then we are unable to run any job to import the Resources and Entitlements. When we run one, it throws the error below:
    21:31:30,599 DEBUG [IamDataSyncMonitorImpl] OIMStatus FINALIZED
    21:31:30,599 DEBUG [DBIAMSolution] Current IamDataSyncMonitor Status: DATA COLLECTION FINALIZED
    21:31:30,600 INFO [DBIAMSolution] No Active DataLoad. Finalizing Previous Collection
    21:31:30,600 INFO [OIMDataProviderImpl] Start Finalize Data Collection ...
    21:31:30,849 DEBUG [DefaultIAMListener] Processing Event:com.vaau.rbacx.iam.IAMEvent[source=com.vaau.rbacx.iam.db.DBIAMSolution@1000d34c], exception:null
    21:31:30,849 DEBUG [IbatisObjectStorageManagerImpl] Getting data with: {id=98}
    21:31:30,850 DEBUG [DefaultIAMListener] Updating ImportRun 98
    21:31:47,126 ERROR [OIMDataProviderImpl] Unable to finalize previous Data Collection
    Thor.API.Exceptions.tcAPIException
    at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:234)
    at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:348)
    at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:259)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl_1035_WLStub.finalizeDataCollectionSessionx(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at weblogic.ejb.container.internal.RemoteBusinessIntfProxy.invoke(RemoteBusinessIntfProxy.java:85)
    at $Proxy206.finalizeDataCollectionSessionx(Unknown Source)
    at Thor.API.Operations.DataCollectionOperationsIntfDelegate.finalizeDataCollectionSession(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at Thor.API.Base.SecurityInvocationHandler$1.run(SecurityInvocationHandler.java:68)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.security.Security.runAs(Security.java:41)
    at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(weblogicLoginSession.java:52)
    at Thor.API.Base.SecurityInvocationHandler.invoke(SecurityInvocationHandler.java:79)
    at $Proxy207.finalizeDataCollectionSession(Unknown Source)
    at com.vaau.rbacx.iam.db.support.oracle.OIMDataProviderImpl.finalizeDataCollectionSession(OIMDataProviderImpl.java:621)
    at com.vaau.rbacx.iam.db.support.IamDataSyncMonitorImpl.finalizeCurrentCollection(IamDataSyncMonitorImpl.java:132)
    at com.vaau.rbacx.iam.db.DBIAMSolution.loadData(DBIAMSolution.java:199)
    at com.vaau.rbacx.iam.service.impl.RbacxIAMServiceImpl.dataLoad(RbacxIAMServiceImpl.java:510)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy132.dataLoad(Unknown Source)
    at com.vaau.rbacx.scheduling.executor.iam.DbIamJobExecutor.execute(DbIamJobExecutor.java:83)
    at com.vaau.rbacx.scheduling.manager.providers.quartz.jobs.AbstractJob.execute(AbstractJob.java:72)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
    Caused by: Thor.API.Exceptions.tcAPIException
    at com.thortech.xl.ejb.beansimpl.DataCollectionOperationsBean.finalizeDataCollectionSession(DataCollectionOperationsBean.java:326)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB.finalizeDataCollectionSessionx(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.jee.spi.MethodInvocationVisitorImpl.visit(MethodInvocationVisitorImpl.java:37)
    at weblogic.ejb.container.injection.EnvironmentInterceptorCallbackImpl.callback(EnvironmentInterceptorCallbackImpl.java:54)
    at com.bea.core.repackaged.springframework.jee.spi.EnvironmentInterceptor.invoke(EnvironmentInterceptor.java:50)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy389.finalizeDataCollectionSessionx(Unknown Source)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl.__WL_invoke(Unknown Source)
    at weblogic.ejb.container.internal.SessionRemoteMethodInvoker.invoke(SessionRemoteMethodInvoker.java:40)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl.finalizeDataCollectionSessionx(Unknown Source)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:668)
    at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:523)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
    at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:119)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    21:31:47,127 ERROR [IamDataSyncMonitorImpl] Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    21:31:47,127 ERROR [DBIAMSolution] Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    com.vaau.rbacx.iam.RbacxIAMFailToFinalizeDataCollectionException: Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    at com.vaau.rbacx.iam.db.support.oracle.OIMDataProviderImpl.finalizeDataCollectionSession(OIMDataProviderImpl.java:628)
    at com.vaau.rbacx.iam.db.support.IamDataSyncMonitorImpl.finalizeCurrentCollection(IamDataSyncMonitorImpl.java:132)
    at com.vaau.rbacx.iam.db.DBIAMSolution.loadData(DBIAMSolution.java:201)
    at com.vaau.rbacx.iam.service.impl.RbacxIAMServiceImpl.dataLoad(RbacxIAMServiceImpl.java:510)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy132.dataLoad(Unknown Source)
    at com.vaau.rbacx.scheduling.executor.iam.DbIamJobExecutor.execute(DbIamJobExecutor.java:83)
    at com.vaau.rbacx.scheduling.manager.providers.quartz.jobs.AbstractJob.execute(AbstractJob.java:72)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:203)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
    Caused by: Thor.API.Exceptions.tcAPIException
    Please let me know if anyone has faced this issue before.
    Thanks,
    Vishnu

    at java.lang.reflect.Method.invoke(Method.java:597)
    at Thor.API.Base.SecurityInvocationHandler$1.run(SecurityInvocationHandler.java:68)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.security.Security.runAs(Security.java:41)
    at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(weblogicLoginSession.java:52)
    at Thor.API.Base.SecurityInvocationHandler.invoke(SecurityInvocationHandler.java:79)
    at $Proxy207.finalizeDataCollectionSession(Unknown Source)
    at com.vaau.rbacx.iam.db.support.oracle.OIMDataProviderImpl.finalizeDataCollectionSession(OIMDataProviderImpl.java:621)
    at com.vaau.rbacx.iam.db.support.IamDataSyncMonitorImpl.finalizeCurrentCollection(IamDataSyncMonitorImpl.java:132)
    at com.vaau.rbacx.iam.db.DBIAMSolution.loadData(DBIAMSolution.java:199)
    at com.vaau.rbacx.iam.service.impl.RbacxIAMServiceImpl.dataLoad(RbacxIAMServiceImpl.java:510)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy132.dataLoad(Unknown Source)
    at com.vaau.rbacx.scheduling.executor.iam.DbIamJobExecutor.execute(DbIamJobExecutor.java:83)
    at com.vaau.rbacx.scheduling.manager.providers.quartz.jobs.AbstractJob.execute(AbstractJob.java:72)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
    Caused by: Thor.API.Exceptions.tcAPIException
    at com.thortech.xl.ejb.beansimpl.DataCollectionOperationsBean.finalizeDataCollectionSession(DataCollectionOperationsBean.java:326)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB.finalizeDataCollectionSessionx(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.jee.spi.MethodInvocationVisitorImpl.visit(MethodInvocationVisitorImpl.java:37)
    at weblogic.ejb.container.injection.EnvironmentInterceptorCallbackImpl.callback(EnvironmentInterceptorCallbackImpl.java:54)
    at com.bea.core.repackaged.springframework.jee.spi.EnvironmentInterceptor.invoke(EnvironmentInterceptor.java:50)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy389.finalizeDataCollectionSessionx(Unknown Source)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl.__WL_invoke(Unknown Source)
    at weblogic.ejb.container.internal.SessionRemoteMethodInvoker.invoke(SessionRemoteMethodInvoker.java:40)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl.finalizeDataCollectionSessionx(Unknown Source)
    at Thor.API.Operations.DataCollectionOperationsIntfEJB_jpx61z_DataCollectionOperationsIntfRemoteImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:668)
    at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:523)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
    at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:119)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    21:31:47,127 ERROR [IamDataSyncMonitorImpl] Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    21:31:47,127 ERROR [DBIAMSolution] Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    com.vaau.rbacx.iam.RbacxIAMFailToFinalizeDataCollectionException: Unable to finalize previous Data Collection; nested exception is Thor.API.Exceptions.tcAPIException
    at com.vaau.rbacx.iam.db.support.oracle.OIMDataProviderImpl.finalizeDataCollectionSession(OIMDataProviderImpl.java:628)
    at com.vaau.rbacx.iam.db.support.IamDataSyncMonitorImpl.finalizeCurrentCollection(IamDataSyncMonitorImpl.java:132)
    at com.vaau.rbacx.iam.db.DBIAMSolution.loadData(DBIAMSolution.java:201)
    at com.vaau.rbacx.iam.service.impl.RbacxIAMServiceImpl.dataLoad(RbacxIAMServiceImpl.java:510)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy132.dataLoad(Unknown Source)
    at com.vaau.rbacx.scheduling.executor.iam.DbIamJobExecutor.execute(DbIamJobExecutor.java:83)
    at com.vaau.rbacx.scheduling.manager.providers.quartz.jobs.AbstractJob.execute(AbstractJob.java:72)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:203)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
    Caused by: Thor.API.Exceptions.tcAPIException
    Please let me know if anyone has faced this issue before.
Thanks,
    Vishnu

  • Oracle Grid Data Collection Stopped

    Hi Everyone,
    My Environment details:
    Oracle Grid 10g 10.2.0.5.0
    Oracle Database 10g 10.2.0.4.0
    OS -Redhat Linux 5.4 x86_64
Today I tried to generate an AWR report from the Grid console for one of my databases, but there are no recent snapshots available and data collection between the agent and the database has stopped. Please find the error details from emagent.trc below.
When I checked, the agent service seems to be running fine, but I don't know why data collection is not happening.
    Please advice me how to resolve this issue.
    Thanks & Regards,
    Shan
    SQL = "/* OracleOEM */
    DECLARE
    instance_number NUMBER;
    task_id NUMBER;"...
    LOGIN = dbsnmp/<PW>@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=netwon-vip.teleglobe.com)(PORT=1522))(CONNECT_DATA=(SID=newton)))
    2011-12-30 16:41:01,541 Thread-3958328240 ERROR fetchlets.sql: ORA-06550: line 11, column 8:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 10, column 1:
    PL/SQL: SQL Statement ignored
    ORA-06550: line 27, column 41:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 26, column 2:
    PL/SQL: SQL Statement ignored
    2011-12-30 16:41:01,541 Thread-3958328240 ERROR engine: [rac_database,regiprd,latest_hdm_db_metric_helper] : nmeegd_GetMetricData failed : ORA-06550: line 11, column 8:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 10, column 1:
    PL/SQL: SQL Statement ignored
    ORA-06550: line 27, column 41:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 26, column 2:
    PL/SQL: SQL Statement ignored
    2011-12-30 16:41:01,541 Thread-3958328240 WARN collector: <nmecmc.c> Error exit. Error message: ORA-06550: line 11, column 8:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 10, column 1:
    PL/SQL: SQL Statement ignored
    ORA-06550: line 27, column 41:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 26, column 2:
    PL/SQL: SQL Statement ignored

    Log onto the database using SQL*Plus, run the following statements, and post the output.
    SELECT owner, object_type, COUNT(*)
    FROM dba_objects
    WHERE status = 'INVALID'
    GROUP BY owner, object_type;
    SELECT comp_id, version, status
    FROM dba_registry;
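If those queries turn up invalid objects in the DBSNMP schema (an assumption — your output may differ, so post it first), a common follow-up is to recompile the invalid objects before restarting the agent. A minimal sketch:

```sql
-- Recompile only the invalid objects in the DBSNMP schema
-- (adjust the schema name if the invalid objects belong elsewhere):
EXEC DBMS_UTILITY.compile_schema(schema => 'DBSNMP', compile_all => FALSE);

-- Or recompile all invalid objects database-wide (run as SYSDBA):
-- @?/rdbms/admin/utlrp.sql

-- Verify nothing is still invalid:
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID';
```

If objects stay invalid after recompiling, the compilation errors in DBA_ERRORS will usually point at the actual missing table or view behind the ORA-00942.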

  • Problem Topology Data collection LMS 4.1

    Hi there,
I have an issue with the LMS 4.1 Topology Data Collection. Under Inventory > Dashboards > Device Status > Collection Summary, Topology Data Collection shows only 2 devices succeeded and 0 devices failed, but I have more than 500 devices in the Managed state. I have re-run Inventory Collection, but the numbers don't change.
Inventory Collection        534   13   02 May 2012, 11:07 CEST   Idle      Schedule
Config Archive              507   14   02 May 2012, 09:51 CEST   Running   Schedule
Topology Data Collection      2    0   02 May 2012, 11:11 CEST   Idle      Schedule
I have read other discussions, but none of them describe the same or a similar problem. The only proposed solution I have found is to reset the ANI database, but I'm not sure that it's the best option.
    Any ideas?
Thanks for the help

I can confirm that, Martin. Good spotting! I just noticed it on a server I installed yesterday. These two lines were added to the hosts file:
    127.0.0.1       localhost
    127.0.0.1   LMS4
A bit silly in both cases, since these entries were already in the hosts file, added by the Windows install I think:
    127.0.0.1       localhost
    ::1             localhost
    Cheers,
    Michel
