DNL_CUST_PROD1 is in Running status for Delta Mat Grp - SRM 7.0

Hi Experts,
We are configuring SRM 7.0 with ECC 6.0 (EhP4) in the classic scenario.
We have already completed the initial upload of all material types, material groups, and material and service master data successfully. Now one material group has been added in ECC, and I am trying to replicate this delta object in R3AS, but DNL_CUST_PROD1 has been in Running status for a long time (in R3AM1), with ECC as the source site and SRM as the destination.
In ECC, no queues are stuck in either SMQ1 or SMQ2.
In SMWP in SRM, a red message is shown in the section "R3AI/R* Start Queues for Loads from OLTP": Blocked queues: Client 212, blocked outbound queue: client 212, queue name DNL_CUST_PROD1, status SYSFAIL, destination EDACLNT112.
In SRM, nothing is stuck in the inbound queue (SMQ2), but the queue R3AI_DNL_CUST_PROD1 is stuck in the outbound queue (SMQ1), showing the ECC system as destination, with status SYSFAIL. I tried to unlock and activate this queue, but it did not help.
1. We are importing the material group from ECC into SRM, so in the SRM system I would expect the queue for the material group (and the other objects) to be inbound (i.e. SMQ2). How and why did the queue get stuck in the outbound queue (SMQ1)?
2. Can anyone please suggest what the problem could be and how to resolve it?
Thanks in advance.
NAP

Thanks Nikhil. I verified all steps as per Note 720819 and everything seems okay. Even the material master is replicating into SRM successfully.
The issue is that we are not even able to see an error log. From the SYSFAIL status of the stuck DNL_CUST_PROD1 queue in SMQ2 we cannot work out the cause. Are there any other places we should look?
I am also not clear on the processing of the queues, i.e. how these imported objects end up in the SMQ2 queue. It would be much appreciated if you could explain in detail how this queuing works in SRM when importing material types, material groups, and material master data.
Thanks in advance.
NAP

Similar Messages

  • Options for delta deletion in SRM-MDM

    Hi! I understand that MDM cannot do delta deletion unless we do a full refresh. However, we usually get delta files that have a column/field telling us whether each item is an Add, Change, or Delete. We were able to use that information to delete the appropriate items in the old catalog management system (Requisite). However, with SRM-MDM we cannot do that.
    One option I thought of was to create a custom field in the repository and capture this information so that the administrator can delete the items manually. However, I wonder whether there is a way to automate it? If we leave the items in the system without deleting them, they can potentially be used by the users. Another option is to set a timeframe and create a named search to limit the catalog items by timeframe, but I am wondering whether there is an easier way to automatically delete the items with the status 'Delete'. Has anyone tried to do the same before? What options do I have in this case? I appreciate any advice on the above.
    Cheers!
    SF

    Hi SF,
    I can relate to your pain. There is no automatic deletion function delivered by SAP in MDM. However, you can build a custom Java API or ABAP API program to achieve auto-deletion (if your customer is willing to invest in some development in this area); a rough sketch of the delta-file handling is included after this reply.
    Some other tips:
    a. You don't have to create a custom field to indicate that a record is to be deleted. Simply add a new Item Status (standard lookup table), e.g. "Delete", and then populate records in your import files with this Item Status as needed.
    b. For your end users not to see these "Delete" records in the WebDynpro Search UI, build a named search that excludes the Item Status "Delete" value.
    My two cents...
    Cheers,
    Serguei
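
    A minimal sketch, assuming a hypothetical CSV delta file with an Action column (Add/Change/Delete), of collecting the keys to delete as described above; the actual SRM-MDM deletion call depends on the custom Java/ABAP program mentioned and is left as a placeholder:

    # Sketch only: collect the keys flagged "Delete" in a delta file so they can be
    # handed to whatever deletion mechanism is available (custom Java/ABAP API
    # program, or manual cleanup). File name and column names are hypothetical.
    import csv

    def keys_to_delete(delta_file_path, key_column="Item Key", action_column="Action"):
        """Return the catalog item keys whose delta action is 'Delete'."""
        keys = []
        with open(delta_file_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if row.get(action_column, "").strip().lower() == "delete":
                    keys.append(row[key_column])
        return keys

    if __name__ == "__main__":
        for key in keys_to_delete("catalog_delta.csv"):
            # Placeholder: call the custom deletion program / API here, or log the
            # key for the catalog administrator to remove manually.
            print("to delete:", key)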

  • What is the difference between additive delta and new status for changed record

    Hi Experts
    Can anyone explain the difference between additive delta and new status for changed record, with an example?
    If anyone has a document, please post it; I would be thankful.
    thanks
    Ahmed
    Please search the forum before posting a thread
    Edited by: Pravender on Feb 12, 2012 1:54 PM

    Hi
    Additive delta --- we get only the changed quantity.
    Say you have a sales order and quantity, e.g. record 1111  30, which is loaded to the cube (BW).
    Now the quantity of the same record changes from 30 to 40. Because we have additive delta, we get a new record 1111  10, i.e. only the difference.
    New status for changed record: this is the same as the after-image delta type in standard SAP DataSources. For every change to a record you get a new record carrying the full new value (1111  40 in the example above). See the short illustration after this reply.
    If you have a number that is generated by the system for every new/changed record, you can use this.
    You can use this option when the delta option is set to "numeric pointer".
    Regards,
    Venkatesh
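
    For illustration only (this is not SAP code), a small Python sketch of how a target would apply the two delta types to the example above:

    # Illustrative sketch: how a target store would apply the two delta types.
    def apply_additive_delta(store, delta_records):
        """Additive delta: each record carries only the difference, so it is added."""
        for order, qty_diff in delta_records:
            store[order] = store.get(order, 0) + qty_diff
        return store

    def apply_after_image(store, delta_records):
        """New status for changed record (after image): each record carries the
        full new value, so it overwrites the previous one."""
        for order, new_qty in delta_records:
            store[order] = new_qty
        return store

    # Initial load: order 1111 with quantity 30; quantity then changes to 40.
    print(apply_additive_delta({"1111": 30}, [("1111", 10)]))  # {'1111': 40}
    print(apply_after_image({"1111": 30}, [("1111", 40)]))     # {'1111': 40}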

  • Tracking completion status for long running DML operations

    Does anybody know:
    Is there any way to track the completion status of long-running DML operations (for example, how many rows have been inserted)?
    For example, if I execute an INSERT statement that runs for several hours, it is very important to be able to estimate the total time for the operation.
    Thanks in advance

    I'm working with Oracle8 at present, and unfortunately this solution (V$SESSION_LONGOPS) cannot help me.
    On Oracle8 it works only with some restrictions (a query sketch follows the list):
    - You must be using the cost-based optimizer
    - Set the TIMED_STATISTICS or SQL_TRACE parameter to TRUE
    - Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package.
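
    Where V$SESSION_LONGOPS is usable, a minimal polling sketch (Python with cx_Oracle; the connection details are placeholders, and only operations the database reports as long-running appear in the view):

    # Sketch: poll V$SESSION_LONGOPS to estimate progress of long-running operations.
    import cx_Oracle

    conn = cx_Oracle.connect("monitor_user", "password", "dbhost/ORCL")  # placeholders
    cur = conn.cursor()
    cur.execute("""
        SELECT sid, opname, target,
               ROUND(sofar / totalwork * 100, 1) AS pct_done,
               time_remaining, elapsed_seconds
          FROM v$session_longops
         WHERE totalwork > 0
           AND sofar < totalwork
    """)
    for sid, opname, target, pct, remaining, elapsed in cur:
        print(f"SID {sid}: {opname} on {target}: {pct}% done, "
              f"~{remaining}s remaining (elapsed {elapsed}s)")
    cur.close()
    conn.close()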

  • Error in Process Chain for Delta pack

    Hi All,
    I am facing an error in a process chain for a delta package.
    Error:
         Last delta upload not yet completed. Cancel
    Please send me the solution as soon as possible.
    Thanks

    Hi,
    This message means there may be a load triggered by the same InfoPackage that is still running or still has a yellow status. Go to the monitor screen from this InfoPackage itself and it will show the particular request (it will be the latest one). See if that load is active and running. If it is active, try to find out whether the load is progressing; if it is, you can let it continue. If it is not active, you need to force the status to red and delete the request from the target. Once this is done you can retrigger the load. If it is a data mart load, you may need to reset the data mart status before triggering the load.
    Refer
    Last delta update is not yet completed
    Last delta update not yet completed-PC Chain Error
    Last Delta Not Yet Completed
    This is the same issue as yours.
    Thanks,
    JituK

  • Extracting User Status for Sales Orders

    I am trying to extract user status transactional data for sales orders (at both header and item level). I have been following SAP note 300300 to configure the status extractors and map the BW objects to the sales order status DataSources (2LIS_11_VASTH & 2LIS_11_VASTI). I think I have set up both R/3 and BW correctly according to note 300300 and the IMG documentation, but I am not getting any values in the InfoSource for the new user status InfoObjects when I run the extraction.
    Has anyone successfully done this?  If so, do you have any documentation or pointers on how I can get this to work?  Or, am I better off creating a generic extractor?  Thanks for your consideration.
    Sincerely, Hashi Chakravarty

    Hello Ravi,
    Well yes, I can derive the status for that order in the update rules if I show the status as an attribute of 0PM_ORDER, but then basically only the last status is shown.
    My problem:
    a. I report on day 1; the status of the order is REL.
    b. On day 2 I load the master data for the orders; the new status is TECO.
    c. On day 3 I want to report on the past and see the status of day 1; that is not possible in this scenario because I will see the current status (TECO).
    So that's why I want to have the statuses in the cube...
    The 2LIS_17_I3HDR extractor is only sensitive to the CRT and TECO statuses, but not to the others (like REL / BLC and so on... :_(( )
    I could read the statuses from the JEST and TJ02T tables (a read sketch follows this reply)... but still, if a document status is changed, no delta record is written to the queue...
    Thanks,
    Tudor
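
    For reference, a minimal sketch of reading the active statuses of one object from JEST, with their texts from TJ02T, via RFC_READ_TABLE (Python with pyrfc; connection parameters and the object number are placeholders, and this does not address the missing-delta problem):

    # Sketch: read active statuses for one object from JEST and their texts from TJ02T.
    from pyrfc import Connection

    conn = Connection(ashost="r3host", sysnr="00", client="100",
                      user="RFC_USER", passwd="secret")  # placeholders

    objnr = "OR000012345678"  # hypothetical order object number (JEST-OBJNR)

    jest = conn.call("RFC_READ_TABLE", QUERY_TABLE="JEST", DELIMITER="|",
                     FIELDS=[{"FIELDNAME": "STAT"}, {"FIELDNAME": "INACT"}],
                     OPTIONS=[{"TEXT": f"OBJNR = '{objnr}'"}])

    # Keep only active statuses (INACT is blank).
    active = [row["WA"].split("|")[0].strip()
              for row in jest["DATA"]
              if row["WA"].split("|")[1].strip() == ""]

    for stat in active:
        tj02t = conn.call("RFC_READ_TABLE", QUERY_TABLE="TJ02T", DELIMITER="|",
                          FIELDS=[{"FIELDNAME": "TXT04"}],
                          OPTIONS=[{"TEXT": f"ISTAT = '{stat}' AND SPRAS = 'E'"}])
        texts = [row["WA"].strip() for row in tj02t["DATA"]]
        # System statuses (I*) have texts in TJ02T; user statuses (E*) live in TJ30T.
        print(stat, texts[0] if texts else "(user status - see TJ30T)")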

  • Missing some master data for delta load (it's very urgent, please)

    Hi,
    I am working on master data delta loads. The problem is that whenever there are changes (SO) in R/3 records for Address Text and City District Name, they are not getting loaded into BW, although some delta records do come into BW every day.
    The data comes through BW staging into BW. Please help: what could be the reason, where can I find the detailed information, and where is it going missing? Please see the example below.
    Address Number / Addr Ln 1 Txt / City Dstrct Nm
    9025750333 / # / #
    Please help, this is very urgent.

    Hi Sumanth,
    Check the delta queue and whether the V3 job is running correctly.
    Have a look at OSS note 728687
    and also see the following thread
    Deltas are not available in Delta que
    Delta Queues are not cleared in R/3
    No data in RSA7 for 2lis_03_bf : HELP
    Also check the data; it may be in modified status, not active.
    regards,
    supriya

  • URGENT! Please help: DAC full load always in 'Running' status at a particular task

    Hi Friends,
    I started a full load yesterday. There are 257 tasks in total. The load went fine without issues until the 248th task, but while executing the 249th task (Load into Activity Fact) it stays in 'Running' status and does not complete even after running for 2 hours. I checked the Informatica Workflow Monitor and found that the workflow is in 'running' state and is not completing. When I right-clicked the session and selected run properties, I could see that 0 rows were inserted into the target table. So I tried to stop the workflow manually; even after that the task stayed in 'Stopping' status and did not stop. Then I manually aborted the workflow.
    The session log file is below. Could you please check and let me know?
    Regards,
    Vijay
    Edited by: vijayobi on Jul 22, 2011 4:26 AM

    Hi Friends,
    We executed a full load again on Saturday, 23rd July 2011. This time we allowed the task 'Load into Activity Fact_CUSTOM' to execute without stopping it manually as we did in the previous data load. It executed for 3 hours and 45 minutes and then failed with ORA-01652 (unable to extend temp segment by string in tablespace string). This task executed successfully in our dev environment. Below is what we found in the session log file (a read-only check of the TEMP tablespace is sketched after this log); please help us resolve this issue. Please revert as soon as possible, as we have this problem in our prod environment.
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : RR_4035 : SQL Error [
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT distinct LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, ROW_WID, GEO_WID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT distinct LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID, DATASOURCE_NUM_ID, EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, ROW_WID, GEO_WID
    Oracle Fatal Error].
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : BLKR_16004 : ERROR: Prepare failed.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8333 : Rolling back all the targets due to fatal session error.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_PARTY_D_With_Geo_Wid], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8325 : Final rollback executed for the target [W_ACTIVITY_F] at end of load
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | MANAGER) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : PETL_24007 : Received request to stop session run. Attempting to stop worker threads.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8035 : Load complete time: Sat Jul 23 14:56:07 2011
    Thanks in advance.
    Vinay
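
    The ORA-01652 above means a temporary (sort) segment for the large DISTINCT/ORDER BY lookup could not grow because the TEMP tablespace was full. A read-only sketch for checking TEMP capacity and autoextension (Python with cx_Oracle; connection details are placeholders):

    # Sketch: check how full the TEMP tablespace is and whether its tempfiles autoextend.
    import cx_Oracle

    conn = cx_Oracle.connect("monitor_user", "password", "dbhost/ORCL")  # placeholders
    cur = conn.cursor()

    cur.execute("""
        SELECT tablespace_name,
               ROUND(SUM(bytes_used) / 1024 / 1024) AS used_mb,
               ROUND(SUM(bytes_free) / 1024 / 1024) AS free_mb
          FROM v$temp_space_header
         GROUP BY tablespace_name
    """)
    for ts, used_mb, free_mb in cur:
        print(f"{ts}: {used_mb} MB used, {free_mb} MB free")

    cur.execute("""
        SELECT file_name, ROUND(bytes / 1024 / 1024) AS size_mb,
               autoextensible, ROUND(maxbytes / 1024 / 1024) AS max_mb
          FROM dba_temp_files
    """)
    for file_name, size_mb, autoext, max_mb in cur:
        print(f"{file_name}: {size_mb} MB, autoextend={autoext}, max {max_mb} MB")

    cur.close()
    conn.close()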

  • I have an early 2011 MacBook Pro which has been running slow for a while. After looking at responses to similar problems I have downloaded and run EtreCheck and will post the output. Please can someone help me with what it all means. Thanks in advance

    I have an early 2011 MacBook Pro which has been running slow for a while. After looking at responses to similar problems I have downloaded and run EtreCheck. Please can someone help me with what it all means.
    Thanks in advance.
    EtreCheck version: 1.9.15 (52)
    Report generated 19 September 2014 08:07:14 GMT+8
    Hardware Information: ?
      MacBook Pro (13-inch, Early 2011) (Verified)
      MacBook Pro - model: MacBookPro8,1
      1 2.3 GHz Intel Core i5 CPU: 2 cores
      4 GB RAM
    Video Information: ?
      Intel HD Graphics 3000 - VRAM: 384 MB
      Color LCD 1280 x 800
    System Software: ?
      OS X 10.9.4 (13E28) - Uptime: 0 days 0:4:29
    Disk Information: ?
      Hitachi HTS545032B9A302 disk0 : (320.07 GB)
      S.M.A.R.T. Status: Verified
      EFI (disk0s1) <not mounted>: 209.7 MB
      Macintosh HD (disk0s2) / [Startup]: 319.21 GB (147 GB free)
      Recovery HD (disk0s3) <not mounted>: 650 MB
      MATSHITADVD-R   UJ-898 
    USB Information: ?
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple Inc. BRCM2070 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. Apple Internal Keyboard / Trackpad
      Apple Computer, Inc. IR Receiver
    Thunderbolt Information: ?
      Apple Inc. thunderbolt_bus
    Gatekeeper: ?
      Mac App Store and identified developers
    Kernel Extensions: ?
      [not loaded] com.seagate.driver.PowSecDriverCore (5.2.4 - SDK 10.4) Support
      [not loaded] com.seagate.driver.PowSecLeafDriver_10_4 (5.2.4 - SDK 10.4) Support
      [not loaded] com.seagate.driver.PowSecLeafDriver_10_5 (5.2.4 - SDK 10.5) Support
      [not loaded] com.seagate.driver.SeagateDriveIcons (5.2.4 - SDK 10.4) Support
      [loaded] com.sophos.kext.sav (9.1.55 - SDK 10.7) Support
      [loaded] com.sophos.nke.swi (9.1.50 - SDK 10.8) Support
    Launch Daemons: ?
      [loaded] com.adobe.fpsaud.plist Support
      [loaded] com.microsoft.office.licensing.helper.plist Support
      [running] com.sophos.autoupdate.plist Support
      [running] com.sophos.configuration.plist Support
      [running] com.sophos.intercheck.plist Support
      [running] com.sophos.notification.plist Support
      [running] com.sophos.scan.plist Support
      [running] com.sophos.sxld.plist Support
      [running] com.sophos.webd.plist Support
      [running] com.trusteer.rooks.rooksd.plist Support
    Launch Agents: ?
      [loaded] com.divx.dms.agent.plist Support
      [loaded] com.divx.update.agent.plist Support
      [running] com.sophos.uiserver.plist Support
      [running] com.trusteer.rapport.rapportd.plist Support
    User Launch Agents: ?
      [loaded] com.adobe.ARM.[...].plist Support
      [running] com.amazon.music.plist Support
      [loaded] com.google.keystone.agent.plist Support
      [not loaded] jp.co.canon.Inkjet_Extended_Survey_Agent.plist Support
    User Login Items: ?
      iTunesHelper
      TomTomHOMERunner
      AdobeResourceSynchronizer
      Dropbox
    Internet Plug-ins: ?
      FlashPlayer-10.6: Version: 15.0.0.152 - SDK 10.6 Support
      DivX Web Player: Version: 3.2.1.977 - SDK 10.6 Support
      AdobePDFViewerNPAPI: Version: 11.0.09 - SDK 10.6 Support
      AdobePDFViewer: Version: 11.0.09 - SDK 10.6 Support
      Flash Player: Version: 15.0.0.152 - SDK 10.6 Support
      EPPEX Plugin: Version: 10.0 Support
      Default Browser: Version: 537 - SDK 10.9
      OVSHelper: Version: 1.1 Support
      QuickTime Plugin: Version: 7.7.3
      SharePointBrowserPlugin: Version: 14.4.4 - SDK 10.6 Support
      iPhotoPhotocast: Version: 7.0 - SDK 10.7
    Safari Extensions: ?
      Ultimate
    Audio Plug-ins: ?
      BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
      AirPlay: Version: 2.0 - SDK 10.9
      AppleAVBAudio: Version: 203.2 - SDK 10.9
      iSightAudio: Version: 7.7.3 - SDK 10.9
    iTunes Plug-ins: ?
      Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
    3rd Party Preference Panes: ?
      Flash Player  Support
      Perian  Support
      Trusteer Endpoint Protection  Support
    Time Machine: ?
      Skip System Files: NO
      Auto backup: YES
      Volumes being backed up:
      Macintosh HD: Disk size: 297.29 GB Disk used: 160.38 GB
      Destinations:
      Data [Network] (Last used)
      Total size: 2 TB
      Total number of backups: 99
      Oldest backup: 2012-04-20 17:05:32 +0000
      Last backup: 2014-09-18 23:49:25 +0000
      Size of backup disk: Excellent
      Backup size 2 TB > (Disk size 297.29 GB X 3)
      Time Machine details may not be accurate.
      All volumes being backed up may not be listed.
    Top Processes by CPU: ?
          6% InterCheck
          5% iCalExternalSync
          3% WindowServer
          2% CalendarAgent
          2% SystemUIServer
    Top Processes by Memory: ?
      152 MB SophosScanD
      147 MB InterCheck
      106 MB SophosAntiVirus
      66 MB Dropbox
      57 MB com.apple.iTunesLibraryService
    Virtual Memory Information: ?
      161 MB Free RAM
      1.55 GB Active RAM
      1.41 GB Inactive RAM
      902 MB Wired RAM
      611 MB Page-ins
      0 B Page-outs

    Uninstall Trusteer software
    http://www.trusteer.com/support/uninstalling-rapport-mac-os-x
    Remove Sophos
    https://discussions.apple.com/message/21069437#21069437

  • T code to check the job run status

    Hi Gurus,
    I am filling the setup tables for inventory, but I did not run the job in the background. I clicked the Execute button, but then I suddenly got disconnected from the network and from my server.
    Now I have to check the job run status to see whether it is still running or not.
    Please provide your suggestions.
    Thanks & Regards,
    Saketh

    Hi,
    If you ran the job in the background, you can see it in SM37; use the proper time, date, and user ID to find your job.
    If you ran it in the foreground, you won't see a job. If your job completed, you can see the data in SE11 using the setup table name, or in transaction NPRT using the name of the run.
    If you can't find your job, just delete your setup tables and fill them again.
    No issues.
    Thanks

  • Jobs with running status in OEM although they have already finished

    Hi
    We have rebooted our server, and all the jobs that are supposed to run scripts on the server have now been showing a running state for more than 16 hours. Checking the script logs, they were executed successfully and finished. When we try to delete the running jobs and schedule new ones, they refuse to stop (although they are not actually running). What can we do to clear this running status and let OEM schedule new ones, or how can we resolve this situation?
    We are running Enterprise Manager 10g.
    Swaid

    Hi Jozsef,
    I've tried to follow the instructions in that Metalink note, and the job seems to be deleted from the mgmt_job table, as are the executions for that job in mgmt_job_execution; but everything is still hanging, and when I check the EM page it still shows the job in running state. How can I clean it up, and why is it hanging in the first place?
    I've looked into the agent trace; it has not been updated since the problem started. Here is a bit of the last messages we have in the trace:
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = Host, value = snmsmaster:3938
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = Connection, value = Keep-Alive, TE
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = TE, value = trailers, deflate, gzip, compress
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = User-Agent, value = RPT-HTTPClient/0.3-3
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = Accept-Encoding, value = gzip, x-gzip, compress, x-compress
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = Content-type, value = application/octet-stream
    2009-04-17 12:05:09 Thread-2413 ERROR http: secondary header = Content-length, value = 593
    2009-04-17 12:05:09 Thread-2413 ERROR http: --- Error context dump end for incoming request ---
    The emoms trace is being updated with some repeated error messages that look like the ones below:
    2009-04-18 15:56:57,114 [HealthMonitor] ERROR em.jobs pingPastDue.604 - Job step continuing
    2009-04-18 15:57:13,362 [HealthMonitor] ERROR emd.main run.291 - HealthMonitor : Found errant task : TaskRegn:ID3440,Callback:class oracle.sysman.emdrep.jobs.JobWorker,Iterative:true,Duration:900,DueTime:1240059433359
    2009-04-18 15:57:13,363 [HealthMonitor] ERROR em.jobs pingPastDue.602 - Entry - 15 Minute timeout error for jobstep:Stepname: Command
    Commandname: remoteOp
    Commandtype: Short-Running
    StepId: 97914
    jobIdStr: 679733b18b7a5698e04400144fa10220
    executionIdStr: 67bd0ff86c9b4268e04400144fa10220
    iterateParam: null
    iterateParamIndex: -1
    Swaid

  • Can we use both 0FI_AP_3 and 0FI_AP_4 for Delta Loads at the same time.....

    Hi Gurus:
    Currently my company uses 0FI_AP_3 for some A/P reporting. It has been heavily customized and uses delta loading. However, SAP recommends the use of 0FI_AP_4 for A/P data in delta loads. I was able to activate 0FI_AP_4 as well and do some full loads in the dev/test boxes. The question is whether I can use both extractors for delta loads at the same time. If there are any issues, what are they and how can I resolve them? Is the use of only one extractor recommended?
    Please let me know, as this impacts a lot of my development. Thanks!
    Best, ShruMaa
    PS: I had posted this in the "BI Extractors" forum but there has been no response. Hope to get some response here! Thanks

    Hi,
    I would recommend using 0FI_AP_4 rather than both, for several reasons:
    1. DataSource 0FI_AP_4 replaces DataSource 0FI_AP_3 and still uses the same extraction structure. For more details refer to OSS note 410797.
    2. You can run 0FI_AP_4 independently of any other FI DataSources such as 0FI_AR_4 and 0FI_GL_4, or even 0FI_GL_14. For more details refer to OSS note 551044.
    3. Map 0FI_AP_4 to the DSO 0FIAP_O03 (or create a Z one as per your requirement).
    4. Load the same into an InfoCube (0FIAP_C03).
    Hope this helps.
    Thanks.
    Nazeer

  • Error while running ETL for Financials_Oracle R1213.

    Hi All,
    I have completed the installation and configuration for OBIA 7.9.6.4. I am getting the following errors while running the ETL for Financials_Oracle R1213.
    ===========================================================================================================
    1) While starting the DAC server, I get the following error:
    SEVERE: Incorrectly specified Post-Etl Script/Executable
    ================================================================================================================
    2) After starting the ETL Financials_Oracle R1213:
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SILOS:SIL_InsertRowInRunTable:1:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    pmcmd startworkflow -sv BIA_IS -d Domain_oracle2go2.us.oracle.com -u Administrator -p ****  -f SILOS  -paramfile /home/oracle/Informatica/9.0.1/server/infa_shared/SILOS.SIL_InsertRowInRunTable.ORA_R1213_Flatfile.txt  SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.IrrecoverableException
    com.siebel.analytics.etl.etltask.InformaticaTask.doExecute(InformaticaTask.java:254)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:477)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:372)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:253)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:655)
    com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:619)
    (Number of retries : 1)
    pmcmd startworkflow -sv BIA_IS -d Domain_oracle2go2.us.oracle.com -u Administrator -p ****  -f SILOS  -paramfile /home/oracle/Informatica/9.0.1/server/infa_shared/SILOS.SIL_InsertRowInRunTable.ORA_R1213_Flatfile.txt  SIL_InsertRowInRunTable
    2013-11-27 11:26:59.923 INFORMATICA TASK:SILOS:SIL_InsertRowInRunTable:1:(Source : FULL Target : FULL) has finished execution with Failed status.
    2013-11-27 11:26:28.855 Acquiring Resources
    2013-11-27 11:26:28.857 Acquired Resources
    2013-11-27 11:26:28.858 INFORMATICA TASK:SILOS:SIL_InsertRowInRunTable:1:(Source : FULL Target : FULL) has started.
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SILOS:SIL_InsertRowInRunTable:1:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    pmcmd startworkflow -sv BIA_IS -d Domain_oracle2go2.us.oracle.com -u Administrator -p ****  -f SILOS  -paramfile /home/oracle/Informatica/9.0.1/server/infa_shared/SILOS.SIL_InsertRowInRunTable.ORA_R1213_Flatfile.txt  SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.IrrecoverableException
    com.siebel.analytics.etl.etltask.InformaticaTask.doExecute(InformaticaTask.java:254)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:477)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:372)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:253)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:655)
    com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:619)
    (Number of retries : 1)
    pmcmd startworkflow -sv BIA_IS -d Domain_oracle2go2.us.oracle.com -u Administrator -p ****  -f SILOS  -paramfile /home/oracle/Informatica/9.0.1/server/infa_shared/SILOS.SIL_InsertRowInRunTable.ORA_R1213_Flatfile.txt  SIL_InsertRowInRunTable
    2013-11-27 11:26:59.923 INFORMATICA TASK:SILOS:SIL_InsertRowInRunTable:1:(Source : FULL Target : FULL) has finished execution with Failed status.
    ====================================================================================================================
    Could anyone please help me to resolve this error?
    Regards,
    Narottam

    Did you configure Informatica?

  • Initial load of DNL_CUST_CNDALL in running status

    Hi ALL,
    I am doing the initial load of DNL_CUST_CNDALL; it is in Running status in R3AM1.
    Prior to this I downloaded several other customizing objects, and they were downloaded successfully. My system has the highest support package implemented according to the SAP Note I found.
    Also, there is no inbound queue in CRM and no outbound queue in R/3 for this object.
    All the queues are also registered in both systems (SMQR and SMQS).
    After aborting the object in R3AM1, I tried downloading it again several times; still no entries appear in any queues.
    Please help me with this.

    Hi  Mahaadhevan,
    Please also check the table CRMRFCPAR, where the entry for the logical system is stored. Also check notes 588701 and 76501, and the SAP Best Practices for Connectivity and Replication.
    Please reward with points if it helps.
    Regards,
    AndreA

  • Reports Stuck in Running Status

    We are having a problem where reports are stuck in Running status. When looking at the server, the reports stuck in Running status each take up a "jobserverchild" process and do not complete.
    Has anyone experienced this issue, and how can we resolve it?

    Ok - here we go - we have been experiencing this issue for the longest time - it looks like we have found the culprit:
    1) Check the default printer of the report developer who placed this report into Enterprise.
    2) Ensure that exact same driver is on the Business Objects Server.
    3) Report should schedule fine.
    Another option is to set "No Printer" within the report design itself by clicking on File > Page Setup. This way there is no dependency on printer drivers.
    Edited by: Troy Underwood on Nov 25, 2009 11:29 PM
