CIFS Initializing

Hi there
I'm struggling with the CIFS instance not going to the Running state. I have tried rebooting the device and reinstalling the software, with no luck. Does anyone have ideas, or has anyone had the same problem?
sabf-0057-mkuze-wae-1#sh alarms
Critical Alarms:
        Alarm ID                 Module/Submodule               Instance
   1 cifs_ao_down              wafs                         WAFS                    
Major Alarms:
None
Minor Alarms:
None
sabf-0057-mkuze-wae-1#sh acce
Accelerator     Licensed        Config State    Operational State            
cifs            Yes             Enabled         Initializing
epm             Yes             Enabled         Running   
http            Yes             Enabled         Running   
mapi            Yes             Disabled        Shutdown  
nfs             Yes             Enabled         Running   
ssl             Yes             Enabled         Running   
video           No              Enabled         Shutdown  
sabf-0057-mkuze-wae-1#sh version
Cisco Wide Area Application Services Software (WAAS)
Copyright (c) 1999-2011 by Cisco Systems, Inc.
Cisco Wide Area Application Services (accelerator-k9) Software Release 4.4.1 (build b12 May 27 2011)
Version: nme-wae-502-4.4.1.12
Compiled 08:42:00 May 27 2011 by damaster
Device Id: 00:1d:a2:ec:3f:91
System was restarted on Tue Jan  3 13:51:44 2012.
The system has been up for 1 day, 19 hours, 34 minutes, 28 seconds.
sabf-0057-mkuze-wae-1#

sabf-0057-mkuze-wae-1#sh disks detail
Physical disk information:
  disk00: Present    SB3D34EWHB1ZUS     (h01 c00 i00 l00 - Int DAS-SATA)
          114470MB(111.8GB)
Mounted file systems:
MOUNT POINT      TYPE       DEVICE                SIZE     INUSE      FREE USE%
/swstore         internal   /dev/sdb2            991MB     724MB     267MB  73%
/state           internal   /dev/sdb3           3967MB     134MB    3833MB   3%
/local/local1    SYSFS      /dev/sdb7           5951MB    1289MB    4662MB  21%
/sw              internal   /dev/sdb1            991MB     698MB     293MB  70%
/disk00-04       CONTENT    /dev/sdb6          96995MB   37330MB   59665MB  38%
.../local1/spool PRINTSPOOL /dev/sdb8            991MB      16MB     975MB   1%
No RAID devices present.
Disk encryption feature is disabled.
sabf-0057-mkuze-wae-1#
2012 Jan  3 10:35:20 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao' using '/diamond/bin/CheckLicenseScriptAndStopAO.sh CIFS' with pid 9340
2012 Jan  3 08:41:29 NO-HOSTNAME Nodemgr: %WAAS-NODEMGR-5-330080: [Nodemgr_Admin] requests start service cifsao
2012 Jan  3 08:41:29 NO-HOSTNAME ver_ao_common: %WAAS-CLI-5-170053: successful nodemgr start cifsao request
2012 Jan  3 10:48:17 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao' using '/diamond/bin/CheckLicenseAndStartAO.sh CIFS /sw/actona/bin/cifs_start.sh' with pid 5893
2012 Jan  3 11:27:48 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao' using '/diamond/bin/CheckLicenseScriptAndStopAO.sh CIFS' with pid 20230
2012 Jan  3 09:33:20 NO-HOSTNAME Nodemgr: %WAAS-NODEMGR-5-330080: [Nodemgr_Admin] requests start service cifsao
2012 Jan  3 09:33:20 NO-HOSTNAME ver_ao_common: %WAAS-CLI-5-170053: successful nodemgr start cifsao request
2012 Jan  3 11:35:31 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao' using '/diamond/bin/CheckLicenseAndStartAO.sh CIFS /sw/actona/bin/cifs_start.sh' with pid 4846
2012 Jan  3 13:47:10 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao' using '/diamond/bin/CheckLicenseScriptAndStopAO.sh CIFS' with pid 18339
2012 Jan  3 11:54:54 NO-HOSTNAME Nodemgr: %WAAS-NODEMGR-5-330080: [Nodemgr_Admin] requests start service cifsao
2012 Jan  3 11:54:54 NO-HOSTNAME ver_ao_common: %WAAS-CLI-5-170053: successful nodemgr start cifsao request
2012 Jan  3 13:57:02 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao' using '/diamond/bin/CheckLicenseAndStartAO.sh CIFS /sw/actona/bin/cifs_start.sh' with pid 5779
2012 Jan  4 08:00:13 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330082: [Nodemgr_Admin] requests stop service cifsao.
2012 Jan  4 08:00:13 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao' using '/diamond/bin/CheckLicenseScriptAndStopAO.sh CIFS' with pid 31494
2012 Jan  4 08:00:13 sabf-0057-mkuze-wae-1 ver_ao_common: %WAAS-CLI-5-170053: successful nodemgr stopped cifsao request
2012 Jan  4 08:00:45 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330080: [Nodemgr_Admin] requests start service cifsao
2012 Jan  4 08:00:45 sabf-0057-mkuze-wae-1 ver_ao_common: %WAAS-CLI-5-170053: successful nodemgr start cifsao request
2012 Jan  4 08:02:17 sabf-0057-mkuze-wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao' using '/diamond/bin/CheckLicenseAndStartAO.sh CIFS /sw/actona/bin/cifs_start.sh' with pid 32332
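
The log excerpt above shows the cifsao service being stopped and started repeatedly. To quantify the flapping, a small ad-hoc script could tally the start/stop events by their message IDs (a hypothetical helper, not part of WAAS; the sample lines below are shortened versions of the messages above, and the IDs 330040/330036 are taken from the excerpt itself):

```python
import re

# Shortened sample lines modeled on the syslog excerpt above.
SAMPLE_LINES = [
    "2012 Jan  3 10:35:20 wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao'",
    "2012 Jan  3 10:48:17 wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao'",
    "2012 Jan  3 11:27:48 wae-1 Nodemgr: %WAAS-NODEMGR-5-330036: Stop service 'cifsao'",
    "2012 Jan  3 11:35:31 wae-1 Nodemgr: %WAAS-NODEMGR-5-330040: Start service 'cifsao'",
]

def count_cifsao_events(lines):
    """Tally nodemgr start (330040) and stop (330036) events for cifsao."""
    counts = {"start": 0, "stop": 0}
    for line in lines:
        if "cifsao" not in line:
            continue
        m = re.search(r"%WAAS-NODEMGR-5-3300(36|40)", line)
        if m:
            counts["stop" if m.group(1) == "36" else "start"] += 1
    return counts

print(count_cifsao_events(SAMPLE_LINES))  # {'start': 2, 'stop': 2}
```

Run against the full syslog, a roughly equal and growing start/stop count while the accelerator stays in Initializing would suggest the AO is restart-looping rather than simply starting slowly.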

Similar Messages

  • Sales order CIF performance

    Hello SCM experts,
    We are trying a CIF initial load of sales orders for 2 materials; each material has variant configuration, with 25 APO-relevant VC characteristics per material.
    Therefore, the total volume of VC materials = 2.
    Each material has 30,000 sales orders per month.
    We are trying to integrate the sales orders into SCM 7.0; the total volume for the initial load is 100,000 sales orders.
    Currently, the performance of this integration is poor: the load takes around 20 hours.
    I want to know whether other clients run comparable sales order volumes.
    Has anyone seen a sales order volume of 100,000 with variant configuration? What is the best performance that should be expected for this volume?
    Appreciate any inputs.

    Hi Louis,
    Have you considered using parallel processing while activating the initial load integration model?
    Processing the sales orders in parallel should definitely improve performance.
    Please let us know your latest situation.
    Regards,
    Abhay Kapase

  • Initial settings from APO- ECC for CIF

    HI Gurus,
               Please can anyone help me: what settings are needed in the APO system to connect to ECC and move orders from APO -> ECC? Please let me know step by step if possible.
    Thanks for your help.
    Thanks& Regards,
    Kumar

    Hi Kumar,
    To connect ECC to APO you need the following settings.
    Settings in R/3:
    •     Check ALE settings or activate ALE settings
    •     Define the logical system -- BD54
    •     Assign logical system to a client -- SCC4
    •     Set the RFC destination -- SM59
    •     Assign target system and queue type -- CFC1
    •     Maintain the SAP APO release -- NDV2
    •     Activate BTEs for SAP APO integration -- BF11
    •     User parameters for CIF -- CFC2
    Settings in APO:
    •     Check ALE settings or activate ALE settings
    •     Define the logical system -- BD54
    •     Assign logical system to a client -- SCC4
    •     Set the RFC destination -- SM59
    •     Set up the business system group:
    •     Create BSG -- /SAPAPO/C1
    •     Assign logical systems to BSG -- /SAPAPO/C2
    •     Maintain distribution definitions -- /SAPAPO/CP1
    •     User parameters for CIF -- /SAPAPO/C4
    Regards,
    MJ

  • Trying to resolve ntlmv errors mounting CIFS network shares via fstab

    Kernel: 3.4.2-2
    WM: Openbox
    About 6 months or so ago, which was after about a year on my current install with no issue, I began getting an ntlmv error when auto mounting samba shares at
    boot.  Everything still worked but I continued getting an error message.
    My fstab entry at that time looked like this:
    //<LAN_IP>/<share name>/ /mnt/Serverbox cifs credentials=/path/to/file,file_mode=0777,dir_mode=0777 0 0
    The error I received looked like this:
    CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel release 3.3
    So I did what research I could on the error, found the "sec" option, and discovered that adding "sec=ntlmv2" to the fstab entry above got
    rid of the error message; everything still worked perfectly, until this weekend.
    After upgrading both machines this weekend I noticed a new boot time error message and saw that my shares were no longer being mounted.
    relevant boot log:
    Mounting Network Filesystems [BUSY] mount error(22): Invalid argument
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
    relevant everything log:
    CIFS VFS: bad security option: ntlmv2
    /var/log/pacman from the the weekend's upgrade:
    [2012-06-16 13:03] Running 'pacman -Syu'
    [2012-06-16 13:03] synchronizing package lists
    [2012-06-16 13:03] starting full system upgrade
    [2012-06-16 13:10] removed dbus-python (1.0.0-1)
    [2012-06-16 13:10] upgraded linux-api-headers (3.3.2-1 -> 3.3.8-1)
    [2012-06-16 13:10] Generating locales...
    [2012-06-16 13:10] en_US.UTF-8... done
    [2012-06-16 13:10] en_US.ISO-8859-1... done
    [2012-06-16 13:10] Generation complete.
    [2012-06-16 13:10] upgraded glibc (2.15-10 -> 2.15-11)
    [2012-06-16 13:10] upgraded bison (2.5-3 -> 2.5.1-1)
    [2012-06-16 13:10] upgraded libpng (1.5.10-1 -> 1.5.11-1)
    [2012-06-16 13:10] upgraded cairo (1.12.2-1 -> 1.12.2-2)
    [2012-06-16 13:10] upgraded libwbclient (3.6.5-2 -> 3.6.5-3)
    [2012-06-16 13:10] upgraded cifs-utils (5.4-1 -> 5.5-1)
    [2012-06-16 13:10] upgraded sqlite (3.7.12.1-1 -> 3.7.13-1)
    [2012-06-16 13:10] upgraded colord (0.1.21-1 -> 0.1.21-2)
    [2012-06-16 13:10] installed pambase (20120602-1)
    [2012-06-16 13:10] upgraded pam (1.1.5-3 -> 1.1.5-4)
    [2012-06-16 13:10] upgraded libcups (1.5.3-4 -> 1.5.3-5)
    [2012-06-16 13:10] upgraded cups (1.5.3-4 -> 1.5.3-5)
    [2012-06-16 13:10] installed python-dbus-common (1.1.0-2)
    [2012-06-16 13:10] installed python2-dbus (1.1.0-2)
    [2012-06-16 13:10] upgraded dconf (0.12.1-1 -> 0.12.1-2)
    [2012-06-16 13:10] upgraded desktop-file-utils (0.19-1 -> 0.20-1)
    [2012-06-16 13:10] upgraded firefox (13.0-2 -> 13.0.1-1)
    [2012-06-16 13:10] upgraded freetype2 (2.4.9-2 -> 2.4.10-1)
    [2012-06-16 13:10] upgraded initscripts (2012.05.1-3 -> 2012.06.1-1)
    [2012-06-16 13:10] upgraded jre7-openjdk-headless (7.u4_2.2-1 -> 7.u5_2.2.1-1)
    [2012-06-16 13:10] upgraded jre7-openjdk (7.u4_2.2-1 -> 7.u5_2.2.1-1)
    [2012-06-16 13:10] upgraded jdk7-openjdk (7.u4_2.2-1 -> 7.u5_2.2.1-1)
    [2012-06-16 13:10] upgraded kdelibs (4.8.4-1 -> 4.8.4-2)
    [2012-06-16 13:10] upgraded libdrm (2.4.33-1 -> 2.4.35-1)
    [2012-06-16 13:10] upgraded libglapi (8.0.3-2 -> 8.0.3-3)
    [2012-06-16 13:10] upgraded liblrdf (0.4.0-9 -> 0.5.0-1)
    [2012-06-16 13:10] upgraded libmysqlclient (5.5.24-1 -> 5.5.25-1)
    [2012-06-16 13:10] installed khrplatform-devel (8.0.3-3)
    [2012-06-16 13:10] installed libegl (8.0.3-3)
    [2012-06-16 13:10] upgraded nvidia-utils (295.53-1 -> 295.59-1)
    [2012-06-16 13:10] upgraded libva (1.0.15-1 -> 1.1.0-1)
    [2012-06-16 13:10] upgraded mkinitcpio (0.9.1-1 -> 0.9.2-2)
    [2012-06-16 13:10] >>> Updating module dependencies. Please wait ...
    [2012-06-16 13:10] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2012-06-16 13:10] ==> Building image from preset: 'default'
    [2012-06-16 13:10] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
    [2012-06-16 13:10] ==> Starting build: 3.4.2-2-ARCH
    [2012-06-16 13:10] -> Running build hook: [base]
    [2012-06-16 13:10] -> Running build hook: [udev]
    [2012-06-16 13:10] -> Running build hook: [autodetect]
    [2012-06-16 13:10] -> Running build hook: [pata]
    [2012-06-16 13:10] -> Running build hook: [scsi]
    [2012-06-16 13:10] -> Running build hook: [sata]
    [2012-06-16 13:10] -> Running build hook: [filesystems]
    [2012-06-16 13:10] -> Running build hook: [usbinput]
    [2012-06-16 13:10] -> Running build hook: [fsck]
    [2012-06-16 13:10] ==> Generating module dependencies
    [2012-06-16 13:10] ==> Creating xz initcpio image: /boot/initramfs-linux.img
    [2012-06-16 13:10] ==> Image generation successful
    [2012-06-16 13:10] ==> Building image from preset: 'fallback'
    [2012-06-16 13:10] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
    [2012-06-16 13:10] ==> Starting build: 3.4.2-2-ARCH
    [2012-06-16 13:10] -> Running build hook: [base]
    [2012-06-16 13:10] -> Running build hook: [udev]
    [2012-06-16 13:10] -> Running build hook: [pata]
    [2012-06-16 13:10] -> Running build hook: [scsi]
    [2012-06-16 13:10] -> Running build hook: [sata]
    [2012-06-16 13:10] -> Running build hook: [filesystems]
    [2012-06-16 13:10] -> Running build hook: [usbinput]
    [2012-06-16 13:10] -> Running build hook: [fsck]
    [2012-06-16 13:10] ==> Generating module dependencies
    [2012-06-16 13:10] ==> Creating xz initcpio image: /boot/initramfs-linux-fallback.img
    [2012-06-16 13:11] ==> Image generation successful
    [2012-06-16 13:11] upgraded linux (3.3.8-1 -> 3.4.2-2)
    [2012-06-16 13:11] upgraded lirc-utils (1:0.9.0-16 -> 1:0.9.0-18)
    [2012-06-16 13:11] upgraded mesa (8.0.3-2 -> 8.0.3-3)
    [2012-06-16 13:11] upgraded mysql-clients (5.5.24-1 -> 5.5.25-1)
    [2012-06-16 13:11] upgraded mysql (5.5.24-1 -> 5.5.25-1)
    [2012-06-16 13:11] upgraded nvidia (295.53-1 -> 295.59-1)
    [2012-06-16 13:11] upgraded opencl-nvidia (295.53-1 -> 295.59-1)
    [2012-06-16 13:11] upgraded pango (1.30.0-1 -> 1.30.1-1)
    [2012-06-16 13:11] upgraded pcmanfm (0.9.10-1 -> 0.9.10-2)
    [2012-06-16 13:11] upgraded psmisc (22.16-1 -> 22.17-1)
    [2012-06-16 13:11] upgraded smbclient (3.6.5-2 -> 3.6.5-3)
    [2012-06-16 13:11] upgraded thunderbird (13.0-1 -> 13.0.1-1)
    [2012-06-16 13:11] upgraded udisks2 (1.94.0-1 -> 1.94.0-2)
    [2012-06-16 13:11] upgraded unrar (4.2.3-1 -> 4.2.4-1)
    [2012-06-16 13:11] upgraded virtualbox-archlinux-modules (4.1.16-1 -> 4.1.16-2)
    [2012-06-16 13:11] In order to use the new version, reload all virtualbox modules manually.
    [2012-06-16 13:11] upgraded virtualbox-modules (4.1.16-1 -> 4.1.16-2)
    [2012-06-16 13:11] upgraded xine-ui (0.99.6-5 -> 0.99.7-1)
    [2012-06-16 13:11] Running 'pacman -Syy'
    [2012-06-16 13:11] synchronizing package lists
    [2012-06-16 13:12] Running 'pacman -Syu'
    [2012-06-16 13:12] synchronizing package lists
    [2012-06-16 13:12] starting full system upgrade
    [2012-06-16 13:13] upgraded lib32-freetype2 (2.4.9-1 -> 2.4.10-1)
    [2012-06-16 13:13] upgraded lib32-gnutls (3.0.19-1 -> 3.0.20-1)
    [2012-06-16 13:13] upgraded lib32-krb5 (1.10.1-2 -> 1.10.2-1)
    [2012-06-16 13:13] upgraded lib32-libpng (1.5.10-2 -> 1.5.11-1)
    [2012-06-16 13:13] upgraded lib32-libx11 (1.4.99.902-1 -> 1.5.0-1)
    [2012-06-16 13:13] upgraded lib32-nvidia-utils (295.53-1 -> 295.59-1)
    [2012-06-16 13:13] upgraded lib32-sqlite3 (3.7.11-1 -> 3.7.13-1)
    [2012-06-16 13:13] upgraded lib32-util-linux (2.21.1-1 -> 2.21.2-1)
    [2012-06-16 13:13] upgraded lib32-xcb-util (0.3.8-1 -> 0.3.9-1)
    [2012-06-16 13:13] upgraded wine (1.5.5-1 -> 1.5.6-1)
    Currently, returning to the old fstab entry once again gives the initial warning about the security mechanism being upgraded in kernel release x.x (the release number always seemed to change with each kernel change), though the shares seem to mount just fine. I've looked through the wiki, the man pages on die.net, and googled everything I can think of. I find a lot of pages mentioning ntlmv errors with no solutions, and many telling me that ntlm and ntlmv2 are mount options, but nothing that gives me any indication of why I might be getting this error or how to go about looking for a solution.
    I've looked through the pacman logs on both my desktop and my file server that I'm connecting to in an effort to determine what might have changed and I found that:
    the smbclient package had been upgraded on both machines, so I tried downgrading back to version 3.6.5-2, but there was no change after rebooting.
    I also found that cifs-utils had been upgraded on the file server, so I downgraded that as well to the previous version (5.4-1) and rebooted both machines; I'm still getting the same invalid argument error.
    I've now gone back and upgraded the downgraded packages on each machine to their most recent versions, but I'm at a loss as to what my next steps should be. Where do I go from here to track this down and determine whether this is a bug or a configuration error? Is there a cleaner way of mounting these shares that I should be using instead of fstab?
    Thank you.
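
For reference, this is the general shape of the fstab line under discussion. The sec= values accepted by mount.cifs include ntlm, ntlmi, ntlmv2, ntlmv2i, ntlmssp and ntlmsspi; the IP, share and credentials path below are placeholders, and sec=ntlmssp is only one value worth trying here, untested on this setup:

```
# /etc/fstab sketch -- placeholder host, share and credentials file:
//192.168.1.10/share  /mnt/Serverbox  cifs  credentials=/etc/cifs-creds,sec=ntlmssp,file_mode=0777,dir_mode=0777  0  0
```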

    I had the same issue. After upgrading the kernel to 3.4.5 today, the cifs share mounted with the original fstab settings. I believe it was caused by this bug:
    kernel changelog wrote:    The double delimiter check that allows a comma in the password parsing code is
        unconditional. We set "tmp_end" to the end of the string and we continue to
        check for double delimiter. In the case where the password doesn't contain a
        comma we end up setting tmp_end to NULL and eventually setting "options" to
        "end". This results in the premature termination of the options string and hence
        the values of UNCip and UNC are being set to NULL. This results in mount failure
        with "Connecting to DFS root not implemented yet" error.
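
The failure mode that changelog entry describes can be sketched as a toy re-creation in Python (my own simplification, not the actual kernel C; the ",," comma-escape path is deliberately stubbed out, since only the no-comma case matters here):

```python
def parse_mount_options_buggy(options):
    """Toy model of the quoted bug: a password may contain a literal comma
    written as ",", escaped as ",,", but the buggy scan for that double
    delimiter runs even when the password has no comma, so the end marker
    ends up bogus and the remaining options (unc=, ip=, ...) are swallowed."""
    parsed = {}
    while options:
        if options.startswith("pass="):
            # Unconditional double-delimiter scan -- the bug:
            tmp_end = options.find(",,", len("pass="))
            if tmp_end == -1:
                tmp_end = None  # "end of string": consumes everything left
            parsed["pass"] = options[len("pass="):tmp_end]
            options = "" if tmp_end is None else options[tmp_end + 2:]
        else:
            head, _, options = options.partition(",")
            key, _, val = head.partition("=")
            parsed[key] = val
    return parsed

result = parse_mount_options_buggy("user=bob,pass=secret,unc=//server/share")
# A comma-free password swallows the trailing options, so the UNC path is
# never parsed -- matching the mount failure described above.
print("unc" in result)  # False
print(result["pass"])   # secret,unc=//server/share
```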

  • 7310 - Problem with CIFS and Roaming Profiles since upgrade to 2010.Q1.0.2

    We have developed a strange problem in our environment which I'm pretty sure is down to the upgrade from 2009.Q3.4.1 to 2010.Q1.0.2 on our 7310 (all the previous 2009 releases had been fine), since nothing else has changed in our environment. I suspect some change to the underlying CIFS server is causing this?
    We have virtual Windows servers hosted on a VMWare VSphere cluster which are stored on the 7310 via iSCSI LUNs and also CIFS shares on the 7310 for home directories and separate CIFS shares for roaming profiles - all paths are correct in AD for each user - we also use folder redirection for XP Pro clients to force things like "Application Data", "My Documents" etc. onto the Homedir share.
    What we've been seeing recently (which only started happening after the upgrade) is a lot of failed logons to the domain for users. It looks like the usual corrupted profile problem that has plagued Windows forever ...the usual messages that it cannot log the user on with a copy of their roaming profile, and that it will use a temporary one. Some folder redirections (that are initiated via Group Policy) also don't get applied. Users don't see errors when logging off from a "good" profile, and NTUSER.DAT etc. seemingly gets written correctly - the next time they log on, around half the time the users will get these errors as described below:
    Event viewer logs show "cannot find the file specified" errors for NTUSER.DAT, along with "directory name is invalid" errors for some of the folder redirections.
    More worrying (and what I think might be the real reason for these failures) are the "offline caching is enabled on the roaming profile share" errors. I think that the client-side caching might not be working - possibly the profiles aren't getting flushed and written correctly upon logout?
    Now, unfortunately the MMC snap-in for managing shares doesn't seem to support changing the behaviour for client-side caching on the CIFS shares (as confirmed in the latest 7000-series Admin Guide on page 198).
    I've been thinking about unchecking the "Enable Oplocks" box which from the CIFS side would completely stop all client-side caching I presume?
    Is this likely to be the culprit here, or is there any other known behaviour that could be causing these errors? Is it also worth disabling "Cache device usage" altogether for the Profiles share itself?
    Can anyone help? It's a bit of a strange problem, and something I don't want to raise with Sun on our support contract just yet, since at first glance it looks like a Windows problem, but I suspect the storage could well be to blame...

    Unfortunately, this is still not working correctly...
    So, it looks like it's not related to the offline caching seeing as it all works on the Q2009 despite the warnings...
    Some more errors coming out of userenv.log on the affected Windows machines:
    USERENV(280.284) 10:07:33:230 ReconcileFile: GetFileAttributes on the source failed with error = 2
    USERENV(280.284) 10:07:33:230 CopyProfileDirectoryEx: ReconcileFile failed with error = 2
    and later:
    USERENV(280.284) 10:07:33:245 GetShareName: WNetGetConnection initially returned error 2250
    USERENV(280.284) 10:07:33:245 CopyProfileDirectoryEx: Leaving with a return value of 0
    USERENV(280.284) 10:07:33:245 RestoreUserProfile: CopyProfileDirectory failed. Issuing default profile
    ...which then forces the TEMP profiles.
    All other errors linked to this look like "file not found", "invalid path" etc. when the files are present and the paths are correct.
    Manually mapping drives using CIFS with UNC paths sporadically fails too now. We have a bunch of GPOs that map shares to users depending on their group memberships - these too are sporadically failing.
    It certainly looks to me like it could be a CIFS problem introduced in the Q2010 release.
    I'm going to raise a ticket with Sun...

  • CIF of material classification data

    Dear Experts,
    Since a recent technical upgrade to our systems, I have seen a change in the behavior of the integration of material classification data from ECC to SCM. Before the upgrade, classification data transferred with the initial CIF of the material. If material classification did not exist on the ECC material at the time the material was added to a new integration model and activated, we "missed the boat", so to speak: classification would not automatically transfer (CIF) to APO when it finally was loaded in ECC, even if the application area/integration models were correct. As a workaround, we would be forced to generate and activate a material model that specifically excluded the material, then generate and activate another material integration model that included it again. Then the classification would transfer.
    After the upgrade, the opposite condition is true: the "initial CIF" of the material will not transfer material classification data at all; it will only transfer if the classification data is added/changed after the material itself is transferred to APO. As a workaround for this, if the classification data is already (correctly) loaded in ECC at the time of the initial material CIF, we change the status of the material classification from 1 to 3 and back to 1, just to trigger a change; then the classification is transferred instantly to APO.
    Are there configuration settings that will allow material classification data to CIF in both cases: with the initial CIF of the material to SCM, and also in the event of a change and/or addition of classification data in ECC? Thanks for any advice you can provide.
    Best Regards,
    Jeremy

    Dear Matt,
    What you are looking for is achieved by downloading the customizing objects from R/3 to SRM.
    This is a must before you download materials/services from R/3 to SRM.
    There are 4 customizing objects:
    DNL_CUST_BASIS3 - This takes care of the Basis settings in SRM for downloading.
    DNL_CUST_PROD0 - This takes care of material number conversions from R/3 to SRM.
    DNL_CUST_PROD1 - This CO actually brings the material groups from R/3 to SRM as product categories.
    The result you can see in transaction COMM_HIERARCHY (not in COMMPR01, which is for materials/services).
    There, under R3MATCLASS, you can see replicated and locally created product categories,
    and under R3PRODSTYPE you can see the material types from R/3.
    DNL_CUST_SRVMAS - This takes care of Customizing: Service Master.
    The transaction for this is R3AS in SRM,
    and you can monitor the same through R3AM1.
    You can also check the qRFC queues of R/3 and SRM with SMQ1 and SMQ2.
    The next step after this is material/service downloading from R/3, which is done with the same transaction R3AS but with the business objects
    Material and
    Service_master.
    You can set a filter in transaction R3AC1 for this.
    BR
    Dinesh
    (Helpful? pl. award the points)

  • CIF to DP Planning Area

    Hello Everybody,
    Wishing you all, in advance, great year-end fun.
    I understand you will be in a relaxed year-end mood, but I'd appreciate quick answers to this.
    Two questions:
    - What is the technical and functional difference between tRFC and CIF?
    The second question is related.
    I wanted to know whether R/3 master data can be CIFed directly to the MPOS (master planning object structure). Say the objective is not to use the APO data mart objects (which I understand can be loaded manually, or through something called LIS structures via tRFC, to extract data from R/3 into cubes first and then create the planning objects and the dependent planning areas, books and views).
    Though I also understand that I can plan on any self-defined characteristics in DP, these characteristics, as I imagine, are largely subsets of the R/3 master data (what more could there possibly be?), viz. product, location, region, sales area, sales org, customer, division, channel, customer group, sales group, plant, storage location, etc. Could someone quote some planning characteristics that do not exist in R/3 but may be required for demand planning in some IS scenario (in which case my concern is unfounded)? The concern is that, as I gather, the SNP characteristics should match the DP planning characteristics as a precondition for releasing key figures from DP to SNP and back. I tried reading some documentation but didn't quite understand this particular point.
    Does this question (CIFing the desired master data to the MPOS as characteristics for demand planning) make sense? If so, how do the key figures flow? (Logically, the key figure values corresponding to a given characteristic combination in R/3 should also correspond to the generated characteristic combinations of the MPOS. In this case most key figures may even be empty, and hence irrelevant for supply planning... I am still reading the help files.)
    Any additional concepts relevant to this question will be highly appreciated.
    Thanks and Regards,
    Loknath Rao
    Mumbai, India

    Hi Loknath,
    Let me make an attempt to respond to your queries.
    1. RFCs enable you to call and execute predefined functions in a remote system - or in the same system. They manage the communication process, parameter transfer and error handling 
    tRFC - transactional RFC (Remote Function Call)
    Link to
    <a href="http://help.sap.com/saphelp_scm41/helpdata/en/22/042abd488911d189490000e829fbbd/frameset.htm">SAP Help</a>
    CIF is the standard Interface which in turn uses qRFC technology for data transfer.
    From <a href="http://help.sap.com/saphelp_scm41/helpdata/en/84/23efe535644c4d9c92dc1abfcb3310/frameset.htm">SAP Help</a>:
    Communication between SAP R/3 and SAP APO is based on the asynchronous transfer technique of queued Remote Function Calls (qRFC). This technique is used when integrating SAP APO with an SAP R/3 System, both for the initial data supply and transfer of data changes (R/3->APO), as well as for the publication of planning results (APO->R/3). 
    2. I understand your doubts about using R/3 master data for planning in APO-DP. A characteristic is a data mart object which can be mapped to R/3 master data like Product (or Material), Location (or Plant), Region, Sales Area, Customer, Division, etc. When you CIF the master data from R/3 to APO, the corresponding master data gets created in APO. However, in DP we work with characteristics and, more importantly, the "values" (CVCs = Characteristic Value Combinations) of those characteristics. For example, you have CIFed 4 products, Produkt1, Produkt2, Produkt3 and Produkt4, to APO; in APO-DP the characteristic "values" need to be only Produkt1, Produkt2, Produkt3 and Produkt4.
    The most important master data in APO-DP is the CVCs: the unique "values" across all the characteristics defined in the Master Planning Object Structure. So a typical CVC will look like Produkt1-Customer1-Plant2-SalesArea3, a unique key over the possible master data.
    SNP, on the other hand, needs Product and Location combination master data (commonly Location Product, /SAPAPO/MAT1). As regards mapping DP to SNP, there is a setting to map the custom Product (say ZPRODUKT) and Location (say ZLOCATION) characteristics used in DP to the SNP Location-Product master data 9AMATNR and 9ALOCNO.
    As far as key figures go, they are basically transaction data. Moreover, APO-DP uses only time-series key figures, which cannot be "filled in" using CIF. CIF transfers transaction data, which can be "filled in" only for order-series type key figures, as used in SNP.
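
The CVC idea above can be sketched in plain Python (illustrative only, not SAP code; the characteristic names and values are invented):

```python
# A CVC (Characteristic Value Combination) is the unique tuple of values
# across all characteristics of the master planning object structure.
records = [
    {"product": "Produkt1", "customer": "Customer1", "plant": "Plant2"},
    {"product": "Produkt1", "customer": "Customer1", "plant": "Plant2"},  # duplicate
    {"product": "Produkt2", "customer": "Customer1", "plant": "Plant1"},
]
characteristics = ("product", "customer", "plant")

# De-duplicating the value tuples yields the CVCs: two unique combinations.
cvcs = {tuple(r[c] for c in characteristics) for r in records}
print(sorted(cvcs))
# [('Produkt1', 'Customer1', 'Plant2'), ('Produkt2', 'Customer1', 'Plant1')]
```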
    Hope this clarifies your doubt.
    While you are exploring and getting a good understanding from the forum posts, may I suggest you compile your learnings and put them up on the relevant <a href="https://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=5799">SCM-APO Wiki</a> pages.
    Regards,
    somnath
    Further links and explanation added - Somnath

  • How to calculate initial stock on hand based on VALUATED, not category group

    Dear Expert,
    Now, I want to run SNP with the initial stock on hand calculated based on VALUATED stock, not on a category group like CC or CI, because in ECC we use a lot of valuation types for materials.
    Please show us how.
    Thanks so much,
    hungth

    Hi,
    1) You need to answer the question of whether you really need this 200 quantity in APO or not. Is your requirement only relevant to SNP, or do you not want this 200 stock in APO even for GATP, PPDS, etc.? If you don't want this 200 quantity in APO at all, then what I said earlier would be applicable. You could try an enhancement in CIF so that valuation type AKDTCLT1 (the 200 quantity) gets excluded in CIF and doesn't reach APO. I am not really sure how simple or complex this would be, but it should most likely be possible.
    2) Once the stock is in APO, are you able to distinguish between the 2 stocks in any way? As far as I know, it won't be possible. You can see the version (the batch from R/3) and the stock category, but I am not sure whether valuation type information is available in any form in APO (though maybe I am wrong). Even if you were somehow linking the stock valuation type to the storage location stock, I think the only possibility for you would have been to block storage-location-specific stock from coming to APO in CIF (for ALL storage locations), but the stock at plant level would still come to APO.
    If somehow you are able to distinguish the stock based on valuation type, then Ada's solution could be used.
    Thanks - Pawan

  • Can I work backward from APO CIF function module to ECC Integration Model

    On our APO box, I have activated the "new-style" BAdI which is called by the CIF function module /SAPAPO/CIF_TLANE_INBOUND:
    CALL BADI lr_ref_cif_enhance->inbound_tlane
    How can I find out what the business analysts have to do in their ECC Integration Model
    to make sure that the CIF_TLANE_INBOUND function module is called when the Integration Model is activated (so that the BAdI will also fire)?
    Our current Integration Model does fire the BAdI /SAPAPO/IF_EX_TR_TRANSFER, but not the "enhance inbound tlane" BAdI called by CIF_TLANE_INBOUND.
    So, to restate the question:
    How do I figure out what I have to tell the business analysts to do in their Integration Model to make sure that CIF_TLANE_INBOUND is called when the model is activated?

    Hi David,
    I was away in a long meeting and just came back. A quick check in another R/3 (Enterprise) system showed the following:
    There is no FM CIF_TLANE_SEND in the R/3 Enterprise box, and no
    /SAPAPO/CIF_TLANE_INBOUND in APO (SCM 4.1). This is not surprising, especially when I check the other system versions on which I based my initial research: that was ECC 6.0 with Enterprise Extension DFPS switched on (checked in SFW5), with SCM 5.0 on the APO side.
    Luckily I checked the internal sandbox systems instead of my client systems, otherwise I would have posted that there is no FM CIF_TLANE_SEND at all.
    Forget the SD Scheduling Agreement object I mentioned in the previous reply. That master data object will not trigger FM CIF_TLANE_SEND. I found the correct FMs for it: CIF_SDLS_SEND on the R/3 side and /SAPAPO/CIF_TPSCO_INBOUND on the APO side.
    So you necessarily have to use the BAdI or user exit enhancements for TPSRC, NOT TLANE.
    In any case, if it is a Contract, Scheduling Agreement, or Purchasing Info Record master in R/3 (or any ECC version), then TPSRC is the FM used to create External Procurement Relationships (transaction /SAPAPO/PWBSRC1) in APO during CIF transfer. A corresponding Transportation Lane (transaction /SAPAPO/SCC_TL1) also gets created when the Procurement Relationship is added in APO. Otherwise you can always create a transportation lane manually in APO; it is an important master data object for External Procurement Order/Requisition creation in APO.
    Hope this helps. I am logging off for today, so no further digging. But thanks to you I discovered another small "blackbox" of SAP. I will blog on this some time later; until then you can take a look at the [CIF Wiki|https://wiki.sdn.sap.com/wiki/display/SCM/CIF] page.
    Best regards,
    Somnath

  • CIF integration with APO and ERP

    Dear Experts,
    We have a requirement to set up a link between an SCM 7.0 (APO) system and an ERP 6.4 system in order to use the Core Interface (CIF).
    I would appreciate it if you could kindly guide me on the initial Basis configuration activities needed to enable this CIF integration.
    Is there a link with step-by-step instructions for the initial CIF configuration from the Basis point of view?
    Thanks a lot in advance.
    Thanks and Regards
    Kushan

    Hello Kushan,
    please review the documents at SAP links:
    service.sap.com/scm -> Best Practices for Solution Operations <on the right > -> "Manage APO Core Interface in SAP APO (3.x) / SAP SCM (4.x, 5.x)" < ID 030 >
    service.sap.com/scm ->  Technology <on the left > -> "Integration overview"
    Regards, Natalia Khlopina

  • More than one CIF integration model possible for material?

    Hi Gurus,
    we want to have two different integration models (with non-overlapping selections) for materials so that we can maintain separate selections, but we cannot make it work.
    When we create them the first time and then activate them, everything works fine: both models send their products to APO in the initial load. However, once there are changes to products in both models, generating and then activating one of the models causes the changes in the other one to be lost: the generation/activation of the second model does not send any product to APO even though there were changes in the master data (and we see the ALE change pointers marked as processed in the table).
    Is there any workaround for this issue in a system without CIF change pointers in BDCP2? The ECC system is below the relevant releases and will not be upgraded in the near future.
    We could not find any mention of a one-model limitation in either OSS notes or the help pages.
    thanks a lot,
    Pablo

    Pablo,
    It is common to have multiple Integration Models for Materials.
    I have never heard of table BDCP2, and I have never seen the problem you describe.
    I usually prefer not to use change pointers at all for master data such as materials. You can alter this behavior in CFC9 and instead use Business Transaction Events (BTEs). This means that all fields relevant for 'change transfer' to SCM will move across almost immediately after the changed material master is saved.
    http://help.sap.com/saphelp_scm70/helpdata/EN/c8/cece3be9cd4432e10000000a11402f/frameset.htm
    Also read the links contained in this page.
    If you wish to actually perform a new 'initial load', then run the Integration model through program RIMODINI.  It may be a lengthy run.
    As always, check first in your Qual system before committing to production.
    Best Regards,
    DB49

  • Delta CIF of Material Master

    Hi Experts,
    We have done some enhancement to transfer the material division to APO through CIF via an extra tab. The enhancement was added in the CIFMAT001 user exit. We have no issue during the initial CIF transfer of the material master, but during the delta CIF transfer the changes are not reflected or carried over to APO. It seems that the user exit is not called during the delta transfer. Is there any way to have the changes move to APO with the delta transfer for the material master?
    Thanks
    Regards,
    Lhon
    Edited by: lhon26 on Feb 14, 2012 9:17 AM

    Hi Lhon,
    Usage of the exit is a little different between the initial and the delta transfer; I also faced an issue with this once.
    The changes needed in the code are quite technical, and I am not sure what our ABAP developer did in our case so that the enhancement became active for both the initial transfer and the delta change scenario.
    If you talk to a good ABAPer who has worked on APO-related enhancements, he should be able to help.
    Thanks - Pawan

  • CIF queue monitoring and alerting

    Hi Gurus,
    Is there any way to have automatic monitoring of the number of CIF queues, i.e. if the number of CIF queues goes beyond a certain threshold, I should get an email alert?
    Alternatively, an alert configuration based on the number of entries in SMQ1 or SMQ2 would be great.
    Thanks in advance ...
    Regards,
    Surendra

    Hi Surendra,
    I believe your APO Basis team might have an answer to this question.
    In case you are talking about post-processing errors, you can use transaction /SAPAPO/CPPA. Even this will not tell you the number of post-processing queues, but it can tell you the number of users that initiated the queues. This transaction can send an email, and depending on the settings it sends a number of messages equal to the number of queue initiators.
    Otherwise, as already suggested, writing a report would be the option.
    Cheers,
    KC.
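
    If you end up writing your own report, the alerting side is simple once you have the counts. The sketch below (Python, for illustration; a real solution would likely be an ABAP report scheduled in batch) shows only the threshold-and-alert logic. How you obtain the per-queue entry counts from SMQ1/SMQ2 is an assumption left open here:

    ```python
    # Illustration: decide whether to raise an email alert based on per-queue
    # entry counts. The counts dict is assumed to come from the SAP system
    # (e.g. a custom report over the qRFC queues); obtaining it is not shown.

    THRESHOLD = 1000  # assumed limit: alert once this many entries pile up

    def build_alert(queue_counts, threshold=THRESHOLD):
        """Return an alert message if any queue exceeds the threshold, else None."""
        over = {q: n for q, n in queue_counts.items() if n > threshold}
        if not over:
            return None
        lines = [f"{q}: {n} entries" for q, n in sorted(over.items())]
        return "CIF queue alert:\n" + "\n".join(lines)

    msg = build_alert({"CFSLS_0001": 1500, "CFSTK_0001": 200})
    if msg:
        # here you would hand the message to your mail gateway
        print(msg)
    ```

    The message string would then be passed to whatever mail mechanism your landscape uses (SAPconnect on the ABAP side, or an SMTP relay from an external monitor).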

  • CIF Queue naming

    Hi,
    Can anyone point me to a link where I could find the naming conventions for CIF queues (e.g. CFSTK* = stocks queue, CFSLS* = sales orders queue, etc.)?
    Ashutosh Kulkarni

    Hi Ashutosh,
    Here are a few of them, with their naming conventions:
    Initial Data Transfer                                        CF_ADC_LOA
    Stocks                                                       CFSTK*
    Purchase Orders and Purchase Requisitions                    CFPO*
    Planned Orders/Production Orders                             CFPLO*
    Sales Orders                                                 CFSLS*
    Manual Reservations                                          CFRSV*
    Confirmations                                                CFCNF*
    Planned Independent Requirements                             CFPIR*
    Requirements Reduction of Planned Independent Requirements   CFMAT*
    Production Campaigns                                         CFPCM*
    Master Data for Classes                                      CFCLA*
    Master Data for Characteristics                              CFCHR*
    Shipments                                                    CFSHP*
    Planning Tables                                              CFCUVT*
    Hope this is what you require.
    Pramod
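
    For reference, the prefix conventions above can be encoded in a small lookup, e.g. to classify queue names seen in SMQ1/SMQ2. This is just an illustrative Python sketch built from Pramod's list; the prefixes are taken verbatim from it:

    ```python
    # Map CIF queue name prefixes (from the list above) to their object types.
    QUEUE_PREFIXES = {
        "CF_ADC_LOA": "Initial data transfer",
        "CFCUVT": "Planning tables",
        "CFSTK": "Stocks",
        "CFPO": "Purchase orders and purchase requisitions",
        "CFPLO": "Planned orders/production orders",
        "CFSLS": "Sales orders",
        "CFRSV": "Manual reservations",
        "CFCNF": "Confirmations",
        "CFPIR": "Planned independent requirements",
        "CFMAT": "Requirements reduction",
        "CFPCM": "Production campaigns",
        "CFCLA": "Master data for classes",
        "CFCHR": "Master data for characteristics",
        "CFSHP": "Shipments",
    }

    def classify_queue(name):
        """Map a concrete queue name to its object type via longest-prefix match."""
        for prefix in sorted(QUEUE_PREFIXES, key=len, reverse=True):
            if name.startswith(prefix):
                return QUEUE_PREFIXES[prefix]
        return "Unknown"

    print(classify_queue("CFSLS0000123"))  # prints: Sales orders
    ```

    Longest-prefix matching is used so that a longer, more specific prefix always wins over a shorter one that happens to match too.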

  • CIF queue type basic question

    Hi,
    I have one basic question regarding CIF.
    We are assigning queue type to scm system at two places.
    1. In the SCM system, with transaction /SAPAPO/C2 (Assign Logical System to Business System Group)
    2. In the ECC system, with transaction CFC1 (Define Target System Operation Mode and Queue Type)
    Is it required that both of the above settings have the same value?
    If they have different values, what is the impact?
    Why is it required to assign a queue type for the SCM system in both places, but not for ECC?
    Regards,
    Santosh

    Hi Santosh,
    let me try to explain the customizing settings:
    /SAPAPO/C2 (APO): This transaction is more comprehensive than CFC1 on the ECC side (as you know).
    First you have to define the queue type of the connected ECC system (or systems). For this you use the logical system names of the ECC systems; the settings are similar to CFC1, but they are used for the SCM -> ECC transfer direction.
    As you know, you also have to define an entry for your SCM system, which looks like the entry in CFC1 and therefore seems superfluous. The reason is that the error handling can be defined here: you can activate "postprocessing" for your SCM system and/or your ECC system. The CIF framework checks this entry and puts faulty queues into postprocessing (transaction /SAPAPO/CFP) or as SYSFAIL into SMQ1/SMQ2. It is therefore important that you choose the same queue type here as you did in CFC1 for your SCM system.
    CFC1 (ECC): Here you assign the logical system name of the SCM system that is connected to your ECC, and you define the queue type (I = INBOUND, initial setting = OUTBOUND). This setting is used for the ECC -> SCM transfer direction. It is possible (not recommended!!!
    Stefan
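
    Stefan's point that the SCM system's own entry in /SAPAPO/C2 must carry the same queue type as its entry in CFC1 can be checked mechanically. A small sketch of such a consistency check (Python, illustration only; it assumes you have already extracted the entries from both systems by hand or via a custom report):

    ```python
    # Compare queue type settings per logical system: the dict keys are logical
    # system names, the values are queue types ("I" = inbound, etc.).
    # How the two dicts are obtained from /SAPAPO/C2 and CFC1 is an assumption.

    def check_queue_type_consistency(c2_entries, cfc1_entries):
        """Return the logical systems whose queue types disagree between the two sides."""
        mismatches = []
        for logsys, qtype in c2_entries.items():
            if logsys in cfc1_entries and cfc1_entries[logsys] != qtype:
                mismatches.append(logsys)
        return mismatches

    c2 = {"SCMCLNT001": "I"}    # SCM system's own entry in /SAPAPO/C2
    cfc1 = {"SCMCLNT001": "I"}  # entry for the SCM target system in CFC1
    print(check_queue_type_consistency(c2, cfc1))  # prints: []
    ```

    An empty result means the settings agree; any name in the list flags a mismatch that, per the explanation above, could break the error handling for faulty queues.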
