CIF questions

Hi,
       I have a couple of questions:
1. I created an integration model whose selection takes all materials in plant P1. The model had about 4,000 materials when I generated and activated it.
Now a few more materials have been created, but when I check the model in CFM2 it still shows 4,000 materials, without the new ones included. When I generated the model again in CFM1, it contained the 4,020 materials, and when I activate it, a new model is displayed.
In a production system, do I need to keep regenerating the model every month or so to keep it up to date? Please let me know the best practice.
2. When I transfer a material to APO, I go to /SAPAPO/MAT1 and check whether the material is assigned to a model. I see the 000 assignment, which is good. But when I click "choose planning version" and enter 000, it says "No simulation parameters could be set for the active version"; the same works fine when I select the inactive version 001.
Should this work for 000?
Thanks.

1. Ideally you should regenerate the models periodically - there are standard programs that do this (for example, RIMODGEN to generate and RIMODACT to activate; a scheduling sketch follows below). Set the model up with broad coverage of products and locations and let the jobs run periodically.
2. What comes through to MAT1 is version-independent and has model 000 assigned to it. Because 000 is the active planning version, you cannot change the parameters there that you can change for a simulation version. To maintain them for simulation versions, you need to do a version copy of the master data.
(The model contains only master data, whereas the planning version contains master data and transaction data.)
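
A minimal sketch of such a periodic refresh job, assuming a saved variant ZALL_MAT_P1 (an illustrative name) exists for both the generation program RIMODGEN and the activation program RIMODACT:

* Z_CIF_MODEL_REFRESH - wrapper for a periodic background job.
* Step 1 regenerates the integration model from its selection so that
* newly created materials are picked up; step 2 activates the result.
REPORT z_cif_model_refresh.
SUBMIT rimodgen USING SELECTION-SET 'ZALL_MAT_P1' AND RETURN. " generate
SUBMIT rimodact USING SELECTION-SET 'ZALL_MAT_P1' AND RETURN. " activate

In practice you can simply schedule the two standard programs as consecutive steps of one background job in SM36; the wrapper only makes the ordering explicit.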

Similar Messages

  • SAP ECC and SCM 5.0, CIF question

    I have set up ECC to connect to SCM 5.0 via CIF, and have tested the RFC connections. The test results say it's working.
    Now what are the next steps? liveCache? I have a material in APO which needs to talk to liveCache to withdraw some data. Can someone give me some direction on the steps to follow after the SCM installation? Thanks!

    Mon_Ami,
    It depends upon what business requirement you are trying to fulfill. You say you want to withdraw some data from liveCache. What data? Why?
    Do you want to forecast your company's demand? Do you want to create supply elements? Do you want to perform availability checks? Do you want help in determining the best planning solution for a planning problem, based on constraints that you define? Or...?
    There are many business requirements that can use SCM as a solution. Normally, one doesn't implement SCM without having a concrete business requirement. What is yours?
    Best Regards,
    DB49

  • CIF queue type basic question

    Hi,
    I have one basic question regarding CIF.
    We are assigning a queue type for the SCM system in two places:
    1. In the SCM system with transaction /SAPAPO/C2 (Assign Logical System to Business System Group).
    2. In the ECC system with transaction CFC1 (Define Target System Operation Mode and Queue Type).
    Is it required that both settings have the same value?
    If they can have different values, what is the impact?
    Why does the queue type for the SCM system need to be assigned in both places, but not for ECC?
    Regards,
    Santosh

    Hi Santosh,
    let me try to explain the customizing settings:
    /SAPAPO/C2 (APO): this transaction is more comprehensive than CFC1 on the ECC side (as you know).
    First you have to define the queue type of the connected ECC system (or systems). For this you use the logical system names of the ECC systems; the settings are similar to CFC1, but they apply to the SCM->ECC transfer.
    As you know, you also have to define an entry for your SCM system, which looks like the entry in CFC1 and therefore seems superfluous. The reason is that the error handling is defined here: you can activate postprocessing for your SCM system and/or your ECC system. The CIF framework checks this entry and puts faulty queues into postprocessing (transaction /SAPAPO/CFP) or leaves them as SYSFAIL in SMQ1/SMQ2. It is therefore important that you choose the same queue type for your SCM system as you did in CFC1.
    CFC1 (ECC): here you assign the logical system name of the SCM system that is connected to your ECC system, and you define the queue type (I = inbound queues, initial = outbound queues). This setting is used for the ECC->SCM transfer. It is possible (though not recommended!) to use different queue types, since each setting controls only its own transfer direction.
    Stefan

  • Question about accessing CIFS with LUM user (permissions)

    Hi there,
    I have a question related to accessing a CIFS mount through a user on a Linux box.
    First of all, it's a system based on OES11 SP2 & SLES 11 SP3. I have a CIFS mount
    in /media/nss, with Novell CIFS, coming from an NSS filesystem. This mount is mounted
    with a username/password with a password policy, etc. I have four LUM-enabled users
    on the Linux box which should access the CIFS mount, but I get a permission denied.
    I have set trustees for the primary group of the 4 LUM-enabled users, and I have also
    added permissions for a group on the NSS volume and added this group to the membership
    of the users, but it doesn't work.
    I guess I'm missing something or doing something wrong. Could anybody give me a pointer?
    Thanks!

    Antoniogutierrez,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Question on use of shared memory objects during CIF executions

    We have a CIF that runs in the background via program RIMODACT, which is invoked from our external job scheduler. (The scheduler kicks off a job - call it CIFJOB - and the first step of this job executes RIMODACT.)
    During the execution of RIMODACT, we call a BAdI (an implementation of SMOD_APOCF005).
    In the method of this BAdI, we load some data into a shared memory object each time the BAdI is called. (We create this shared memory object the first time the BAdI is called.)
    After program RIMODACT finishes, the second step of CIFJOB calls a wrapper program that calls two APO BAPIs.
    Will the shared memory object be available to these BAPIs?
    The reason I'm asking is that the BAPIs execute on the APO app server, but the shared memory object was created in a CIF exit called from a program executing on the ECC server (RIMODACT).
    Edited by: David Halitsky on Feb 20, 2008 3:56 PM

    I know what you're saying, but it doesn't apply in this case (I think).
    The critical point is that we can tie the batch job to one ECC app server. In the first step of this job (the one that executes RIMODACT to do the CIF), we build the itab as an attribute of the "root" shared memory object class.
    In the second step of the batch job, we attach to the root class we built in the first step, extract some data from it, and pass these data to a BAPI that we call on the APO server. (This is what I meant by a "true" RFC - the APO BAPI on the APO server is being called from a program on the ECC server.)
    So the APO BAPI never needs access to the ECC shared memory object - it gets its data passed in from a program on the ECC server that does have access to the shared memory object.
    Restated this way, is the solution correct?
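    To illustrate the two-step pattern described above, a minimal sketch of the second job step under stated assumptions: ZCL_CIF_AREA is a hypothetical area class generated via transaction SHMA, its root object exposes an internal table GT_DATA of a (hypothetical) table type ZTT_CIF_DATA, and 'BAPI_XYZ' / 'APOCLNT800' stand in for the real APO BAPI and RFC destination. The step must run on the same application server that built the area in step 1:
    " Step 2 of CIFJOB (sketch): read the shared memory built in step 1 and
    " pass the data to the APO BAPI via RFC. All Z* names, BAPI_XYZ and the
    " destination APOCLNT800 are illustrative.
    REPORT z_cifjob_step2.
    DATA: lo_area TYPE REF TO zcl_cif_area, " SHMA-generated area class (assumed)
          lt_data TYPE ztt_cif_data.        " table type of the root attribute
    TRY.
        lo_area = zcl_cif_area=>attach_for_read( ). " same app server as step 1
        lt_data = lo_area->root->gt_data.           " copy out of shared memory
        lo_area->detach( ).
      CATCH cx_shm_attach_error.
        MESSAGE 'Shared memory area not found on this server' TYPE 'E'.
    ENDTRY.
    CALL FUNCTION 'BAPI_XYZ' DESTINATION 'APOCLNT800' " the "true" RFC into APO
      TABLES
        it_data = lt_data.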

  • CIF of master data

    Hi,
    My question relates to the CIF of master data.
    I built an integration model to transfer materials from R/3 to APO; the delta changes are then CIFed by a background process at night.
    I would like to transfer all the delta master data, with one exception:
    I do not want to transfer the safety stock delta changes from R/3 to APO, so that I can maintain these values independently in APO. The issue is that the safety stock values in R/3 are also changed for another purpose, and after the delta CIF transfer the R/3 values overwrite the values that I upload directly in APO.
    Is there any way I can CIF the delta master data changes from R/3 while excluding the delta safety stock changes?
    Thanks a lot!

    Yes, it is possible. You have to add custom logic in the material CIF user exit to switch off the update of the safety stock fields in APO (a sketch follows below). There are actually a lot of threads in this forum that discuss this issue.
    You can start by referring to the thread below
    Re: CIF problem with Dynamic and Static Stock Method
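    One common variant of this, sketched under loud assumptions: intercept the data on the APO inbound side (for example in an implementation of the product inbound BAdI SMOD_APOCF005 mentioned in the shared-memory thread above) and write the safety stock already stored in APO back over the incoming R/3 value. The structure name /SAPAPO/CIF_MATLOC and the field names SAFTY, MATNR and LOCNO are assumptions - verify them against the BAdI signature and the tables /SAPAPO/MATKEY, /SAPAPO/LOC and /SAPAPO/MATLOC in your release:
    " Hypothetical helper that a SMOD_APOCF005 implementation could call per
    " inbound location product: discard the safety stock sent from R/3 and
    " keep the value currently stored in APO instead.
    FORM keep_apo_safety_stock CHANGING cs_matloc TYPE /sapapo/cif_matloc.
      DATA: lv_matid TYPE /sapapo/matid,
            lv_locid TYPE /sapapo/locid.
      " Map the external keys to the internal GUID keys (field names assumed)
      SELECT SINGLE matid FROM /sapapo/matkey INTO lv_matid
        WHERE matnr = cs_matloc-matnr.
      SELECT SINGLE locid FROM /sapapo/loc INTO lv_locid
        WHERE locno = cs_matloc-locno.
      " Overwrite the inbound value with what is already maintained in APO
      SELECT SINGLE safty FROM /sapapo/matloc INTO cs_matloc-safty
        WHERE matid = lv_matid
          AND locid = lv_locid.
    ENDFORM.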

  • How do I connect to a network with CIFS

    I'm looking to buy a Netgear NAS drive, and I have an issue trying to figure out how I'm going to connect to it.
    I have Macs running 10.6 and 10.8, and according to the manufacturer the only way to connect is using the CIFS protocol.
    Does anyone have experience and knowledge to share on this kind of setup, and is this the best way to set up a home NAS system?
    The other question I have: I read that a CIFS-configured network can only handle file sizes under 4 GB. Is this true?

    Yes, 4 GB is a common file size limit on CIFS/SMB NAS drives (strictly speaking, it usually comes from the drive's FAT32 formatting or an older SMB version rather than from the protocol as such).
    Once you know the IP, use something like:
    smb://10.0.1.5
    smb://192.168.1.3
    cifs://10.0.1.5
    cifs://192.168.1.3
    I'd avoid a NAS drive despite the rage; I'd opt for a Mac-formatted external drive.

  • Questions before an internal lab POC (on old, underperforming hardware)

    Hello VDI users,
    NB: I initially asked this list of questions to my friends at the
    SunRay-Users mailing list, but then I found this forum as a
    more relevant place.
    It's a bit long and I understand that many questions may have
    already been answered in detail on the forum or VDI wiki. If it's
    not too much a burden - please just reply with a link in this case.
    I'd like this thread to become a reference of sorts to point to our
    management and customers.
    I'm growing an interest to try out a Sun VDI POC in our lab
    with VirtualBox and Sunrays, so I have a number of questions
    popping up. Not all of them are sunray-specific (in fact, most
    are VirtualBox-related), but I humbly hope you won't all flame
    me for that?
    I think I can get most of the answers by experiment, but if
    anyone feels like sharing their experience on these matters
    so I can expect something a priori - you're welcome to do so ;)
    Some questions involve best practices, however. I understand
    that all mileages vary, but perhaps you can warn me (and others)
    about some known-not-working configurations...
    1) VDI core involves a replicated database (such as MySQL)
    for redundant configuration of its working set storage...
    1.1) What is the typical load on this database? Should it
    just be "available", or should it also have a high mark
    in performance?
    For example, we have a number of old Sun Netra servers
    (UltraSPARC-II 450-650MHz) which even have a shared SCSI
    JBOD (Sun StorEdge S1 with up to 3 disks).
    Would these old horses plow the field well? (Some of them
    do run our SRSS/uttsc tasks okay)
    1.2) This idea seems crippled a bit anyway - if the database
    master node goes down, it seems (but I may be wrong) that
    one of the slave DB nodes should be promoted to a master
    status, and then when the master goes up, their statuses
    should be sorted out again.
    Or the master should be made HA in shared-storage cluster.
    (Wonder question) Why didn't they use Sun DSEE or OpenDS
    with built-in multimaster replication instead?
    2) The documentation I've seen refers to a specific version
    of VirtualBox - 2.0.8, as the supported platform for VDI3.
    It was implied that there were specific features in that
    build made for Sun VDI 3 to work with it. Or so I got it.
    2.1) A few versions rolled out since that one, 3.0 is out
    now. Will they work together okay, work but as unsupported
    config, not work at all?
    2.2) If specifically VirtualBox 2.0.8 is to be used, is there
    some secret build available, or the one from Old Versions
    download page will do?
    3) How much a bad idea is it to roll out a POC deployment
    (up to 10 virtual desktop machines) with VirtualBox VMs
    running on the same server which contains their data
    (such as Sun Fire X4500 with snv_114 or newer)?
    3.1) If this is possible at all, and if the VM data is a
    (cloned) ZFS dataset/volume, should a networked protocol
    (iSCSI) be used for VM data access anyway, or is it
    possible (better?) to use local disk access methods?
    3.2) Is it possible to do a POC deployment (forfeiting such
    features as failover, scalability, etc.) on a single
    machine altogether?
    3.3) Is it feasible to extend a single-machine deployment
    to a multiple-machine deployment in the future (that
    is, without reinstalling/reconfiguring from scratch)?
    4) Does VBox RDP server and its VDI interaction with SRSS
    have any specific benefits to native Windows RDP, such
    as responsiveness, bandwidth, features (say, microphone
    input)?
    Am I correct to say that VBox RDP server enables the
    touted 3D acceleration (OpenGL 2.0 and DX8/9), and lets
    connections over RDP to any VM BIOS and OSes, not just
    Windows ones?
    4.1) Does the presence of a graphics accelerator card on
    the VirtualBox server matter for remote use of the VM's
    (such as through a Sun Ray and VDI)?
    4.2) Concerning the microphone input, as the question often
    asked for SRSS+uttsc and replied by RDP protocol limits...
    Is it possible to pass the audio over some virtualized
    device for the virtual machine? Is it (not) implemented
    already? ;)
    5) Are there known DO's and DONT's for VM desktop workloads?
    For example, simply office productivity software users
    and software developers with Java IDEs or ongoing C/C++
    compilations should have different RAM/disk footprints.
    Graphics designers heavy on Adobe Photoshop are another
    breed (which we've seen to crawl miserably in Windows
    RDP regardless of win/mstsc or srss/uttsc clients).
    Can it be predicted that some class of desktops can
    virtualize well and others should remain "physical"?
    NB: I guess this is a double-question - on virtualization
    of remote desktop tasks over X11/RDP/ALP (graphics bound),
    as well as a question on virtualization of whole desktop
    machines (IO/RAM/CPU bound).
    6) Are there any rule-of-thumb values for virtualized
    HDD and networking filesystems (NFS, CIFS) throughput?
    (I've seen the sizing guides on VDI Wiki; anything else
    to consider?)
    For example, the particular users' data (their roaming
    profiles, etc.) should be provisioned off the networked
    storage server, temporary files (browser caches, etc.)
    should only exist in the virtual machine, and home dirs
    with working files may better be served off the network
    share altogether.
    I wonder how well this idea works in real life?
    In particular, how well does a virtualized networked
    or "local" homedir work for typical software compile
    tasks (r/w access to many small files)?
    7) I'm also interested in the scenario of VMs spawned
    from "golden image" and destroyed after logout and/or
    manually (i.e. after "golden image"'s update/patching).
    It would be interesting to enable the cloned machine
    to get an individual hostname, join the Windows domain
    (if applicable), promote the user's login to the VM's
    local Administrators group or assign RBAC profiles or
    sudoer permissions, perhaps download the user's domain
    roaming profile - all prior to the first login on this
    VM...
    Is there a way to pass some specific parameters to the
    VM cloning method (i.e. the user's login name, machine's
    hostname and VM's OS)?
    If not, perhaps there are some best-practice suggestions
    on similar provisioning of cloned hosts during first boot
    (this problem is not as new as VDI, anyways)?
    8) How great is the overhead (quantitative or subjective)
    of VM desktops overall (if more specific than values in
    sizing guide on Wiki)? I've already asked on HDD/networking
    above. Other aspects involve:
    How much more RAM does a VM-executing process typically
    use than is configured for the VM? In the JavaOne demo
    webinar screenshots I think I've seen a Windows 7 guest
    with 512 MB RAM, and a VM process sized at about 575 MB.
    The Wiki suggests 1.2 times more. Is that a typical value?
    Are there "hidden costs" in other VBox processes?
    How efficiently is the CPU emulated/provided (if the VBox
    host has the relevant VT-x extensions), especially for
    such CPU-intensive tasks as compilation?
    *) Question from our bookkeeping team:
    Does creating such a POC lab and testing it in office's
    daily work (placing some employees or guests in front of
    virtual desktops instead of real computers or SR Solaris
    desktops) violate some licenses for Sun VDI, VirtualBox,
    Sun Rays, Sun SGD, Solaris, etc? (The SRSS and SSGD are
    licensed; Solaris is, I guess, licensed by the download
    form asking for how many hosts we have).
    Since all of the products involved (sans SGD) don't need
    a proof of license to install and run, and they can be
    downloaded somewhat freely (after quickly clicking thru
    the tomes of license agreements), it's hard for a mere
    admin to answer such questions ;)
    If there are some limits (# of users, connections, VMs,
    CPUs, days of use, whatever) which differentiate a legal
    deployment for demo (or even legal for day-to-day work)
    from a pirated abuse - please let me know.
    //Jim
    Edited by: JimKlimov on Jul 7, 2009 10:59 AM
    Added licensing question


  • CIF interface: Error transferring material plant

    Hello Experts,
    I am trying to transfer a plant material across the CIF interface. The problem is that when I activate the integration model, it fails. The error message is exactly this: "Location does not exist for external location 1790 , type 1040, and BSG BS"
    Any idea what the reason could be?
    Many thanks
    Aban

    Hello Aparna,
    I created a new IM and am trying to CIF only the location, but this time I get another error message complaining that location 1790 of type 1002 already exists:
    "Text:        Location 1790 of type 1002 (BSG BSG1) already exist"
    I checked the table /SAPAPO/LOC in APO and realized that location 1790 exists as type 1001. Now I have changed it and would like to CIF it as type 1002.
    How should I proceed if I change the location type from 1001 to 1002?
    Another question: I tried to debug the CIF transfer by implementing an endless loop in include ZXCIFU08 (enhancement CIFLOC01), but this seems to have no effect. Is this the right enhancement (on the ECC side)?
    Thank you for your help
    Aban
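    On the debugging question: the endless-loop trick only helps if the exit include is actually reached for the transfer in question. The usual pattern is a spin loop that you break into via SM50 (Administration -> Program -> Debugging); a minimal sketch, with LV_EXIT as the only illustrative name:
    " Temporary debugging aid inside the user exit include (e.g. ZXCIFU08).
    " Retrigger the transfer, find the looping work process in SM50, choose
    " Administration -> Program -> Debugging, set LV_EXIT to 'X' in the
    " debugger and continue. Remove before transporting!
    DATA lv_exit TYPE abap_bool VALUE abap_false.
    WHILE lv_exit = abap_false.
      " spin until a debugger attaches and flips lv_exit
    ENDWHILE.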

  • Users logging on via CIFS lose login capability

    Multiple NetWare 6.5 SP7 servers, with CIFS configured to offer Windows
    shares to computers that do not use the Novell Client.
    For years now, I've been able to set up XP Pro x64, Vista Business (x86 &
    x64) and Windows 7 (x86 & x64) systems so that they log in and map drives
    via the "Net Use" command. For instance, if I have a Netware server located
    at 10.10.20.5, it has a share named M_Drive and user BOB has been set up
    using iManager so that he has access to said server and folder, I could map
    a drive on one of the aforementioned systems by using the following command
    Net Use M: \\10.10.20.5\M_Drive /u:bob {bob's password}
    Running that command would result in the appearance of an M drive on the
    workstation with Bob's access rights.
    If you were to go to the Netware server that Bob just logged into and go to
    the connections screen for the Monitor NLM (or access connections via Remote
    Desktop) you would see that user BOB had a connection; however, no
    information regarding the IP address that BOB was connected from would be
    available. This connection would be the type that isn't preceded by a "*".
    eg the connection is listed as "BOB.CONTEXT" as opposed to "*BOB.CONTEXT"
    Anyways, starting yesterday, random users on my network are losing the
    ability to connect using the aforementioned method. Bob will try to log in
    and get an error message. Trying to access \\10.10.20.5 from Windows
    Explorer returns an error. If I check Connections, I find that there are
    multiple "*BOB.CONTEXT" connections, but no "BOB.CONTEXT" one. Clearing the
    "*" connections will have no effect in permitting BOB to connect.
    If I go to other machines, even ones with the Novell Client, and try and
    login as BOB or use the aforementioned drive mapping technique, it fails.
    If on a machine that won't allow BOB to connect to the Novell server I use
    another user's name and password, the connection works.
    Thus, the problem is at the Novell server and is particular to a given user.
    The "problem" user will change from day to day.
    If several hours later I try logging in as my problem user, it works.
    1) Anyone encountered a problem like this one?
    2) Is there a setting within Netware 6.5 sp7 which controls the length of
    time that a disconnected user's connection is kept "live"? If the system
    "naturally" eventually releases these problem users, maybe I can do
    something to make the release happen earlier.
    3) Does performing a DSRepair sound like a good idea?
    4) Is the Netware 6.5 sp8 update still available? Wondering if installing
    that might resolve the problem.
    I look forward to your response.

    On Fri, 14 May 2010 17:56:18 +0000, Phillip Armitage wrote:
    > 3) Does performing a DSRepair sound
    > like a good idea?
    No. You've not, so far, provided any information that points to a problem
    in eDirectory.
    > 4) Is the Netware 6.5 sp8 update still available?
    Yes.
    > Wondering if installing that might resolve the problem.
    It might. I'd certainly try it, and the post-sp8 updates, before spending
    much more time troubleshooting this.
    David Gersic dgersic_@_niu.edu
    Novell Knowledge Partner http://forums.novell.com
    Please post questions in the newsgroups. No support provided via email.

  • Purchase order CIF problem

    Hi APO experts
    For POs: I added the Inco terms field to the custom structure CIFPUORCUS in the user exit of enhancement CIFPUR01 (no other fields were changed). When I change only the Inco terms and CIF the PO, the Inco terms are not updated in the APO system.
    If I change the quantity, however, the PO data is updated in APO.
    Please let me know the best solution.
    Thanks
    Kanth

    Hi Vikas,
    For the Inco terms field: in user exit EXIT_SAPLMEAP_001 (the enhancement name is CIFPUR01) we can add custom fields to the structure CIFPUORCUS, and I added the custom fields in both systems.
    My question: if I change only the Inco terms, the CIF transfer is not triggered, but if I change the quantity, it is triggered.
    Please let me know how to make the CIF trigger when I change only the Inco terms and no other field.
    Thanks,
    Kanth

  • An extra field which is not evaluated in APO -> R/3 via the CIF interface

    Hello all,
    in one of our projects, a question came up:
    Scenario:
    A customer sends us the requirements for weeks as an XML file. This XML file contains a field with a specific number for these requirements (each file has the same number). We send all the information to APO-BW, and then via the CIF interface into the R/3 system.
    So far so good, but now we have to send back a suggestion to the customer, including the quantity, the delivery date, and the specific number field. The quantity and the delivery date are no problem, but we lose the specific number. The reason is that our APO specialist said there is no way to bring the field from APO-BW via the CIF to R/3. But I couldn't believe it.
    So my question is: is it perhaps possible to put the number into a field which we can pass through to R/3?
    Thanks
    Kind regards
    Stephan

    Hi Stephan - I also agree that this is not standard functionality in SAP. However, you can move customer-specific data from APO into R/3 using the CIF with CIF enhancements. Most or all transactional data enhancements have extension tables where SAP customers can move non-SAP-defined data through the CIF (a generic sketch follows below). Look in customizing for the enhancements, and also identify the APO-BW tables where your data is stored. A developer should be able to take it from there. One thing to think about is where in R/3 you want to store this data - make sure there is a field available, or some logic to receive the data.
    Regards
    Andy
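    To make the extension-table pattern concrete, a hedged sketch: many CIF transaction data exits carry a generic extension table in the style of BAPIPAREX (a structure name plus flat value parts). Everything below - the parameter CT_EXTENSION, the discriminator ZCUST_NUMBER and the variable LV_CUST_NUMBER - is illustrative; the real extension table and exit depend on the transaction data type (see the CIFORD*/CIFPUR* enhancements in your system):
    " APO outbound CIF exit (sketch): append the customer-specific number to
    " the generic extension table so the R/3 inbound exit can read it.
    DATA ls_ext TYPE bapiparex. " generic container: STRUCTURE + VALUEPART1..4
    ls_ext-structure  = 'ZCUST_NUMBER'.  " discriminator for the receiver side
    ls_ext-valuepart1 = lv_cust_number.  " the value to carry across the CIF
    APPEND ls_ext TO ct_extension.       " assumed exit parameter
    " R/3 inbound CIF exit (sketch): pick the value up again and store it in
    " a field or custom table of your choice.
    READ TABLE ct_extension INTO ls_ext WITH KEY structure = 'ZCUST_NUMBER'.
    IF sy-subrc = 0.
      lv_cust_number = ls_ext-valuepart1.
    ENDIF.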

  • [SOLVED] Setting up Arch to read CIFS shares but no smb.conf

    I am trying to ACCESS CIFS shares from another computer. Other clients can access these shares without issue. I am NOT trying to host files from Arch.
    Reading I have done: the wiki pages on Samba and smbclient.
    The smbclient wiki page apparently states that only the package smbclient should be installed, which I did.
    But, following the smbclient wiki, it shows to issue:
    # smbclient -L <host>
    # smbclient //<host>/<share>
    Both error saying smb.conf cannot be read. BTW, is -L just to list the shares available from a given host?
    Questions:
    1. So, should an smb.conf be created in order to use smbclient? This package evidently does not install smb.conf in any form.
    2. Does the samba package need to be installed to use smbclient? samba does install an smb.conf.something.
    Any tips you have so I can read CIFS shares will be appreciated.
    Thanks
    steve.
    Last edited by stevepa (2012-11-30 03:22:49)

    Reporting my results at resolving my issues:
    1. Installing smbclient provides the described FTP-like environment for accessing CIFS shares. It works fine. I used
    $ smbclient //OMV/steve -Usteve%omv
    to access my share on the OMV server. I could list, get and put files. I still get the message about the missing smb.conf, but it works.
    2. Installing the gvfs-smb package allowed Thunar to display the shares. In my case, I press Ctrl-L, enter smb://OMV/steve, and the shared content displays perfectly! Click to remember the password, or it apparently does not work.
    Hope this helps someone.
    Steve.

  • Viewing CIF errors in SMQ2 - Technical - ABAP

    Hello All - I am CIFing master data from ECC to APO and I am getting an error. I was debugging the CIF from SMQ2 and I do not see any issue in the code, but I still get the error in SMQ2, and I cannot see the complete error message there because of the limited space in the Status field in SMQ2.
    My question: is there any way to read/view the entire message in qRFC? What is the transaction code? Or where can I see the CIF error messages in APO?
    Appreciate your help.
    Thanks,
    Venky.
    Edited by: GVP2011 on Mar 21, 2011 3:42 AM

    Hi Venky,
    You can see the CIF error messages in transaction /SAPAPO/C3 in APO and in CFG1 in ECC. There you can directly enter the external ID from SMQ2, or you can see all the error messages via the object "CIFSCM".
    I hope this will help you.
    Regards,
    Saurabh

  • CIF blocks with multiple R/3 systems

    Hi,
    We are on SCM V5, and we have two R/3 systems connected to one APO system. They are two different BSGs, with totally different integration models as well.
    We are using APO outbound queues for both the R/3 systems.
    The question is, does a CIF block from one R/3 system block the order from other R/3 system?
    Example:
    I have a planned order PO1 created in APO that is bound to go to R/3 system R1. This order is now stuck in SMQ1 in APO.
    I create another planned order PO2 that is bound to go to other R/3 system R2 from the same APO system.
    Does PO1 block PO2, or are they separate?
    I would appreciate if you can substantiate that with some reference.
    Thanks.
    Edited by: Raj G on Nov 5, 2008 8:24 AM

    I have not seen such a configuration myself of late, but based on theoretical knowledge, an outbound queue block to one R/3 system should not interfere with an outbound queue to the other R/3 system. The queues are RFC-destination specific, and integration models are also specific to a destination R/3 system. However, there may be elements that are common - mostly Basis-level settings. I am not sure whether those interfere at the application level.
