CUA configuration question

Hi guys,
I am in the process of "refreshing" our sandbox SRM 4.0 environment using R/3 as a backend. To allow for a realistic design where SRM is added to an existing R/3 infrastructure, I decided to remove the previous (incorrectly installed) CUA setup (SRM was the central system there) and to set up CUA again with the R/3 backend acting as the central system.
Everything is quite clear; I just have some trouble understanding the requirement to name the RFC destinations exactly like the logical systems.
In our environment, those RFC destinations already existed, using a logon with SAP_ALL privileges for the remote logon. The SAP documentation advises using newly created users with limited privileges, or adding the CUA roles/profiles (SAP_BC_USR_CUA_***) to the existing users.
This obviously doesn't make sense if the respective logon already has SAP_ALL.
So two questions arise:
1. What should be done for this concrete issue (e.g. not creating any additional users or roles and just sticking with the superuser as the RFC remote logon)?
2. What is the generally preferred design for RFC destinations? Does one create multiple RFC destinations to the same logical system, using a different user depending on the distribution model?
Thx in advance
Nick

Hello Bapujee,
You are certainly right. In fact I was rethinking it after I posted my answer; probably my explanation was not correct. It is definitely not a rule that the logical system name must be the same as the RFC destination name, though it is highly advisable and avoids any confusion. My answer to your second question will clarify this further.
Regarding your second question, where you pointed out that you did not understand my sentence, the answer is simple. Logical systems are largely used for data distribution between two SAP systems, and ALE distribution is an important mechanism for this. So let me explain it with the help of an ALE model. Let us assume your host system is ABC and that you have another SAP system XYZ. You can create any number of RFC destinations (XYZ, XYZ1, XYZ2, etc.) for system XYZ.
Now suppose the logical system for XYZ is XYZ. When we create an ALE model for data distribution between our system ABC and XYZ, we need to use the logical systems ABC and XYZ. Let's also assume the data flows from ABC to XYZ.
When you try to generate the partner profiles for the model view, SAP will by default look for an RFC destination named XYZ. If it finds it, it will generate the partner profiles successfully and allow you to distribute the ALE model view. If it does not find XYZ, it will not generate the partner profiles, and you then have to do it manually through WE20 and WE21, which is very tedious. You can try this out by creating a dummy logical system in SALE and then a dummy ALE model view in BD64; it will really help you understand the scenario. First create a logical system TEST and do not create an RFC destination TEST for it. In a second step, create the RFC destination TEST and check the result. In a third scenario, create another RFC destination TEST1 as a copy of TEST and check again.
One more very important aspect: every client of an SAP system should naturally have a logical system assigned to it. Now let us take a scenario where system XYZ has client 100, and say the logical system XYZ100 is assigned to client 100 of XYZ. You can again create any number of RFC destinations pointing to client 100 of XYZ, but SAP by default will pick only the RFC destination named XYZ100. If no such RFC destination exists, you again need to do the manual work described above.
However, when no logical system is involved, the issue becomes pretty simple. For example, say you have an ABAP program which fetches data through RFC calls from other systems, and you are executing the program in ABC to fetch data from XYZ. Here you can use any RFC destination (XYZ, XYZ1 or XYZ2), since you supply the same connection information when creating those RFC destinations. There is no need for a uniquely named RFC destination.
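To make that last point concrete, here is a minimal ABAP sketch. RFC_SYSTEM_INFO is a standard remote-enabled function module; 'XYZ1' stands for any SM59 destination that points to system XYZ, so the destination name does not have to match the logical system when the program names it explicitly.

* Sketch only: any destination pointing to XYZ works here, regardless of its name.
DATA: ls_info TYPE rfcsi,
      lv_msg  TYPE c LENGTH 255.

CALL FUNCTION 'RFC_SYSTEM_INFO'
  DESTINATION 'XYZ1'
  IMPORTING
    rfcsi_export          = ls_info
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg
    OTHERS                = 3.

IF sy-subrc = 0.
  WRITE: / 'Connected to system', ls_info-rfcsysid.
ELSE.
  WRITE: / 'RFC call failed:', lv_msg.
ENDIF.

The partner-profile generation described above is different: there SAP itself looks up the destination by the logical system name, which is why the names have to match in that case.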
I hope this resolves your questions. Please let me know if you have any more questions on this topic. You are most welcome.
And if you are satisfied with the answers, please award points accordingly if it is possible for you to do so.
Regards.
Ruchit.

Similar Messages

  • SAP-JEE, SAP_BUILDT, and SAP_JTECHS and Dev Configuration questions

    Hi experts,
    I am configuring NWDI for our environment and have a few questions that I'm trying to get my arms around.  
    I've read we need to check-in SAP-JEE, SAP_BUILDT, and SAP_JTECHS as required components, but I'm confused on the whole check-in vs. import thing.
    I placed the 3 files in the correct OS directory and checked them in via the check-in tab in CMS. Next, the files show up in the import queue for the DEV tab. My question is: what do I do next?
    1. Do I import them into DEV? If so, what is this actually doing? Is it importing into the actual runtime system (i.e. the DEV checkbox and parameters as defined in the Landscape Configurator for this track)? Or is it just importing the file into the DEV buildspace of the NWDI system?
    2. The same question goes for the Consolidation tab. Do I import them here as well?
    3. Do I need to import them into the QA and Prod systems too? Or do I remove them from the queue?
    Development Configuration questions ***
    4. When I download the development configuration, I can select the DEV or CONS workspace. What is the difference? Does DEV point to the sandbox (or central development) runtime system and CONS to the consolidation runtime system as defined in the Landscape Configurator? Or are these the DEV and CONS workspaces/buildspaces of the NWDI system?
    5. Does the selection here dictate the starting point for the development? What is an example scenario where I would choose DEV vs. CONS?
    6. I have heard about the concept of a maintenance track and a development track. What is the difference, and how do they differ from a setup perspective? When would a developer pick one over the other?
    Thanks for any advice
    -Dave

    Hi David,
    "Check-In" makes SCA known to CMS, "import" will import the content of the SCAs into CBS/DTR.
    1. Yes. For these three SCAs specifically (they only contain buildarchives, no sources, no deployarchives) the build archives are imported into the dev buildspace on CBS. If the SCAs contain deployarchives and you have a runtime system configured for the dev system then those deployarchives should get deployed onto the runtime system.
    2. Have you seen /people/marion.schlotte/blog/2006/03/30/best-practices-for-nwdi-track-design-for-ongoing-development ? Sooner or later you will want to.
    3. Should be answered indirectly.
    4. Dev/Cons correspond to the Dev/Consolidation system in CMS. For each developed SC you have 2 systems with 2 workspaces in DTR for each (inactive/active)
    5. You should use dev. I would only use cons for corrections if they can't be done in dev and transported. Note that you will get conflicts in DTR if you do parallel changes in dev and cons.
    6. See the link in No. 2.
    Regards,
    Marc

  • Configuration question on css11506

    Hi
    One of our VIPs, with 4 local servers, currently uses HTTPS; HTTP is redirected to HTTPS.
    Now my client has a requirement that a series of directories need to use HTTP, not HTTPS. Something like the questions below:
         1. Is it possible to configure the VIP to filter out those specific directories and let them use HTTP instead of HTTPS, while the rest of the pages and directories still redirect to HTTPS?
         2. If not, can I make another VIP using the same local servers, but limited only to those specific directories? And with wildcards? The directories are partially wildcarded, something like http://web.domain/casedir*/casenumber.
         3. If neither option works, is there any other way I can fix this problem?
    Any comments will be appreciated
    Thanks in advance
    Julie


  • Configuration Question on  local-scheme and high-units

    I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I would appreciate it if you could answer them ASAP.
    - My requirement is that I need only 10000 objects to be in the cluster so that the resources can be freed up when other caches are loaded. I configured the <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
    - Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
    I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
    <distributed-scheme>
      <scheme-name>TestScheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <high-units>10000</high-units>
          <eviction-policy>LRU</eviction-policy>
          <expiry-delay>1d</expiry-delay>
          <flush-delay>1h</flush-delay>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    Thanks
    Ravi

    I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I would appreciate it if you could answer them ASAP.
    - My requirement is that I need only 10000 objects to be in the cluster so that the resources can be freed up when other caches are loaded. I configured the <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
    It is per backing map, which is practically per node in the case of distributed caches. Since the limit applies to each node's own backing map, a 12-node cluster with high-units of 10000 can hold far more than 10,000 entries in total before any single node starts evicting, which is why you see roughly 15,800 objects cluster-wide.
    - Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
    Yes, you can get this and quite a lot of other information via JMX. Please check this wiki page for more information.
    I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
    <distributed-scheme>
      <scheme-name>TestScheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <high-units>10000</high-units>
          <eviction-policy>LRU</eviction-policy>
          <expiry-delay>1d</expiry-delay>
          <flush-delay>1h</flush-delay>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    Thanks
    Ravi
    Best regards,
    Robert

  • Closed loop configuration question

    I have a motor (with encoder feedback) attached to a linear actuator (with end limit switches). The motor has a commercially bought servo drive for control. The servo drive will accept either step/direction (2 separate TTL digital pulse train inputs) or an analog -10 to +10 VDC input for control.
    The purpose is to drive the linear actuator (continuously in and out) in closed-loop operation, using a setpoint variable (SV) value from a file, converted to a frequency, to compare with an actual measured position variable (PV) frequency.
    I have created and experimented with individual VIs that allow analog control and digital pulse train control (thankfully with the help of examples).
    Before I pose my question, I would like to make the following observations: it is my understanding that closed-loop control means that I don't need to know an exact position at which to drive, but rather rely on constant comparison of PV and SV through PID application. Without getting into any proprietary information, I can say that the constant positioning of the linear actuator will produce a latency of 2 to 3 seconds between the time the actuator moves to a new position and when the PV changes. While experimenting with the analog input, I noticed an immediate response in motor velocity, but after the motor is stopped, position is not held in place. However, while experimenting with the digital pulse train input, I noticed that the servo drive can only accept one command at a time; if, halfway through a move, position error produces a response to move the linear actuator in the opposite or a different direction, the original move must finish first.
    Can anyone recommend the proper configuration for the closed-loop control I have described?
    If I can make the system work with the servo drive/motor, I plan to use the simple (PCI-6014) DAQ card with the analog out, or utilize the digital out. If I can't get this to work, we do have a PXI with a 7344 motion card (but I would like to exhaust all efforts to use the PCI-6014 card). Depending on where I go from here, I planned to use the PID VIs for the loop control.
    Thanks,
    Wayne Hilburn

    Thanks for the reply, Jochen. I realize there is a built-in latency with Windows, but I think the I/O control would be OK. A change in actuator position will not result in an immediate change in the process variable. Is there a way to measure the latency, or is it calculated? A satisfactory reaction time would be from 1 to 1.5 s.
    The PCI-6014 is used to supply the control output to the servo drive/amp, not to drive the motor itself. As stated earlier, while using the 6014 board I have the choice of digital or analog output.
    Currently I am at a point where I must choose which configuration to use, analog control or digital control (in the form of a digital pulse train). I am inserting from my first message: while experimenting with the analog input, I noticed an immediate response in motor velocity, but after the motor is stopped, position is not held in place. However, while experimenting with the digital pulse train input, I noticed that the servo drive can only accept one command at a time; if, halfway through a move, position error produces a response to move the linear actuator in the opposite or a different direction, the original move must finish first.
    I don't claim to understand all the limitations of the specific boards; however, I am using an approach that is showing me the characteristics (a couple are listed in the paragraph above) of the hardware and software configurations.
    So I am really back to my original question: which configuration would be better for closed-loop control, analog or digital pulse train?
    Thanks,
    Wayne Hilburn

  • Multiple Oracle Configuration Question

    We have a typical environment setup. I will explain it below:
    Our application works in online and offline mode. In online mode we connect to an Oracle 10g Enterprise Edition server and a local instance of Access; in offline mode the application works entirely in Access.
    Now we want to move away from Access and use Oracle PE instead, simply because we want to use stored procedures and the same set of code for offline and online processing.
    So a typical user machine will have a PE instance and an Oracle client. Currently we use LDAP.ora for configuring connections. Now I have a few questions:
    1. How do we ensure that Oracle PE will work when we don't have a network connection? Can we have the PE set up with tnsnames.ora?
    2. What is the smallest possible package for PE?
    3. Can I use one client to access both the PE and server databases?
    Any help will be highly appreciated.
    Thanks in advance.

    Assuming the "Xcopy installation" refers to using the Windows xcopy command, can you clarify what, exactly, you are installing via xcopy? Are you just using xcopy to copy the ODP.Net bits? Or are you trying to install the Oracle client via that approach?
    If you are concerned about support, you would generally want to install everything via the Oracle Universal Installer (barring those very occasional components that don't use the OUI). Oracle generally only supports software installed via the installer because particularly on Windows, there are a number of registry entries that need to get created.
    You can certainly do a custom install of the personal edition on the end user machines. There are a few required components that I believe have to be installed (that the installer will take care of). I assume your customization will take the form of a response file to the OUI in order to do a silent install?
    Justin

  • CCMS configuration question - more than one sapccmsr agent on one server

    Hello all,
    this might be a newbie question, please excuse:
    We have several SAP systems installed on AIX in several LPARs. The SAP application server and the SAP database are always located in different LPARs, but one LPAR can host the application servers of several SAP systems, or the databases of several SAP systems.
    So I want to configure SAPOSCOL and CCMS agents (sapccmsr) on our database LPARs. SAPOSCOL is running - no problem so far. Because we have DBs for SAP systems with kernels 4.6D, 6.40 (NW2004) and 7.00 (NW2004s), I want to use two different CCMS agents (version 6.40 non-Unicode to connect to SAP 4.6D and 6.40, plus version 7.00 Unicode to connect to SAP 7.00).
    AFAIK only one of these can use shared memory segment #99 (the default); the other one has to be configured to use a different segment (e.g. #98), but I don't know how (I couldn't find any hints in OSS, the online help, or the CCMS agent manual).
    Any help would be appreciated
    regards
    Christian

    Hello,
    has really no one ever had this kind of problem? Do you all use one (e.g. Windows) server per application (i.e. SAP application server or database), or the same server for application and database? Or don't you use virtual hostnames (aliases) for your servers, so that in all the mentioned cases one CCMS agent per server would fit your requirements? I can hardly believe that!
    kind regards
    Christian

  • Master iPad configurator question concerning cart syncing with different versions of iPads.

    I have a question concerning Configurator syncing. Can the master iPad be a different model of iPad than the other synced iPads? For instance, can an iPad 2 be the master iPad for a group of iPad Airs? The iPad 2 has somewhat fewer capabilities than the Air; would some settings or restrictions be left off the iPad Airs if they were set up this way? Thanks.

    There is no such thing as a 'master iPad'. If you're using Configurator or Profile Manager, control of the setup is done from a Macintosh.

  • Setup/ Configuration Question

    The network setup I'm using is a wireless G router, Belkin brand, and I have four WRE54G expanders throughout the warehouse. I don't have WEP turned on, so I used the auto configuration on all of them. They all made a connection and they all work. However, each one of them tends to lose its connection from time to time. At least once every couple of weeks I have to reset one of them. When they stop working the red light comes on and I know the link is gone. I'll reset it and everything will be fine for a while.
    I thought I'd try to get into the web utility to check whether any settings are off; however, because it has been auto-configured, I'm not sure what the IP address is. It's not the default. We use a 10.x.x.x range, and I've scanned the entire range, and none of them show up on an IP. The only thing that does, besides the computers connecting, is the router. I've run the Linksys setup utility and had it do a site survey, but it keeps coming back saying that the site survey failed. I hate to take them all down from where they are mounted and physically connect them to check things, but I'm not sure if that's the problem. Any ideas would be appreciated. Thanks. If the question I am asking here has already been addressed, please point me to the related thread.

    It's still kind of funny that the site survey fails. Are you using a Vista computer? I saw that using an XP computer solved someone's problem.
    If you can't get the setup software to work, you really need to find out what IPs were assigned to the REs to access the web interface, or hard-wire them (v2 and v3). I know it is a lot of work; I guess you have to assess how much of a hassle it is.
    I don't know if the RE shows up in ipconfig /all. I guess it should, since it has a unique IP address. Mine's the default, so I'll check and repost.
    I'm wondering what the effect is of having 4 REs in relatively "close" proximity. When mine loses its connection (infrequently), the light turns red but immediately turns blue because it connects to the router again (I've got only 1 RE). Would yours connect to the router or to another RE? If it connects to another RE, I guess you lose half the speed again. Interesting...
    Also, other than the blue lights, do you have any other indication that the REs are working, e.g. increased signal?

  • Oracle hardware and storage solution configuration questions

    Hi all,
    I am configuring hardware and the storage solution for a project and am hoping to have some questions answered about using Oracle as the storage solution.
    The current setup will have 2 Dell NX3100 NAS gateways, each with dual quad-core processors, 24 GB of RAM, 12 x 2 TB data disks, and running Windows Storage Server 2008 64-bit as the OS.
    We will also have direct-attached storage of 2 Dell PowerVault MD1200 disk arrays, each disk array with 12 x 2 TB SAS disk drives, giving a total of 36 TB of storage space for each NAS gateway.
    Based on this information, is there any problem with two Oracle Standard Edition installations (1 per NAS) holding up to 36 TB of data (mostly high-res images) in this hardware configuration?
    Does Oracle have a built-in solution for replicating data between the 2 NAS heads and down to the disk arrays, where the application server will write to one of the NAS+disk arrays and that data is then written from the first NAS to the 2nd NAS+disk array? Currently I've used DoubleTake in other projects, but I am wondering if Oracle has something similar built in.
    Finally, will the Backup Exec Oracle agent work with this configuration for backing up the data to a Dell PowerVault ML6020 tape backup device?
    Thanks in advance for any insight.

    Hi,
    Does Oracle have a built-in solution for replicating data between the 2 NAS heads and down to the disk arrays? Where the application server will write to one of the NAS+disk arrays and then that data is written from the first NAS to the 2nd NAS+disk array? Currently I've used DoubleTake in other projects but am wondering if Oracle has something similar that is built in.
    With NAS I would still have doubts during network issues (in the case of RAC, all nodes would get affected), so I would certainly not suggest it. Let the other experts reply back.
    - Pavan Kumar N

  • SQL Server 2012 Failover Cluster configuration questions

    Hi,
    I have a few questions on SQL Server 2012 failover clustering; please provide suggestions:
    1) In SQL Server 2012, is there a configuration for an active/passive failover cluster installation? If so, how is it done? If you could provide any links or articles, that would help.
    OR
    Has this been replaced by AlwaysOn Availability Groups?
    2) Also, in our environment we have done an active/active installation, but my understanding is that at any given time only one node has ownership of the shared storage, versus both nodes; is that correct? If not, please provide an explanation.
    Any additional information would be valuable in clearing my doubts.
    Thank you
    Malini

    Hi malinisethi,
    If you install SQL Server in a cluster and configure an Active-Passive cluster: on the first node, select the "New SQL Server failover cluster installation" option. When installing an Active-Passive cluster, we have to specify one virtual/network name. (Note: for Active-Active clustering you specify different network names, one per instance.) Installing on the other nodes is similar to installing on the first node, except that we select the "Add node to a SQL Server failover cluster" option from the initial menu. For more information, there is a similar discussion about SQL Server 2008 Active-Passive/Active-Active cluster installation; you can review the following article.
    http://sqldbpool.com/2009/10/07/sql-server-2008-active-passive-cluster-installation/
    About an Active/Active SQL cluster: two clustered SQL Server instances are created on different nodes, and then the Active/Active configuration is applied to both instances. There is an example of creating an Active/Active SQL cluster using Hyper-V; you can review the following article.
    http://blogs.msdn.com/b/momalek/archive/2012/04/11/creating-an-active-active-sql-cluster-using-hyper-v-part2-the-clustered-instances.aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Remote configuration question

    Hello! I have several questions about the configuration of my system, which is shown in the picture.
    I need the ability to program the second FPGA (Virtex-6) and its flash memory. I'm going to send an MCS or BIT file via Ethernet to FPGA 1 (Kintex-7), then program FPGA 2 or its flash in JTAG mode. First of all, I need to develop the JTAG configuration logic in FPGA 1. Then I have these questions:
    1) Is it possible to detect the flash memory of FPGA 2 in the JTAG chain and program it via FPGA 1? Or is it only possible to program it using iMPACT and a JTAG programmer?
    2) I would like to know: if my JTAG configuration logic (in FPGA 1) has mistakes, is it possible to damage FPGA 2 by sending wrong bit sequences while configuring it on the fly?
     

    XSVF is something like taking a straightforward iMPACT programming process and recording the transitions of the JTAG signals. Then what you do is "play back" the recording to make the same thing happen within your target system. Pretty much anything you do in iMPACT, including indirect flash programming (SPI or BPI), can be converted into XSVF.
    You could also roll your own JTAG conversion code, but I think that would take a lot more effort. I would not be too worried about damaging the FPGA, however. Typically, errors in the configuration process are detected as CRC errors and prevent the part from running bad code.

  • ALE Configuration Questions

    Hello Experts,
    I have a few questions regarding ALE configuration; help appreciated.
    1) Is it necessary to create a user for the ALE transfer in both the sender and the receiver system?
    2) Is it necessary to create logical systems for both sender and receiver in both systems?
    3) Can the tRFC port created in the sender system have a different name than the logical system name?
    Is it OK if the configuration is done only in the sender system, if I am looking for one-way transfer of IDocs from sender to receiver? The receiver will receive the IDocs and process them internally; in my case the sender is responsible only for sending.
    I have referred to the link below:
    7 Steps For ALE Configuration - ABAP Development - SCN Wiki
    Best Regards,
    Ameya B.

    Hi Mili,
    You do not need to generate the partner profile during the distribution model view process (BD64). You can add the partner profile manually via WE20 for both inbound and outbound parameters. As long as the distribution model has been set up and distributed, you can continue with the next steps, for example generating IDocs (see the sketch below) and so on.
    Sometimes you may face an issue generating the partner profile from BD64 for some reason. I find that generating the partner profile manually is always quick, safe, and keeps you away from troubleshooting errors.
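    Once the logical systems, port and WE20 partner profile are in place on the sender, an outbound IDoc can be dispatched from a program. The following is only a rough ABAP sketch: ZMSGTYP, ZIDOCTYP and Z1SEGMENT are made-up names, and your own message type, IDoc type and segment definitions would go in their place.

    * Sketch: send one outbound IDoc, relying on the ALE configuration above.
    DATA: ls_control TYPE edidc,
          lt_comm    TYPE TABLE OF edidc,
          lt_data    TYPE TABLE OF edidd,
          ls_data    TYPE edidd.

    ls_control-mestyp = 'ZMSGTYP'.   " message type (hypothetical)
    ls_control-idoctp = 'ZIDOCTYP'.  " basic IDoc type (hypothetical)
    ls_control-rcvprt = 'LS'.        " receiver partner type: logical system
    ls_control-rcvprn = 'XYZ100'.    " receiver logical system from the WE20 partner profile

    ls_data-segnam = 'Z1SEGMENT'.    " hypothetical segment
    ls_data-sdata  = 'payload goes here'.
    APPEND ls_data TO lt_data.

    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control            = ls_control
      TABLES
        communication_idoc_control     = lt_comm
        master_idoc_data               = lt_data
      EXCEPTIONS
        error_in_idoc_control          = 1
        error_writing_idoc_status      = 2
        error_in_idoc_data             = 3
        sending_logical_system_unknown = 4
        OTHERS                         = 5.

    IF sy-subrc = 0.
      COMMIT WORK.  " the IDoc(s) created above are dispatched on commit
    ENDIF.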
    Hope this will help.
    Regards,
    Ferry Lianto
    Please reward points if helpful.

  • Email configuration question

    Hi-
    I'm having a problem configuring my email accounts for my iPhone. I currently use an Exchange email account through school, linked to Outlook 2007 on my computer (PC). I also have a couple of Gmail and Yahoo accounts that I like to download to my iPhone and computer. My iPhone recently had to be restored, and now I'm trying to figure out what is wrong with my new settings.
    Currently, my emails all download and display properly on my computer. However, emails display twice on my phone, making it appear that they are in both the original account and the Exchange account. This could also be an issue with how my Outlook is set up (I'm not sure), but, given the option, I'd prefer not to store outside emails in my Exchange account.
    I apologize for the long-winded question, but does anyone have any suggestions?
    Thanks!

    You need to set the outgoing mail settings to your SMTP server and port. Check your Outlook settings; the standard SMTP server port is 25 (or 587 for authenticated submission).

  • PI Configuration question....

    Hi,
    It is for PI SP12 on AIX and Oracle.
    I am connecting CRM 5.0 to PI 7.0. CRM 5.0 is an ABAP+J2EE system. I have created two RFC destinations on the CRM ABAP side (SAPSLDAPI and LCRSAPRFC). Both of these destinations point to the PI host and PI gateway.
    The application system has a J2EE stack as well, so there must be a setting in the Visual Administrator too. So I go to Server -> Services -> JCo RFC Provider.
    As far as I have understood, I have to maintain two entries for the two RFCs with the name of the program ID (e.g. LCRSAPRFC_<SID>).
    My question is whether the values in the RFC Destination and Repository sections should point to the PI host and PI gateway, or to the CRM host and CRM gateway.
    I think they should point to the PI host and PI gateway, and the values of "Gateway Host" and "Application Server Host" will be the same.
    Please correct me if I am wrong.
    I will appreciate your help.
    Regards.
    Sume

    Thank you for a very helpful answer.
    One more confusion/question:
    Suppose my SLD host and SLD gateway are different from the PI host and PI gateway; will your reply above still be true?
    Because when we create an entry in SLDAPICUST in the application system, we enter the host and port of the system where the SLD is running. We do not enter the host and port of the PI system. And yes, this PI should also be pointing to the same SLD.
    Please reply.
    I will appreciate your reply.
    regards.
    Sume
