Avoid starting the same VM on multiple servers? (data corruption)

Hello,
we are using OracleVM without manager.
Setup: 2 servers connected to one SAN device. The SAN device is connected via 2 FC links, handled by device-mapper multipath. The /OVS file system is OCFS2 and lives on the SAN. Both nodes have /OVS mounted to make live migration possible.
When testing this setup, we found that it is possible to start VM1 on both servers at the same time. This is guaranteed to lead to data corruption.
Is there a way to prevent the same VM from being started on multiple servers at the same time? Any hints?
Regards,
Robert

I investigated this issue some more and found the very useful dlmfs that ships with OCFS2 (see http://oss.oracle.com/projects/ocfs2/src/branches/ocfs2-1.2/dlmfs.txt).
I played around with it and found the following:
1) The O_NONBLOCK mode mentioned there does not work (the open always fails)
2) Locks are cleared when a machine leaves the cluster (is disconnected or fails)
3) The open call blocks until the lock is released
So I was thinking about the following solution:
Write a wrapper for "xm" that calls "xm.orig". When a VM is started, the wrapper spawns a daemon process that keeps a lock file /dlm/xen/MachineName open for as long as the VM is running. When you shut the VM down, the daemon exits and the lock is released; only then can the VM be started, and the lock taken, on another server (see the sketch below).
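A minimal sketch of such a wrapper, assuming dlmfs is mounted at /dlm and the real binary has been renamed to xm.orig (the domain-name handling and the polling interval are simplified guesses, not tested on OracleVM):
    #!/bin/bash
    # Hypothetical wrapper installed as /usr/sbin/xm; the real xm is xm.orig.
    XM=/usr/sbin/xm.orig

    if [ "$1" = "create" ]; then
        vm="$2"            # simplification: assume the VM name is the second argument
        mkdir -p /dlm/xen  # a dlmfs directory is a lock domain
        (
            # Opening a dlmfs file read-write takes a cluster-wide exclusive
            # lock; the open blocks until no other node holds the lock.
            exec 3<>"/dlm/xen/$vm" || exit 1
            "$XM" create "$vm" || exit 1
            # Keep fd 3 (and with it the lock) open until the domain is gone.
            while "$XM" list "$vm" >/dev/null 2>&1; do
                sleep 10
            done
        ) &
    else
        exec "$XM" "$@"    # pass every other subcommand straight through
    fi
dlmfs holds the lock only while the file descriptor stays open, so if the whole server dies the cluster clears the lock automatically (your point 2 above).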
The problem is a crashing VM. If a VM crashes, the lock is not released, because the daemon is still running.
Is there a way to get notified of a crashing VM in OracleVM, or is there some way to hook into the "oncrash" routines of Xen?

Similar Messages

  • Golden gate extract from multiple oracle db's on different servers

    Hello,
I am new to GoldenGate and I want to know whether it is possible to extract data from an Oracle database that is on a different server. Below is the server list:
Linux server 1: has an Oracle database (11.2.0.4) (a1db) and GoldenGate (11.2.1.0.3) installed.
Linux server 2: has an Oracle database (11.2.0.4) (b1db).
a1db and b1db are not clustered; they are 2 separate instances on 2 different servers.
Is it possible to capture change data on b1db from the GoldenGate installation on Linux server 1? I am planning to use classic capture.
With an architecture like this, can it be done? If so, which options would I use in the extract?
    Thanks,
    Arun

    Here is something from my personal notes; hope this helps:
    Standby or Off Host Environment
GoldenGate extracts, data pumps and replicats can all work with database environments accessed over TNS. When one of these processes needs to work with a database environment over TNS, then instead of the following USERID specification:
setenv (ORACLE_SID = "GGDB")
    USERID ggsuser, PASSWORD encrypted_password_cipher_text
    The following USERID specification would be used:
    USERID ggsuser@GGDB, PASSWORD encrypted_password_cipher_text
    When this specification is used the setenv line is not required since the process will connect over TNS.
When a data pump or replicat runs in a standby or otherwise off-host environment, the USERID specification above is the only special requirement. It is recommended that the TNS entry contain the necessary failover and service-name configuration, so that if or when a switchover or failover occurs the process can continue once the environment is available again. If the data pump uses the PASSTHRU parameter, no USERID specification is required at all: in PASSTHRU mode the data pump does not need a database connection to evaluate the metadata.
When a source extract runs in a standby or otherwise off-host environment, the USERID specification above is required, and so is Archive Log Only mode. As with the data pump, the TNS entry should contain the necessary failover and service-name configuration. The source extract requires a database connection in order to evaluate the metadata for the changes in the archived redo logs, and since it runs in an environment separate from the source database it cannot read the online redo logs; therefore it must be configured in Archive Log Only mode. If the environment the source extract runs in is a standby environment, it will continue to evaluate the archived redo logs through a switchover.
The standby or off-host environment has minimal requirements: availability of the Oracle software and storage for the archived redo logs. If GoldenGate runs in a standby database environment, it can use the shared libraries of that environment. If it runs on a server with no database environment at all, a client installation is required at a minimum; this gives GoldenGate the shared libraries it needs to satisfy its dynamically linked library dependencies. The archived redo logs must also be available for GoldenGate to read, either on shared storage or on dedicated storage. A standby database environment works well for this purpose, as it receives archived redo logs on a regular basis, so GoldenGate can leverage them without imposing any additional infrastructure requirements to capture the data changes from the source database. For archived-redo-log access only a minimal standby database is required: it merely needs to be mountable so that it can accept archived redo logs. Since GoldenGate connects to the primary database to evaluate the metadata contained in the archived redo logs, a complete standby database is not required.
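As a hedged illustration of these notes, an off-host classic-capture extract parameter file might look roughly like this (the extract name, archive-log path, trail and table are hypothetical; TRANLOGOPTIONS ARCHIVEDLOGONLY and ALTARCHIVELOGDEST are the usual Archive Log Only settings):
    EXTRACT offext
    USERID ggsuser@GGDB, PASSWORD encrypted_password_cipher_text
    TRANLOGOPTIONS ARCHIVEDLOGONLY
    TRANLOGOPTIONS ALTARCHIVELOGDEST /u01/arch
    EXTTRAIL ./dirdat/ea
    TABLE scott.*;
Because the USERID string contains @GGDB, the process connects over TNS and no setenv (ORACLE_SID = ...) line is needed.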

  • How to avoid multiple DataConnections with LCD ES2

Hi, we are just starting to use data connections to talk to a database from LiveCycle Designer. It seems we are missing something important regarding manipulation of data from the PDF.
At first we tried to apply an INSERT command to our first table and to browse through the items in the database. We are able to browse through the database only before we have inserted something.
If we insert an item and then try to browse through the database (Next, Previous, Last or First), it crashes with the following error:
    (Next, Previous, Last or First) failed. Multiple-step operation generated errors. check each status value [ID:@11]
So then we decided to create a second data connection containing each of the columns in the database except for the ID, which appears to be what makes everything crash.
    E.G.:  Table_1
    DataConnection1 --> ID, Field1, Field2, Field3, Field4          SELECT Command connection
    DataConnection2 --> Field1, Field2, Field3, Field4               INSERT Command connection
It seems we can't have a SELECT and an INSERT in the same DataConnection, because with those 2 separate connections it works fine.
Then we try to show multiple rows in a table that is linked to the ID selected in DataConnection1. Showing the data works fine using a 3rd connection for Table_2; we then make sure a blank row is always present at the end of the table so a new entry can be INSERTed into Table_2 with an Add button. Unfortunately we cannot create a 2nd connection to that table, because we cannot link those fields to the database for the purpose of a multiple-entry view.
We have tried to make a 2nd connection for the purpose of the INSERT, but it doesn't work at all.
    We are basing ourselves on the sample provided by Stefan Cameron in his blog http://forms.stefcameron.com/2006/12/18/databases-inserting-updating-and-deleting-records/
I am wondering whether we are using the right functionality, and whether this is the simplest way to work with databases...
    If anyone can help, it would be greatly appreciated!!
    Thanks in advance!
    Mag

Don't forget to activate the RESOURCE_LIMIT parameter, whose default is FALSE:
    alter system set RESOURCE_LIMIT = true;
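For context, RESOURCE_LIMIT is the switch that makes profile limits such as SESSIONS_PER_USER effective, which is the usual way to block multiple concurrent logins for a database user; a minimal sketch (the profile and user names are made up):
    alter system set RESOURCE_LIMIT = true;
    create profile one_session limit sessions_per_user 1;
    alter user appuser profile one_session;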
Laurent, I had a similar problem some time ago: I didn't want to prevent multiple access, only to control who was doing what, because when moving from client/server to the web the TERMINAL column in V$SESSION becomes useless.
I tried your solution, but I had to give up on it, because in my Forms9i application some forms call Reports, which generates a new session.
I decided to use DBMS_APPLICATION_INFO, and this is satisfactory for my requirements, but I'm interested in discovering other solutions.
P.S. With my solution I'm able to limit accesses, because in the CLIENT_INFO string I put, among other things, the application user, so I can check whether a user is already connected. The problem is that existing applications have to be modified... :-(
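A minimal sketch of that approach (the tag format and names are arbitrary choices):
    -- Right after the application connects:
    BEGIN
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO('appuser=JSMITH');
    END;
    /
    -- Later, the application (or a DBA) can check whether that
    -- application user is already connected:
    SELECT sid, serial#, username, client_info
    FROM   v$session
    WHERE  client_info LIKE 'appuser=JSMITH%';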

CFLOGIN works, but not simultaneously on different servers/browsers

    I'm using CFLOGIN with application.cfc which works great when I test it - I'll call it login session A in browser window 1.
When I simultaneously log into the same app on a different server with a different username (login session B in browser window 2), I can't log in - unless I log out of session A/browser window 1 first. Something in my new code is preventing me from logging into my app more than once, even when the apps are on separate servers and I'm using different usernames.
    We have the same app on various servers (test/development/production), and I used to be able to login on 2-3 browsers or servers at a time - and I never had a problem until recently when I made some changes to the application.cfc and login code.
I updated the code because previously the session scope and login credentials were not being initialized and terminated together (upon login/logout). Before, a user clicking 'logout' cleared the session scope without invoking CFLOGOUT. I fixed that, but now I have another problem: I can't log into the application on two different browsers or servers at the same time (even with different usernames). Any suggestions would be appreciated.
    <cfcomponent displayname="Application" output="false">
        <cfset this.name = 'SampleApp'>
        <cfset this.SessionManagement = true>
        <cfset this.SetClientCookies = true>
        <cfset this.SessionTimeout = CreateTimeSpan( 0, 0, 5, 0 ) />
    <cffunction name="onSessionStart" access="public" returntype="void" output="false">
    <cfset session.hostname = 'http://' & CGI.HTTP_HOST & '/'>
        <cfset session.dbase = 'localdb'>
        <cfset session.roles = ArrayNew(1)>
        <cfreturn>
    </cffunction>
    <cffunction name="onApplicationStart" access="public" returntype="boolean" output="false">
        <cflog file="SampleApp" type="information" text="Application started." />
        <cfreturn true>
    </cffunction>
    <cffunction name="onApplicationEnd" returntype="void" output="false" hint="Executes on session timeout or if server shuts down.">
        <cfcookie name="CFID" value="#CFID#" expires="now">
        <cfcookie name="CFTOKEN" value="#CFTOKEN#"  expires="now">
        <cfreturn>
    </cffunction>
    <cffunction name="onRequestStart" access="public" returntype="void" output="true" hint="Executes before each page processes.">
        <cfargument name="targetPage" type="String" required="true"/>
            <cfsilent>
            <cfif GetAuthUser() NEQ ''>
                <cfif NOT isDefined('session.uname')>
                    <cfif CGI.HTTP_REFERER DOES NOT CONTAIN "login.cfm">
                        <cfinclude template="expired.cfm">
                        <cfabort>
                    </cfif>
                </cfif>
            </cfif>
            <cflogin>  
                <!--- Flash Remoting setCredentials() passes cflogin.user and cflogin.password using checklogin.cfc --->
                <cfif IsDefined('cflogin')>        
                    <cfquery name="qValidateLogin" datasource="#session.dbase#" username="#cflogin.name#" password="#cflogin.password#">
                        SELECT    role
                        FROM    session_roles
                    </cfquery>
                    <cfif qValidateLogin.RecordCount GT 0>
                        <cfloginuser name="#cflogin.name#" password="#cflogin.password#" roles="#qValidateLogin.ROLE#">
                        <cflog text="User - #cflogin.name#" type="Information" file="Filename" date="yes" time="yes">
                    <cfelse>
                        <cfinclude template="login.cfm">
                        <cfabort>
                    </cfif>
                <cfelse>
                    <cfif right(arguments.targetPage,10) is "logout.cfm">
                        <cflocation url="index.cfm">
                        <cfabort>
                    <cfelse>              
                        <cfinclude template="login.cfm">
                        <cfabort>  
                    </cfif>          
                </cfif>
            </cflogin>
            </cfsilent>      
    </cffunction>
    </cfcomponent>

Oh no, that was my mistake. Thank you for catching that! That query is currently not being used on my Application.cfc page, which is why I wasn't having problems with it, but I'm glad you pointed it out to me. I had that code in my Application.cfc file just in case I wanted to check login from somewhere else, but my login query is actually called with Flash Remoting using setCredentials() to connect to login.cfc. This is the correct query in my login.cfc file:
                <cfquery name="qValidateLogin" username="#session.uname#" password="#session.pword#" datasource="#session.dbase#">
                    select role from session_roles where role like 'xxxxx%'     
                </cfquery>
I do think I figured out a solution to my problem though. I found out how to use applicationToken, which, if not set, defaults to the value of application.Name. If you want users to be able to log into multiple instances of your application at the same time, you give applicationToken the same value in each. This would be great for clustered servers or sites with sub-domains.
If you want to force only one login across the three different sites, you give it a different value in each. VERY useful. So I have:
    <cfcomponent displayname="Application" output="false">
        <cfset this.name = 'SampleApp'>
        <cfset this.SessionManagement = true>
        <cfset this.SetClientCookies = true>
        <cfset this.SessionTimeout = CreateTimeSpan( 0, 0, 5, 0 ) />
    <cfset this.loginStorage = "session">
        <cfset this.applicationtoken = 'SampleAppSub'>
    Now, I can log into this site on my development machine with multiple browsers pointing to the same site hosted on different servers - with no problem. I never had an issue with this before, but something else I recently added into my code in Application.cfc made this not work. I should probably also mention that I work on many different applications that all use the same application.Name even though they are different sites - we do this so the same settings can be deployed on different servers.
    With the applicationToken settings, I have it working again. Perhaps what made it break was setting this.loginStorage = "session"? Before this was not set and was using the default value of  "cookies" which I didn't want - because my site is used by different people on the same box and we have clustered servers.

  • DAQmx in Event structure / avoid multiple events

    Hi all
I've started creating a program for acquisition and analysis of data. I don't have much experience in LabVIEW, but it's important to me to start this application the right way. Can you give me some advice about the things below?
I prepared a template with an Event structure, based on an article.
The first question is exactly how to place the DAQmx blocks in it. Right now they are all in one state, "test", of the case structure, but I'm not sure that's correct, because most of the examples on the NI site have the initialization (like sample rate, number of samples) outside of the loop. It's important to be able to change parameters between measurements.
Second: how to avoid multiple events. For example: the TEST button is pressed and a measurement is taken. The measurement takes a long time; meanwhile the user gets bored and accidentally presses some other buttons (maybe even TEST again). When the measurement completes, it would be a good idea to discard the events from those accidental presses. Is that possible? And what's the best approach when there are a lot of buttons (once the analysis part of the program is added)?
    Best Regards
PS: Sorry about my English.
    Attachments:
    pgm.vi ‏33 KB
    Enum_Events.ctl ‏5 KB

    Dear Finch!
    Welcome to NI Forums!
My first advice regarding your code would be to use shift registers instead of queues for your state storage, since (as you've said) there is no reason to store multiple events that have happened. The state machine design pattern, which I strongly recommend in this scenario, is built into LabVIEW; you can use it as a template if you go to the New... menu.
Please check out these materials for further discussion of state machines.
You are correct that most of the DAQmx VIs can be placed outside of the loop; only DAQmx Read (the function we actually use more than once) must be placed inside the loop, the rest can stay out. If you want to modify some parameters (like Timing) mid-execution, you only have to stop the task, set them, then start it again.
This can easily be done in a separate state, which executes only when some parameters have changed.
Also, if you want your user to be unable to interact with certain controls while the test is being taken, you can programmatically disable them with a Property Node.
    Please get back to me if you have any other questions.
    Best regards:
    Andrew Valko
    NI Hungary

  • What are the entities that can be re-used in different servers, SI App, SI instance? And how?

    Greetings,
    What are the entities that can be re-used in different servers, SI App, SI instance? And how?
e.g., can I use an IQStreamable deployed in app1 from app2?
Can I use an observable deployed in app1/siInstance1/Server1 in another query in app3/siInstance3/server2?
    On the presentation titled "04 – Installing, Deploying and Maintaining the SQL Server 2008 R2 StreamInsight Runtime Engine" with file name SQL10R2UPD05-DECK-04.pptx on ecn.channel9.msdn.com/o9/learn/SQL2008R2TrainingKit/Presentations/SQL10R2UPD05-DECK-04/SQL10R2UPD05-DECK-04.pptx
    It is mentioned one of the deployment option is "Deployment: Standalone Server"
    and it mention the following:
    "Use this option for the following scenarios:
    - Metadata objects need to be shared between applications
      - Event Types
      - Adapter Types
      - Query Templates
    - A data source registered with the server provides an event stream for another existing application"
Could you please provide a good example that explains the above statement?
    Cheers, Muhammad

    First, that statement - and those materials - refer to the "legacy" StreamInsight query/adapter model. They do not refer to how things work with the Reactive model introduced in version 2.1. Specifically, it talks about Dynamic Query Composition (DQC).
    You cannot use a deployed Observable in another instance of StreamInsight. You may be able to use them across applications in the same instance - off the top of my head, I'm not sure. I'm getting ready to get on a plane but will take a look at it later.
Typically, however, applications act as containers (comparable to .NET AppDomains), so I don't think that you'd be able to do this easily. That said, the code and assemblies can be reused across multiple instances/applications. You would have separate instances of the classes involved but you would be able to reuse the query logic. That's a common use case.
    Can you be more specific about your use case and what you are trying to accomplish here? It's possible that there are alternative ways to do what you are trying to do.
    DevBiker (aka J Sawyer)
    Microsoft MVP - Sql Server (StreamInsight)
    If I answered your question, please mark as answer.
    If my post was helpful, please mark as helpful.

How to avoid multiple calls to a function

In our data warehouse we have a huge receipt-row table where all metrics are stored in the local currency. On top of that we have views which convert the metrics to the desired currency.
So basically all the views look like this:
select geo_region,
product_group,
customer_group,
metric1 * (select get_exchange_rate(currency_id) from dual) metric1,
metric2 * (select get_exchange_rate(currency_id) from dual) metric2,
metricx * (select get_exchange_rate(currency_id) from dual) metricx,
group by ..
As we have about 20 metrics, we noticed that the function is called 20 times per row.
Is there really any way to avoid that? It shouldn't be necessary: it's the exact same call with the same input parameters over and over again.
We've tried a local sys_context and the performance is better, but the call to the context is still performed 20 times. Any ideas?

Can you avoid multiple function calls? Maybe, if, as in your example, all the function calls compute the same result. If they operate on different columns then you'll have to perform each call anyway.
Either way you should be able to eliminate the (near as I can tell) pointless subquery from dual.
If the values are always the same, you could save the repeated function calls (and subqueries!) by computing the value once in the query, selecting NULL placeholders for the other metrics, and filling them in with assignments after the initial fetch - something like
select initial_region,
         product_group,
         customer_group,
         metric1 * get_exchange_rate(currency_id) metric1,
         null metric2,
         ...
followed by
v_metric2 := metric1;
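Another option, sketched here with an assumed table name (receipt_rows) and the columns from the question, is to compute the rate once per row in an inline view and reuse it for every metric, so the function runs once per row instead of twenty times:
    select geo_region,
           product_group,
           customer_group,
           metric1 * rate metric1,
           metric2 * rate metric2
    from  (select /*+ no_merge */
                  t.geo_region, t.product_group, t.customer_group,
                  t.metric1, t.metric2,
                  get_exchange_rate(t.currency_id) rate
           from   receipt_rows t);
The NO_MERGE hint discourages the optimizer from merging the inline view back into the outer query, which could otherwise re-expand the function call once per metric.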
riedelme

  • Distribution Monitor for 2 different servers from 2 different sites

    Hello all,
    We are trying to use Distribution Monitor during a parallel Unicode Conversion on a SAP 4.7 system.
The source system and the target system are 2 different servers located at 2 different sites (more than 500 km apart).
    Questions:
    1. Can we use Distribution Monitor with 1 source server dedicated for the Export and 1 target server dedicated for the import of a package?
    2. If it is not possible, what are the constraints in fact?
    3. Can we have a scenario where Distribution Monitor is used on the source system in order to use the parallelism benefit and Migration Monitor used on the target system?
    Thanks for your help & feedback,
    Chris

    Hi Chris,
1. Can we use Distribution Monitor with 1 source server dedicated to the export and 1 target server dedicated to the import of a package? The answer is no.
In order to use Distribution Monitor, you need at least two application servers on the source system and correspondingly at least two application servers on the target system.
For example, say you have application servers A and B on the source system and application servers C and D on the target system.
Then configure the Distribution Monitor properties to include two application servers as source systems and two application servers as target systems. When you execute the Distribution Monitor preparation, it first scans the database servers in the source and target systems, then scans the CI servers in the source and target systems. The packages will then be distributed across the two application servers A and B.
Run the export from application server A for the first fifty packages and, at the same time, import those first fifty packages on application server C.
Run the export from application server B for the remaining packages and, at the same time, import the remaining packages on application server D.
(That is a one-to-one correspondence.)
2. If it is not possible, what are the constraints in fact? - There are no other constraints; however, the Distribution Monitor preparation and checking steps are quite time-consuming.
3. Can we have a scenario where Distribution Monitor is used on the source system in order to use the parallelism benefit and Migration Monitor used on the target system? - The answer is no.
You cannot mix the Distribution Monitor tool on the source system with the Migration Monitor tool on the target system.
You have to use one tool or the other, depending on the size of the database.
If your database is very large, Distribution Monitor is recommended, since each application server can run multiple R3load jobs (say, application server A runs 20 R3load jobs and application server B runs 15).
    Thanks
    APR

  • Can Ironport support 2 different servers within 1 domain?

    Hi All,
    The situation is:
    Our company's Ironport is using firmware AsyncOS 7.6 and currently is connected to Lotus Notes Server.
    However, we are now planning to add 1 more mail server - MS Exchange.
    The questions are:
    1. Is it possible to connect both Notes and Exchange with 1 domain only?
2. If yes, can we set up some filtering so that incoming email can be separated and routed to the designated server?
    3. How can we achieve connecting 2 different servers under 1 domain?
    Please give any other comments if you have. Thanks!!
    Thanks and Regards
    Krav

    Krav,
You should be able to do this. However, a curious question: are you planning to migrate off of Lotus Notes, or is this going to be a permanent solution? Are the mailboxes for both mail servers going to be the same (maybe clustered)?
1. Yes, you can have multiple servers assigned to a domain, by specifying the IP address as an additional entry in SMTP Routes.
2. This may prove to be the big issue. There is no filtering mechanism that can distinguish Lotus Notes-bound mail from Exchange-bound mail. For example, if you set both servers with the same priority in SMTP Routes, they will round-robin, meaning some mail will go to Lotus Notes and other mail will go to Exchange. So if this is just a test, you can possibly use the priority option in SMTP Routes. However, this may also be more of a question of whether you could cluster an Exchange and a Lotus Notes server, which is beyond my understanding.
3. In the SMTP Routes section, click on the domain and add the IP address of the other server. Be mindful that if you keep the priority the same, mail will round-robin between the devices; however, if you set the first device to 0 and the second device to 10, mail will primarily go to the device with priority 0. You will also need to specify the IP address of the second server in your HAT table if you are using the Relaylist.

  • Global Index on several partitions with each partition on different servers

    Hi,
I have a table divided into 4 partitions, each partition on a different server. Currently the indexes are defined per partition. I would like to create a global index that works across all partitions. How could I create a global index that spans all 4 partitions/servers? (My support team tells me this is not possible with different servers, and that it only works for several partitions on 1 physical server. Is that true?)
    Thanks,
    Nicolas

    harry76 wrote:
    Hi,
Are you sure this is an Oracle database? I think SQL Server has this kind of architecture in some cases.
    Not quite - in SQL Server a single instance can control multiple databases and a partitioned object can have different partitions in different databases; but the SQL Server partitioning strategy is always the equivalent of "local partitioned indexes".
    Maybe this system is using partitioned views. It is possible to create clone table structures with disjoint data sets across multiple databases and then create a UNION ALL view of the tables with a predicate on each query block identifying the data in each database. The optimizer can then do "partition elimination" if your query specifies the column(s) used in the defining predicates.
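For reference, a partitioned view of that kind is just a UNION ALL over clone tables, each with a defining predicate; a rough sketch (the table names, database links and partition column are hypothetical):
    create or replace view sales_all as
    select * from sales_q1               -- local table, quarter 1 rows only
    where  qtr = 1
    union all
    select * from sales_q2@server2       -- remote clone table, quarter 2
    where  qtr = 2
    union all
    select * from sales_q3@server3       -- remote clone table, quarter 3
    where  qtr = 3;
A query such as select sum(amount) from sales_all where qtr = 2 then lets the optimizer eliminate the branches (and servers) that cannot match the predicate.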
    Regards
    Jonathan Lewis

  • Multiple Start Activities

    Hello all,
Has anyone succeeded in deploying a process with multiple start activities as described in section 16.4 of the BPEL 1.1 specification? I.e. something like:
<sequence>
  <flow name="flow-1">
    <receive name="receiveInput" partnerLink="client1"
             portType="as:MSApt1" createInstance="yes"
             operation="op1" variable="input1">
      <correlations>
        <correlation set="c" initiate="yes"/>
      </correlations>
    </receive>
    <receive createInstance="yes" name="receive-1"
             partnerLink="client2" portType="as:MSApt2"
             operation="op2" variable="input2">
      <correlations>
        <correlation set="c" initiate="yes"/>
      </correlations>
    </receive>
  </flow>
    I got the following error :
    main:
    [bpelc] bpelc> validating "C:\eclipse\workspace\MultipleStartActivities\MultipleStartActivities.bpel" ...
    [bpelc] BPEL validation failed.
    [bpelc] BPEL source validation failed, the errors are:
    [bpelc]
    [bpelc] [multiple create instance activity]: in line 45 of "C:\eclipse\workspace\MultipleStartActivities\MultipleStartActivities.bpel", Conflicting createInstacne="yes". Instance is already created by another activity.
    [bpelc]      Potential fix: Remove createInstance="yes" attribute from this activity.
    [bpelc]
    [bpelc] [try to initialize an initialized correlation set]: in line 47 of "C:\eclipse\workspace\MultipleStartActivities\MultipleStartActivities.bpel", correlation set "c" is already initialized, cannot initialize it again.
    [bpelc]      Potential fix: Change attribute to initiate="no".
    [bpelc] .
    BUILD FAILED: C:\eclipse\workspace\MultipleStartActivities\build.xml:28: Validation error
    Thanks
    Nicolas.

    Hi Edwin,
I am using Oracle BPEL for one of our solutions. Now, I have a particular use case in which I need multiple entry points for one single BPEL process.
In my requirement, the BPEL process can be initiated by two different sources. In the first case, a third-party application places an XML file at some location; using the file adapter, we pick up the XML file and start the process. After receiving the XML file, we do some verification, confirmation and some calculations. In the second case, our web application initiates the process, which does not require verification and calculations.
    Can you please give me a clue on how to go about this situation?
    Regards,
    Varun
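Not an answer from this thread, but on engines that reject two createInstance receives, the usual workaround is a single pick with one onMessage branch per entry point (the partner links and operations below reuse the names from the original post; the branch bodies are placeholders):
    <pick createInstance="yes">
      <onMessage partnerLink="client1" portType="as:MSApt1"
                 operation="op1" variable="input1">
        <!-- file-adapter entry: verification and calculations go here -->
        <empty/>
      </onMessage>
      <onMessage partnerLink="client2" portType="as:MSApt2"
                 operation="op2" variable="input2">
        <!-- web-application entry: no verification needed -->
        <empty/>
      </onMessage>
    </pick>
Correlation sets can still be initiated inside each branch, exactly as in the flow-based attempt above.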

  • Logical systems assignment when SRM and ECC are on 2 different servers

    Hi SRM guru's,
What is the standard practice for logical system creation and assignment when ECC and SRM are on 2 different servers?
In the SRM server I am not able to see the logical system I created in ECC, and the same applies the other way around.
How do I assign the logical system of SRM in ECC, and vice versa?
Your prompt response is appreciated.
    Regards

    Hi Sai,
Logical system naming/creation does not depend on the servers; a logical system is just a logical representation of a system.
As you are naming logical systems for ECC & SRM, please follow the procedure below, which is what SAP usually recommends:
Logical system: <SID>CLNT<client number>
For example, if you have a client 300 in system DE0 (DE0 is also called the SID), the logical system name will be DE0CLNT300. The same scheme applies to the SRM system.
You can refer to the document below, which explains the creation of logical systems, RFC destinations, etc.:
http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CABFAALEQS/CABFAALEQS.pdf
Note: The servers only play a crucial role when configuring the RFC destinations.
    Regards
    Kathirvel

  • I do not see where to enter IP addresses in the Open VPN setup. Also, how can I set it up so that I can choose different servers in the same way as I can currently choose them with my VPN app but for PPTP?

I think I have it working on my iPhone 5, but I do not see how I can control the exit point that I would like for the VPN. Are all the exit points shown in the VPN settings now going to work with OpenVPN, or do they remain PPTP? If I am reading correctly, they look like they remain PPTP. If I cannot control the exit point for OpenVPN, which exit point is the default in the profile you provided me? I note that OpenVPN Connect does not work with any of the new 64-bit devices like the iPhone 5S, the iPad Air, and the new iPad Mini. Is there any chance that you will update your app so that OpenVPN can be made to work on all iOS devices? That would be nice, particularly since the OpenVPN Connect app does not give me a choice of exit points. Thanks.
Just a quick note to tell you that OpenVPN has updated their app so that it is compatible with 64-bit ARM devices like the iPhone 5S, the iPad Air, and the iPad Mini Retina. That does not resolve the problem of how to easily choose among the various possibilities for the exit server. We need to find an easy way to choose.

    Thank you for trying the new Firefox. I'm sorry that you’re unhappy with the new design.
I understand your frustration and surprise at the removal of these features, but I can't undo these changes. I'm just a support volunteer and I do not work for Mozilla. But you can send any feedback about these changes to http://input.mozilla.org/feedback. Firefox developers collect the data submitted there and present it at the weekly Firefox meeting.
    I recommend you try to adjust to 29 and see if you can't make it work for you before you downgrade to a less secure and soon outdated version of Firefox.
    Here are a few suggestions for restoring the old design. I hope you’ll find one that works for you:
    *Use the [https://addons.mozilla.org/en-US/firefox/addon/classicthemerestorer/ Classic Theme Restorer] to bring back the old design. Learn more here: [[How to make the new Firefox look like the old Firefox]]
    *Use the [https://addons.mozilla.org/en-US/firefox/addon/the-addon-bar/ Add-on Bar Restored] to bring back the add-on bar. Learn more here: [[What happened to the Add-on Bar?]]

  • Setting mail with Cox (or other services with different servers for pop and smtp)

My Cox mail account uses different servers for POP and SMTP (my personal ISP 1&1 does too).
The mail applet on my BlackBerry Curve 8330 with Verizon does not allow defining different servers for POP and SMTP; furthermore, for SSL mail the port is fixed at 995 and cannot be changed, while Cox (and 1&1) want to use 587 or something like that.
    As a result, I can only receive mail with these services.
    I also have yahoo mail, which works fine, so I can send mail with it and it is not a life-and-death situation, but I would like to be able to just reply to email sent to my Cox address.
    I called Verizon and they said BlackBerry provides the mail access through their servers and the applet, so there is nothing they can do.
    Is there a way to set it such that I can not only receive but also send mail through either of these services?
    Thanks in advance,
    Didier
    PS: Other than that, the Curve on Verizon rocks!!! so much better down here than AT&T it's not even funny.

    OK, thank you for the input.
    The problem I have with this solution is like the one I have now using yahoo.
    There are 2 problems:
1) Mailing lists want the mail to come from the subscribed account, so if the Cox account is subscribed I can't contribute from the BlackBerry, and if the BlackBerry account is subscribed I don't get my mail in Outlook. Neither is good for me.
2) People who send me mail at the Cox account and get replies from me from the BlackBerry continue responding to the account that can send from the BlackBerry (not Cox), and from that point on I do not have that mail on the computer.
The issue of having two copies is no big deal; I just delete the mail I do not need. I would rather have two than none.
Really, BlackBerry should modify the email service so that they directly support mail systems like those of Cox and 1&1 - I am sure there are others. They should also allow the use of a port other than 995 for SSL; I have not seen anyone using 995 for SMTP over SSL.
Until recently, I had a BB provided by my employer, and we had a BES, and that worked really well. I would like to emulate as much of that functionality as possible without having to pay somebody another $10 or $20 a month just for the privilege of having an account on a private BES server.
    Anyway, thanks for the exchange and suggestions.
    Didier

  • Mail: email retrieval from 2 different servers

I retrieve email in Mail for my 2 email addresses from 2 different servers (Gmail and 'workspace'). I try to keep the email on the servers to a minimum by deleting messages, but I want all the emails to stay in Mail on my MacBook Pro. When I delete email on the Gmail server and then 'check mail' in Mail, anything no longer on the Gmail server gets deleted in Mail too. Under 'preferences:accounts:advanced' I've selected 'all messages and their attachments' under 'keep messages for offline viewing'. I thought this would preserve all emails in Mail, but it doesn't. What should I do to make sure all the emails I want to stay in Mail stay there, regardless of whether they are still on the servers or not? Thanks.

Try it like this:
    SQL> select name from v$database;
    NAME
    DBDEMO
    SQL> create user testing identified by passwd;
    User created.
    SQL> grant connect,resource to testing;
    Grant succeeded.
    SQL> conn testing/passwd;
    Connected.
    SQL> create table test_tb (id number);
    Table created.
    SQL> insert into test_tb values(123);
    1 row created.
    SQL> /
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from test_tb;
      COUNT(*)
             2
    SQL>
    on dblink created machine:-
    SQL> select name from v$database;
    NAME
    DB2
    SQL> create user nic identified by nic;
    User created.
    SQL> grant connect,resource, create database link to nic;
    Grant succeeded.
    SQL> conn nic/nic;
    Connected.
SQL> create database link test_db_link connect to testing identified by passwd using '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dbdemo)))';
    Database link created.
    SQL> select count(*) from test_tb@test_db_link;
      COUNT(*)
             2
    SQL>
