Managing Datasource specification when migrating between environments

Hi,
We have a Data Services 3.2 realtime job that uses many Base match transforms.
These Base match transforms are used as simple candidate selectors which do not perform any matching and fire native SQL queries.
As we are currently migrating from DQXI 11.6 to DSXI 3.2 we were running this job in our development environment.
So the datasource specified in these candidate selection transforms was pointing to our development database.
We are now migrating to our test environment which has a separate database of its own having the same tables.
So, is there a central place where we can update the database details so that the change is reflected in all the candidate selection transforms?
Or do we need to manually drill down into each candidate selection transform and change the datasource specification there before publishing the jobs as real-time services?
In our old DQXI system, each project had an XML file which we migrated between environments, replacing all occurrences of the old datasource with the new one.
Can something similar be done in DSXI ?
Please let me know. Any inputs will be helpful.
Thanks,
Saurav.

You can define multiple configurations for the datastore that you are using in candidate selection, one for each environment (DEV, TEST, PROD, etc.).
Then define a System Configuration for each environment; in a System Configuration you specify which datastore configuration to use.
You can then run the job using the System Configuration of the target environment.
Refer to the Technical Manuals for details on multiple datastore configurations and System Configurations.

Similar Messages

  • OBIEE Application Role Migration between environments in WLS

    Hi,
    Is there a way to migrate Application Roles etc., from one environment like Dev to Prod from WLS. Currently we are manually doing it between environments.
    Thanks for your time and help.

    Hi,
    Can you please try this once:
    Just copy the system-jazn-data.xml file from the following path in your dev environment to your prod environment:
    D:\MWHOME\user_projects\domains\bifoundation_domain\config\fmwconfig
    Then restart the services.
    Please mark if this is correct/helpful.
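    The copy step described above can be scripted so it is repeatable between environments. A minimal Python sketch, assuming you can reach both fmwconfig folders from one machine; the backup step and the function name are my own additions, not part of any Oracle tooling:

```python
import shutil
from pathlib import Path

def migrate_jazn(src_config_dir: str, dst_config_dir: str) -> Path:
    """Copy system-jazn-data.xml from one domain's fmwconfig folder to
    another, keeping a .bak of the destination so it can be rolled back."""
    src = Path(src_config_dir) / "system-jazn-data.xml"
    dst = Path(dst_config_dir) / "system-jazn-data.xml"
    if dst.exists():
        # preserve the target environment's current file before overwriting
        shutil.copy2(dst, dst.with_suffix(".xml.bak"))
    shutil.copy2(src, dst)
    return dst
```

    As noted above, the services still need to be restarted afterwards for the change to take effect.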

  • UDF migration between environments in OIM 11g

    Hello experts,
    We are using OIM 11.1.1.5. We have created many UDF attributes in our Dev environment and are going to migrate those UDFs to our higher environments. What is the best way to do this?
    I heard manual creation is the better way, as UDF export/import in 11g causes several issues. Has anyone succeeded in doing this? Can you share your experience?
    Thanks
    DK

    The only issue I've seen with export and import of user metadata in 11.1.1.5 BP02 is the order might not be the same once they are imported. Other than that, the deployment manager has been successful for our current deployment from one environment to the next.
    -Kevin

  • Best practice for migrating between environments and versions?

    Hi to all,
    we've got a full suite of solutions custom developed in SAP BPC 7.0, SP 7. We'd like to understand whether:
    - there are best practices for copying these applications from one environment to another (another client);
    - there are best practices when the client has a newer version of SAP BPC (they would install 7.5, while we're still stuck on 7.0).
    Thank you very much
    Daniele

    Hi Daniele
    I am not entirely sure what you are asking. Could you please provide additional information?
    Are you looking for best practice recommendations for governance, for example change transports between DEV, QA and PRD in BPC 7.0?
    What is the best method? Server Manager backup and restore, etc.?
    And
    Best Practice recommendations on how to upgrade to a different version of BPC, for example: Upgrading from BPC 7.0 to 7.5 or 10.0 ?
    Kind Regards
    Daniel

  • OBI - Migration between environments

    Does anyone have a best practice document, or even any document, on how to migrate reports in OBI 10.1.3.4 from a dev to a prod environment?
    Can't seem to find anything on the Oracle website.

    If it is only BI Publisher reports, you have to move the C:\OracleBI\xmlp\XMLP folder the first time.
    After that, you only need to migrate the reports you are actually working on.
    During the BI Publisher migration, create all the JDBC connection names the same as in the DEV environment and point them to the QA or PROD instance. That will make it easy for you; otherwise, you will have to change them manually after the migration.
    Thanks

  • HRMS Payroll: Migrating Element Links between Environments/Instances

    Hi,
    I need to migrate element links between environments i.e, I need to move my element links from one instance to another.
    Apart from doing this manually or using dataloader, what are the other options/best practices that I can look at?
    Regards,
    Santhosh Jose

    We've previously done this successfully using Data Pump to process the migration of Element Types, Input Values, Extra Info and Links. Once set up it works very smoothly, but it does require quite a bit of setup, so I'd only advise using it if you're migrating a lot of elements several times.
    Data Pump is just a set of tables and concurrent programs that wrap around Oracle API's for migration of data/setup.
    The setup involves getting data into these tables...
    Key tables are:
    hr_pump_batch_lines
    hr_pump_batch_headers
    hr_pump_batch_line_user_keys
    Using APIs like:
    hrdpp_create_element_type.insert_batch_lines
    hrdpp_create_element_extra_inf.insert_batch_lines
    hrdpp_create_element_link.insert_batch_lines
    hrdpp_create_input_value.insert_batch_lines
    hrdpp_update_input_value.insert_batch_lines
    Conc Program: Data Pump Engine
    There should be plenty of documentation around the net and MoS about this if you want to have a read.
    Alternatively you can call the APIs yourself.
    Or you can have a go with iSetup. We tested this briefly but didn't get too far with it; there were several quirky issues with loading Work Structure data, so we went back to using Data Pump.
    Regards, Jay

  • Migration of ESB between environments

    Is there a way of migrating ESB projects between environments (e.g. Development, Test, Production) in the same way you can use Ant in BPEL?
    The issue I have is that all my endpoints are pointing to my development server and I need to change the links for another environment; the same applies to the DB adapters, as I need to point them to the production database.
    Any links to documentation would be gratefully received.
    cheers
    James

    Have a look in this thread:
    Dealing with changing service wsdl locations  in the ESB
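    Until the endpoints are parameterised, a common stopgap is a bulk search-and-replace over the exported project files before deployment, much like editing the DQXI project XML by hand. A rough Python sketch; the file extensions checked and the host names in the usage are assumptions for illustration, not anything prescribed by the ESB tooling:

```python
from pathlib import Path

def repoint_endpoints(root: str, old_host: str, new_host: str) -> int:
    """Replace a hard-coded endpoint host across exported service files.
    Returns the number of files that were changed."""
    changed = 0
    for f in Path(root).rglob("*"):
        # .wsdl/.xml/.esb is an assumption about which files hold endpoints
        if not f.is_file() or f.suffix.lower() not in {".wsdl", ".xml", ".esb"}:
            continue
        text = f.read_text(encoding="utf-8")
        updated = text.replace(old_host, new_host)
        if updated != text:
            f.write_text(updated, encoding="utf-8")
            changed += 1
    return changed
```

    For example, repoint_endpoints("export_dir", "devhost:8888", "prodhost:8888") before importing the project into the target environment. This is brittle compared to a supported mechanism, so treat it as a last resort.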

  • We have problems with ABAP rules when migrating the InfoSource

    We are having problems migrating some objects from version 3.x to version 7.
    Some standard objects, such as Update Rules, InfoSources and DataSources, have ABAP rules that are not migrated properly.
    We are using the automatic migration method: right-clicking the object, choosing Additional Functions > Create Transformation, and entering the name of the new InfoSource. The same has to be done to migrate the transfer structure. After this we migrated the DataSource and tried to activate all objects, but several errors occurred in the ABAP rules.
    Example: in the new transformation based on Update Rule 0PS_C08, for the key figure 0AMOUNT, the routine shows the following error:
    “E: Field "COMM_STRUCTURE" is unknown. It is neither in one of the specified tables nor defined by a "DATA" statement.”
    This is one example, but the same happens for several transformations with ABAP rules.
    What is the recommendation for the standard objects in this case and in the other cases? Is there a recommendation for Z* objects too?
    Old Routine in Update Rule:
    "PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  *-*
    TABLES: ...
    DATA:   ...
    *$*$ end of global - insert your declaration only before this line   *-*
    FORM compute_data_field
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               RESULT_TABLE STRUCTURE /BI0/V0PS_C08T
      USING    COMM_STRUCTURE LIKE /BIC/CS0CO_OM_NAE_1
               RECORD_NO LIKE SY-TABIX
               RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
               ICUBE_VALUES LIKE /BI0/V0PS_C08T
      CHANGING RETURNCODE LIKE SY-SUBRC
               ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        *-*
      type-pools: PSBW1.
      data: l_psbw1_type_s_int1 type psbw1_type_s_int1.
      data: lt_spread_values type PSBW1_TYPE_T_ACT_SPREAD.
      field-symbols: .
    * fill return table!
        move-corresponding  to RESULT_TABLE.
        check not RESULT_TABLE-amount is initial.
        append RESULT_TABLE.
      endloop.
    * if the returncode is not equal zero, the result will not be updated
      RETURNCODE = 0.
    * if abort is not equal zero, the update process will be canceled
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    New Routine (DTP transformation based on the Update Rule):
    "PROGRAM trans_routine.
          CLASS routine DEFINITION
    CLASS lcl_transform DEFINITION.
      PUBLIC SECTION.
    * Attributes
        DATA:
          p_check_master_data_exist
                TYPE RSODSOCHECKONLY READ-ONLY,
    *-    Instance for getting request runtime attributes;
    *     Available information: refer to methods of
    *     interface 'if_rsbk_request_admintab_view'
          p_r_request
                TYPE REF TO if_rsbk_request_admintab_view READ-ONLY.
      PRIVATE SECTION.
        TYPE-POOLS: rsd, rstr.
    * Rule specific types
    *$*$ begin of global - insert your declaration only below this line  *-*
    ... "insert your code here
    *$*$ end of global - insert your declaration only before this line   *-*
    ENDCLASS.                    "routine DEFINITION
    *$*$ begin of 2nd part global - insert your code only below this line  *
    * end of rule type
    TYPES:
      BEGIN OF tys_TG_1_full,
    * InfoObject: 0CHNGID Change run ID.
        CHNGID           TYPE /BI0/OICHNGID,
    * InfoObject: 0RECORDTP Record type.
        RECORDTP           TYPE /BI0/OIRECORDTP,
    * InfoObject: 0REQUID Request ID.
        REQUID           TYPE /BI0/OIREQUID,
    * InfoObject: 0FISCVARNT Fiscal year variant.
        FISCVARNT           TYPE /BI0/OIFISCVARNT,
    * InfoObject: 0FISCYEAR Fiscal year.
        FISCYEAR           TYPE /BI0/OIFISCYEAR,
    * InfoObject: 0CURRENCY Currency key.
        CURRENCY           TYPE /BI0/OICURRENCY,
    * InfoObject: 0CO_AREA Controlling area.
        CO_AREA           TYPE /BI0/OICO_AREA,
    * InfoObject: 0CURTYPE Currency type.
        CURTYPE           TYPE /BI0/OICURTYPE,
    * InfoObject: 0METYPE Key figure type.
        METYPE           TYPE /BI0/OIMETYPE,
    * InfoObject: 0VALUATION Valuation view.
        VALUATION           TYPE /BI0/OIVALUATION,
    * InfoObject: 0VERSION Version.
        VERSION           TYPE /BI0/OIVERSION,
    * InfoObject: 0VTYPE Value type for reporting.
        VTYPE           TYPE /BI0/OIVTYPE,
    * InfoObject: 0WBS_ELEMT Work breakdown structure element (WBS element).
        WBS_ELEMT           TYPE /BI0/OIWBS_ELEMT,
    * InfoObject: 0COORDER Order number.
        COORDER           TYPE /BI0/OICOORDER,
    * InfoObject: 0PROJECT Project definition.
        PROJECT           TYPE /BI0/OIPROJECT,
    * InfoObject: 0ACTIVITY Network activity.
        ACTIVITY           TYPE /BI0/OIACTIVITY,
    * InfoObject: 0NETWORK Network.
        NETWORK           TYPE /BI0/OINETWORK,
    * InfoObject: 0PROFIT_CTR Profit center.
        PROFIT_CTR           TYPE /BI0/OIPROFIT_CTR,
    * InfoObject: 0COMP_CODE Company code.
        COMP_CODE           TYPE /BI0/OICOMP_CODE,
    * InfoObject: 0BUS_AREA Business area.
        BUS_AREA           TYPE /BI0/OIBUS_AREA,
    * InfoObject: 0ACTY_ELEMT Network activity element.
        ACTY_ELEMT           TYPE /BI0/OIACTY_ELEMT,
    * InfoObject: 0STATUSSYS0 System status.
        STATUSSYS0           TYPE /BI0/OISTATUSSYS0,
    * InfoObject: 0PS_OBJ PS object type.
        PS_OBJ           TYPE /BI0/OIPS_OBJ,
    * InfoObject: 0VTSTAT Statistics indicator for value type.
        VTSTAT           TYPE /BI0/OIVTSTAT,
    * InfoObject: 0AMOUNT Amount.
        AMOUNT           TYPE /BI0/OIAMOUNT,
    * Field: RECORD Data record number.
        RECORD           TYPE RSARECORD,
      END   OF tys_TG_1_full.
    * Additional declaration for update rule interface
      DATA:
        MONITOR       type standard table of rsmonitor  WITH HEADER LINE,
        MONITOR_RECNO type standard table of rsmonitors WITH HEADER LINE,
        RECORD_NO     LIKE SY-TABIX,
        RECORD_ALL    LIKE SY-TABIX,
        SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS.
    * global definitions from update rules
    TABLES: ...
    DATA:   ...
    FORM routine_0001
      CHANGING
        RETURNCODE     LIKE sy-subrc
        ABORT          LIKE sy-subrc
      RAISING
        cx_sy_arithmetic_error
        cx_sy_conversion_error.
    * init variables
    * not supported
    *     icube_values = g.
    *     CLEAR result_table. REFRESH result_table.
      type-pools: PSBW1.
      data: l_psbw1_type_s_int1 type psbw1_type_s_int1.
      data: lt_spread_values type PSBW1_TYPE_T_ACT_SPREAD.
      field-symbols: .
    * fill return table!
        move-corresponding  to RESULT_TABLE.
        check not RESULT_TABLE-amount is initial.
        append RESULT_TABLE.
      endloop.
    * if the returncode is not equal zero, the result will not be updated
      RETURNCODE = 0.
    * if abort is not equal zero, the update process will be canceled
      ABORT = 0.
    ENDFORM.                    "routine_0001
    *$*$ end of 2nd part global - insert your code only before this line   *
    * CLASS routine IMPLEMENTATION
    CLASS lcl_transform IMPLEMENTATION.
    *$*$ begin of routine - insert your code only below this line        *-*
      Data:
        l_subrc          type sy-tabix,
        l_abort          type sy-tabix,
        ls_monitor       TYPE rsmonitor,
        ls_monitor_recno TYPE rsmonitors.
      REFRESH:
        MONITOR.
    * Runtime attributes
        SOURCE_SYSTEM  = p_r_request->get_logsys( ).
    * Migrated update rule call
      Perform routine_0001
      CHANGING
        l_subrc
        l_abort.
    *-- Convert Messages in Transformation format
        LOOP AT MONITOR INTO ls_monitor.
          move-CORRESPONDING ls_monitor to MONITOR_REC.
          append monitor_rec to MONITOR.
        ENDLOOP.
        IF l_subrc <> 0.
          RAISE EXCEPTION TYPE cx_rsrout_skip_val.
        ENDIF.
        IF l_abort <> 0.
          RAISE EXCEPTION TYPE CX_RSROUT_ABORT.
        ENDIF.
    *$*$ end of routine - insert your code only before this line         *-*
      ENDMETHOD.                    "compute_0AMOUNT
    * Method invert_0AMOUNT
    * This subroutine needs to be implemented only for direct access
    * (for better performance) and for the Report/Report Interface
    * (drill through).
    * The inverse routine should transform a projection and
    * a selection for the target to a projection and a selection
    * for the source, respectively.
    * If the implementation remains empty, all fields are filled and
    * all values are selected.
      METHOD invert_0AMOUNT.
    *$*$ begin of inverse routine - insert your code only below this line*-*
    ... "insert your code here
    *$*$ end of inverse routine - insert your code only before this line *-*
      ENDMETHOD.                    "invert_0AMOUNT
    Please, HELP!!!!
    Thanks,
    Mateus.

    Hi,
    I checked the code, and as far as I can see you're using return tables. This feature is not implemented in transformations yet! You have to find a workaround, e.g. code in the start or end routine that appends the data.
    In general you have to replace COMM_STRUCTURE and ICUBE_VALUES with new class attributes/variables.
    Which SP are you currently on?
    Regards,
    Juergen

  • Problem when migrate MS Exchange to virtual machine

    Hi,
    I have an SMTP node defined in my Solution Manager which sends mails to an external MS Exchange host without problems.
    There is a project to migrate Exchange to a virtual machine. When I modify the SCOT configuration to send mails to the new virtual host, I get a connection error message.
    I checked the Exchange relay for my SAP machine and it looks right. There are no problems with network connections between the SAP system host and the Exchange host.
    Is any special configuration or adaptation necessary in SAP when migrating the Exchange server to a virtual machine?
    Thanks for all.

    Hi Pablo
    Please check if the following links help you:
    http://virtualmachine.searchvmware.com/virtual/kw;VMwaremigrationandimplementation/VMwaremigrationandimplementation/vmware.htm
    and
    http://virtualmachine.searchvmware.com/virtual/kw;VMwareMigration/VMwareMigration/vmware.htm
    and
    http://virtualizationresources.searchservervirtualization.com/virtualization/kw;VirtualMachineMigration/VirtualMachineMigration/virtualization.htm
    I hope this helps
    Regards
    Chen

  • Making event scripts generic between environments

    At present we have a couple of FDM event scripts which construct some MaxL statements and then execute them against the Essbase database (to clear data before loading). The problem we have is that the MaxL scripts require hard-coded usernames, passwords and servernames (to log in to ESSMSH). When migrating the scripts between environments, these scripts will have to be manually updated each time for the given environment to which they are being migrated.
    Is there a more elegant solution to this? One idea would be to have the values stored in a single plain text file (in a password-protected share) which would then be accessible by any of the various scripts in a given environment. But I would be interested to learn if there are alternatives to this.

    If you migrate environments, you are going to have to do some configuration pretty much any way you go about it. The question is what is the easiest/most straightforward so that it doesn't get missed during a migration, IMHO.
    I would go with a well defined/documented "global" config file that contains all of the information that may change. At a minimum, usernames and passwords should be obfuscated/encrypted; however, it would be just as easy to encrypt the whole file. Realize that the decryption routines would be in your script code, so this isn't 100% secure; however, it would prevent the casual user from quickly seeing usernames/passwords, etc. Furthermore, if you encrypted the whole file, they may not even realize there is a username/password in the file.
    In my mind this is the way to go...
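    As a concrete illustration of the global config file idea: FDM event scripts are VBScript, but the approach is language-neutral, so here is a Python sketch. The section and key names are invented for the example, and base64 is only obfuscation to keep credentials out of casual view, not real encryption, exactly as the caveat above says:

```python
import base64
import configparser

def load_env_config(path: str) -> dict:
    """Read one environment's Essbase settings from an INI-style file.
    The password is stored base64-obfuscated rather than in the clear."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    sect = cfg["essbase"]
    return {
        "server": sect["server"],
        "user": sect["user"],
        "password": base64.b64decode(sect["password_b64"]).decode(),
    }

def maxl_login(conn: dict) -> str:
    """Build the MaxL login statement from the loaded settings, so no
    credentials are hard-coded in the event script itself."""
    return "login {user} identified by {password} on {server};".format(**conn)
```

    Each environment then gets its own copy of the file with its own values, and the scripts themselves never change during a migration.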
    Some other thoughts though, depending on what exactly you want to do....
    Option 2
    Username / Password / Server could be some variation on the server name so that the script always knows how to generate them. The negative side is that you would always need to create a new user / password anytime server changes, so you are still not escaping manual labor. (Though maybe you could use an API call to create it, but then you probably need a username/password to create the user... AAaaaahhhhhhhhh)
    While neat, this seems like it would suffer from James Bond syndrome. I'm going to kill you now, Mr. Bond, but first I'm going to tell you all my intricate plans for world domination, put you in some ridiculous trap with a semi-obvious flaw, AND leave so that you have time to work on your escape......
    Option 3
    Make a Web Service and then have your scripts make a Web Call (i.e. http://www.mycompany.com/GetConfigInfo.asp?secretcode=12341234123) which would return all the info you need. (username, password, servers, etc). AS LONG AS the webservice machine doesn't move around, this might be a decent way of doing it. Anyone sniffing around the local machines would find the config info; however, anyone looking at the scripts would see the call and could just open it in a web browser. You could of course secure that by looking at the source of the request and if it's not an approved server, not return the requested info; however, now you are getting into more maintenance work........
    The time spent making this and potentially maintaining it probably isn't worth it.

  • Live Migration between two WS2012 R2 Clusters with SCVMM 2012 R2 creates multiple objects on Cluster

    Hi,
    I'm seeing an issue when migrating VMs between two 2012 R2 Hyper-V clusters, using VMM 2012 R2, where storage is provided by a 4-node Scale-Out File Server cluster that the two clusters share.
    A migration between the two clusters is successful and the VM is operational, but I'm left with two roles added to the cluster the VM has moved to instead of the expected one.
    For example: Say I have a VM that was created on cluster A with SCVMM, resulting in a name of : "SCVMM Test-01 Resources"
    I then do a live migration to Cluster B which has access to the same storage and then I end up with two new roles instead of one.
    "SCVMM abw-app-fl-01 Resources" and "abw-app-fl-01"
    The "SCVMM abw-app-fl-01 Resources" is left in an unknown state and "abw-app-fl-01" is operational.
    I can safely delete "SCVMM abw-app-fl-01 Resources" and everything still works but it looks like something is failing during the process.
    Has anyone else seen this?
    I'll probably have one of my guys open a support ticket in the new year but was wondering if anyone else is seeing this.
    Kind regards,
    Jas :)

    In my case the VMs were created in VMM in my one and only Hyper-V cluster (which was created and is managed by VMM).
    All highly available VMs have an FCM role named "SCVMM vmname", where vmname is the name of the VM in VMM. On top of that a lot of VMs, but not all, have a second role named just vmname. Lots of names in that sentence.
    All VMs that have duplicates are using the role named vmname.
    I thought it had to do with whether a VM had been migrated, so I took one that never had been migrated and migrated it. It did not get a duplicate.
    Is there any progress on this?

  • Graphic distortion when switching between external and built-in display (rMBP)

    Recently (within the past two weeks or so) I've noticed a strange issue when switching between my external display (Thunderbolt) and the built-in display on my 15" rMBP. The issue seems to be specific to Photoshop CS6.
    Typically, I'll have a PS document open on my external display and I'll wind up taking my machine to another location. If I close the PS document while on the built-in display and then re-open it, I get all kinds of distortion and pixellation. Closing and reopening PS, restarting, logging on/off; none of it seems to work. It seems like the issue is with the PSD itself, but it doesn't make any sense to me. Any ideas? Screenshot: http://i.imgur.com/iCipSc7.jpg

    Do you have an Intel GPU as well as another graphics card? You may be viewing the document on different GPUs. PS does not like multiple GPUs.

  • How do I manage Lightroom photos when using 2 computers, keeping all edits made on either one?

    Based on http://forums.adobe.com/thread/1308132?tstart=0 I decided to post each question separately:
    Hello, I'm quite interested in buying Lightroom 5.2. I tried the RC, which has run out now. Yet I have several questions that I can't really find good, conclusive answers to, and that I'd like answered before buying LR. Please don't answer with assumptions ("maybe like this or that"), since I don't want to start my whole workflow and then realize that I have to change everything around; please answer only if you know for sure that something works and you are, preferably, using that method too.
    This is the biggest question, where I mainly want a conclusive answer: how do I manage Lightroom photos when using 2 computers, keeping all edits made on either one of them and using the same photos for editing? I won't use DNG. Details: I mainly use my older MacBook Pro, but would like to be able to use my PC as it's way better (specs: i5 2500K, 16GB RAM, SSD, USB3, nVidia GTX 560 Ti, etc.). I have 2 external HDs that I could use, one for backup and one for the actual photos/edits. I'll probably need to use them, as my internal HDs are quite full and I can't just delete stuff or move it to an (Developer programs, Lossless music, etc.).
    Based on this, how do I back up the whole thing, e.g. the Photos folder (all photos and edits, and preferably presets too)?

    I believe it should be possible to work cross-platform without having to relink files each time, or without having to keep exporting/importing the catalog, by keeping the single catalog and the image library on the one external drive which is then switched between systems as needed.
    Obvious first requirement is an external drive that is formatted in such a way (e.g. FAT32) that it can be used on both platforms in read/write mode. Given that, if the catalog AND the images parent folder are both established at the same level in one overall parent folder, then it should be possible to take advantage of Lightroom's ability to use relative paths rather than absolute paths to detect the images, no matter if the drive is named (Mac) or lettered (PC). This is how "Export as Catalog" works, i.e. it creates a "package", aka a parent folder, containing the catalog and a replica of the exported images folder hierarchy alongside the catalog. Take that "package" to another system (same OS or not) and "it just works" even if the drive letter is different or the OS is different... because the relative path from catalog to images is still the same.
    I haven't tested this cross-platform (though I have between different PC systems with different drive letters) so for me it's still just a theory, but there may be others who have done this successfully.
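    The relative-path theory can be sanity-checked without Lightroom at all. A small Python sketch (the folder names are invented for the example) shows that the same stored relative path resolves correctly whether the parent folder sits under a Windows drive letter or a Mac volume:

```python
import ntpath
import posixpath

def resolve_images(flavour, catalog_file: str, rel_to_images: str) -> str:
    """Resolve the images folder from the catalog's own location using a
    relative path; `flavour` is ntpath (Windows) or posixpath (Mac)."""
    return flavour.normpath(
        flavour.join(flavour.dirname(catalog_file), rel_to_images)
    )
```

    Calling resolve_images(ntpath, "E:/LR/Catalog/main.lrcat", "../Images") and resolve_images(posixpath, "/Volumes/LRDisk/LR/Catalog/main.lrcat", "../Images") both land on the Images folder next to the Catalog folder, regardless of the drive letter or volume name, which is exactly why the "one parent folder" layout travels between systems.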

  • I'm a Dropbox user and I migrate between my iMac and a Toshiba notebook.  Is there Apple software that will work on the notebook allowing me to open Pages, Numbers or Keynote documents that I've placed in Dropbox?

    I'm a Dropbox user and I migrate between my iMac and a Toshiba notebook.  Anyone know if there is Apple software that will work on the Toshiba that will allow me to open Pages, Numbers or Keynote documents when I access them in Dropbox? 
    Maybe I should just always export from Pages into Word  (and likewise for the other types) before I upload to Dropbox?  I hope not!!

    If you don't want to export files and consider the Microsoft formats to be your basic working format, then you need to consider either using MsOffice for Mac or LibreOffice instead of the iWork suite.
    You could always run a virtualised OSX on the Toshiba, but I don't think Apple will appreciate me telling you how to do that.
    Peter

  • Server 2012 R2 Hyper-V Cluster, VM blue screens after migration between nodes.

    I currently have a two-node Server 2012 R2 Hyper-V cluster (fully patched) with a Windows Server 2012 R2 iSCSI target.
    The VMs run fine all day long, but when I try to do a live/quick migration, the VM blue screens after about 20 minutes. The blue screen reports a "Critical_Structure_Corruption".
    I'm beginning to think it might be down to the CPUs, as one system has an E5-2640 v2 and the other has an E5-2670 v3. Should I be able to migrate between these two systems with these types of CPUs?
    Tim

    Sorry Tim, is that all 50 blue screening if live migrated?
    Are they all on the latest Integration Services? Does a cluster validation complete successfully? Are the hosts patched to the same level?
    The fact that if you power them off, migrate them, and they boot fine does point to a processor incompatibility, where the memory BIN file is not accepted on the new host.
    A bit of a long shot, but the only other thing I can think of off the top of my head, if the compatibility option is checked, is checking the location of the BIN file while the VM is running to make sure it's in the same place as the VHD/VHDX in the CSV storage where the VM is located, and not somewhere on the local host like C:\ProgramData\..., which would stop it being migrated to the new host when the VM is live migrated.
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
