Data Centre - Moving between

Hi,
I read somewhere "Business Catalyst does have plans to develop functionality that will allow templates to be transferred across data centres, but that is not currently available"
Is this true, and does anyone have an idea of whether this is on the agenda? E.g. within 1 year, or more like 5+ years away?
We have a private template that we would like on a local data centre, but BC say they can't move it across, and in terms of SEO we will be penalised because it's hosted in another country. If BC are working on a way to move templates then we are happy to wait, but if they aren't, I'd imagine recreating the template on a new site would be a lot of work. Has anyone done this before?
Thank you

Hi there,
I know that the site migration feature was worked on, but never completed. I don't know why the development was postponed or when the feature will be released. You're probably looking at 2+ years.
In any case, since you can transfer almost all assets via FTP, you can get 80% of the work done that way. The remaining 20% will be modules you have to recreate and module IDs you have to update.
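For example, here's a rough Python sketch of that FTP transfer, assuming plain FTP access to both sites; the hostnames, credentials and folder path below are placeholders, and it only mirrors a single flat folder (recurse as needed):

    import ftplib, io

    # Placeholder hosts/credentials -- substitute your own BC FTP details.
    SRC = ("ftp.old-site.example", "user", "password")
    DST = ("ftp.new-site.example", "user", "password")

    def mirror_folder(path):
        """Copy every file in one remote folder to the same folder on another host."""
        src = ftplib.FTP(SRC[0]); src.login(SRC[1], SRC[2]); src.cwd(path)
        dst = ftplib.FTP(DST[0]); dst.login(DST[1], DST[2]); dst.cwd(path)
        for name in src.nlst():
            buf = io.BytesIO()
            try:
                src.retrbinary("RETR " + name, buf.write)
            except ftplib.error_perm:
                continue  # skip directories and unreadable entries
            buf.seek(0)
            dst.storbinary("STOR " + name, buf)
        src.quit(); dst.quit()

    mirror_folder("/templates")

The module IDs are the part this can't automate - those you still have to update by hand after recreating the modules.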
Cheers,
Mario

Similar Messages

  • WAAS Mobile HA between 2 Data Centres

    We have to deploy WAAS Mobile between 2 Data Centres, with remote users connecting to either DC across VPN and then connecting to a local WAAS Mobile server. We are trying to understand the best way to configure this from the available documentation on CCO.
    We are a bit confused about the role of the WAAS Mobile Manager server.
    Is this similar to the role of the Central Manager on normal WAAS (i.e. configuration/management etc.), or does it have any function in selecting the server a client will connect to?
    Regarding HA & load balancing of the connections between the Data Centres, this is how we think we should deploy it:
    Deploy a server farm at each DC and use the latency-based method of farm selection. This way the client should connect to the local server farm, based on which DC the VPN connects to.
    Is this correct? Has anyone deployed WAAS Mobile in this way, or does anyone have any advice?
    Thanks
    Colin


  • FabricPath & Layer-3 VPNs (VRF) between 2 Data Centres

    Hi there,
    I'm looking at deploying FabricPath for layer-2 extension between 2 Data Centres.
    We also have a requirement to provide layer-3 services between the 2 DCs, as in Layer-3 VPN (MPLS VPN).
    The alternative technology was MPLS, with full-blown Layer-3 VPNs, and Layer-2 VPNs through AToM or VPLS.
    My question is, how can we provide VRF support over FabricPath? Can we use 2 routers with VRF-lite configuration in each DC, then dot1q on the trunk through the FabricPath? Or just VRF-lite on the layer-3 terminating routers, with a specific VLAN for interconnecting the different VRFs?
    Thanks,

    FabricPath is L2 and is not related to the L3 technology you want to use. If VRFs are in use, you can just use VLANs, as described in your first scenario: "use 2 routers with VRF-lite configuration in each DC, then dot1q on the trunk through the FabricPath".

  • How to access a Network Share between two servers in same data centre

    I have two dedicated servers (both Windows 2012 Server) hosted in a data centre somewhere. I want to share a folder on one server with the other server, but it's obviously not as straightforward as one might think. My servers are called "Maximus" and "Apprentice".
    On Maximus I shared a folder by right-clicking on it and choosing "Share with... / Specific people", then typed in the name of a local user account which also exists on Apprentice with the same name and password (so each server has a local user account with the same name and password).
    Then, on Apprentice, I was hoping I could access the share (while logged in as the user with whom the folder was shared) by simply typing "\\ipaddress\sharename" into the address bar in File Explorer. Unfortunately it comes back with "Windows can not access [ip address]".
    Now, I do have a website set up on the IP address for Maximus. This is actually the reason I want to create the share: I need the second server for load balancing and need to share the IIS config as well as the website itself. (So I will need two shares eventually, but for now I'm just trying to get one to work.) I don't know if the fact that the IP address is pointing to the website is causing me problems here, or if it's something else.
    Are there any network gurus out there who can tell me what the issue is and how to resolve it?

    I can ping both servers in either direction, but I believe I may have found the problem. Apparently my host is blocking port 445, which Windows wants to use to connect to the share, and they will not unblock it.
    Is there a way to connect to the share through a different port?
    To my knowledge, you cannot change the port. However, you can try disabling your security software for testing. If this fixes the problem, then you need to adjust your security software configuration so that traffic on this port is not blocked or filtered.
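    As a quick way to confirm whether port 445 is reachable at all, you can also run a small test from Apprentice (a minimal sketch; the IP address is a placeholder for Maximus's address):

        import socket

        # Placeholder -- replace with Maximus's IP address.
        HOST, PORT = "203.0.113.10", 445

        # Plain TCP connect to the SMB port; if this times out or is refused,
        # something between the two servers is blocking the traffic.
        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                print("Port 445 is reachable -- SMB is not being blocked.")
        except OSError as e:
            print("Cannot reach port 445:", e)

    If that still fails with the security software disabled, the block is upstream at your host, as you suspected.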
    This posting is provided AS IS with no warranties or guarantees, and confers no rights.
    Ahmed MALEK

  • Moving Data Centre

    Due to a planned closure of a data centre, we are having to relocate one of our IronPort C370 appliances. This is one of two servers within a cluster; the second server is in a different data centre and is therefore not affected. The move will obviously require new IP addresses. Does anyone have a planned procedure or advice on how to move the server without causing any outage?
    I am also concerned about getting a poor SenderBase score on the new IP address.
    Any advice would be greatly appreciated.  Thank you.

    Hey Jonathan,
    Is your cluster set up using hostnames or IPs?
    If it's hostnames, you should be able to do the following (basically sort of like doing an upgrade):
    clusterconfig disconnect machinename
    Move it to the new data center
    Change IPs
    Update firewalls/DNS A records
    Flush DNS caches on your other ESA boxes
    clusterconfig reconnect
    If you have the cluster configured using IPs, you may have to remove the box from the cluster and then re-add it...
    Hope that helps,
    Ken

  • SQL 2012 AlwaysOn Dual Data Centre (an instance in each data centre, each with a secondary in the other)

    Hi, hopefully someone will be able to find my scenario interesting enough to comment on!
    We have two instances of SQL; for this example I will call them 'L' and 'J'. We also have two data centres; for this example I will call them 'D1' and 'D2'. We are attempting to create a new solution and our hardware budget is rather large. The directive from the company is that they want to be able to run either instance from either data centre. Preferably the primary for each will be separated, so for example:
    Instance 'L' will sit in data centre 'D1' with the ability to move to 'D2', and...
    Instance 'J' will sit in data centre 'D2' with the ability to move to 'D1' on request.
    My initial idea was to create a 6-node cluster - 3-nodes in each data centre. Let's name these D1-1, D1-2, D1-3 and D2-1, D2-2, D2-3 to signify which data centre they sit in.
    'L' could then sit on (for example) D1-1, with the option to move to D1-2 (synchronously) or D2-1/D2-2 (asynchronously).
    'J' could sit on D2-3, with D2-2 as a synchronous secondary and D1-3,D1-2 as the asynchronous secondaries.
    Our asynchronous secondaries in this solution are our full DR options; our synchronous secondaries are our DR option without moving to the other data centre site. The synchronous secondaries will be set up as automatic failover partners.
    In theory, that may seem like a good approach. But when I took it to the proof-of-concept stage, we had issues with quorum...
    Because there are three nodes on each side of the fence (3 in each data centre), neither side has the 'majority' (the number of votes required to take control of the cluster). To get around this, we used WSFC with Node and File Share Majority, with the file share sitting in the D1 data centre. Now the D1 data centre has 4 votes in total, and D2 only has 3.
    This would be a great setup if one of our data centres were defined as the 'primary', but the business requirement is to have two primary data centres with the ability to fail over to one another.
    In the proof of concept, I tested the theory by building the example solution and dropping the connection which divides the two data centres. It caused the data centre with the file share to stay online (as it had the majority), but the other data centre lost its availability group listeners. SQL Server stayed online, just not via the AG listeners' names - i.e. we could connect to the instances via their hostnames, rather than the shared 'virtual' names.
    So I guess really I'm wondering: does anyone else have any experience of this type of setup, or any adjustments that could be made to the example solution or the quorum settings in order to produce a nice outcome?

    So if all nodes lost connectivity to the file share, there would be a total of 6 votes visible to each node. Think of people holding up their hands, where every node can see each raised hand. If the link between the two sites then went down as well, each node on each side would only see 3 hands being held up. Since the quorum maximum is 7 votes, the majority a node needs to see is 4. So in that scenario, every node would realise it had lost majority and would take itself offline from the cluster.
    Remember that the quorum maximum (and therefore the majority) never changes *unless* YOU change node weight. Failures just mean there is one less vote that can be cast, but the required majority remains the same.
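    To make the arithmetic concrete, here is the same vote count as a little Python sketch (just an illustration of the node-majority rule, nothing cluster-specific):

        # Total configured votes: 6 nodes + 1 file share witness.
        TOTAL_VOTES = 7
        MAJORITY = TOTAL_VOTES // 2 + 1   # 4 votes needed to keep quorum

        def has_quorum(votes_visible):
            """A partition stays up only if it sees a majority of ALL votes."""
            return votes_visible >= MAJORITY

        print(has_quorum(3 + 1))  # D1 after the link drops: 3 nodes + file share -> True
        print(has_quorum(3))      # D2 after the link drops: 3 nodes only -> False
        print(has_quorum(3))      # either side with the file share also lost -> False

    Note that MAJORITY is computed from the configured total, not from however many votes happen to be reachable - which is exactly why failures don't lower the bar.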
    Thanks for the compliment, by the way - very kind! I am presuming from your tag that you might be based in the UK. If so, and you are ever nearby, make sure you drop by and say hello! I'll be talking at the London SQL UG two weeks from today if you are around.
    Regards,
    Mark Broadbent.

  • Is there any documentation on the BC Data Centres?

    Hi,
    I have been a premium reseller of BC for over two years, but I am struggling to find any documentation about the BC Data Centres. I have not been asked before, but I am launching a delegate registration microsite for Symantec, and as with all larger companies, their procurement process requires evidentiary documentation to be supplied.
    Specifically, they are asking for specifications or documentation on:
    the ISO 27001 certificate and network penetration test information.
    Any help or guidance to this information would be greatly appreciated.
    Cheers
    Rob

    Hi Sidney,
    Thanks for the helpful responses. I have made some progress with my queries and have now received the PCI compliance documentation that BC sent through after your suggested ticket. Moving forward, I think that the shift to AWS will resolve all certification and compliance issues - I imagine this doesn't come up too often with a product that is aimed firmly at SMEs.
    My issue arose because I work for SMEs who in turn work for blue chips. Companies like Symantec, Sony & Canon all have extremely stringent procurement procedures even though they are budget conscious, and the burden of documentation falls down the chain to us, the technical supplier - the national account managers and product managers just set up the deals and promotions, and the legal/financial teams then throttle everyone with paperwork. They have simply been told that any website which contains staff information must reach a very high standard of security and audit compliance, which is understandable but cannot be answered with 'well, it's Adobe... of course they are secure!'.
    I will follow up with the community when I have finished my research and have a result!
    Thanks
    Rob

  • Data centre connectivity options

    Hello
    I am currently investigating a dual data centre design running in active/active mode. The data centres will each have connectivity to our WAN (MPLS) and to the Internet. They will also have dedicated links to each other for site replication etc.
    Having read a few of the Cisco SRNDs, what I am still a little unclear about is whether it is better to connect the two data centres over the dedicated link using layer 2 or layer 3, and what the pros and cons of each are. I would appreciate any experiences (good and bad) that people have had in this area.
    My instinct is to go layer 3, eliminating a potential spanning-tree issue that could affect both data centres, but I am sure there are more issues than this to take into account.
    Many thanks

    I have redundant data centres, and they have been set up as follows for specific reasons.
    (These data centres are not separated by a WAN; if they were, a T3 or better would be required in my case, but I'd opt for a metro-fibre type of solution to provide GB+.)
    Using the 3-tier hierarchical network design (core, distribution, access):
    1) The CORE is L3/routed; we do not want an L2/switched core, for a few reasons. One is to alleviate STP and its inherent problems.
    (The core should be moving packets as fast and as predictably as possible; STP can interrupt this and cause complete packet-forwarding delay or worse. Today's routers can route packets just as fast as switching them, or faster in some cases.)
    2) The distribution layer is switched, with fully meshed GB or greater trunks to both cores. It also provides redundant inter-VLAN routing for all the VLANs controlled in their specific 'distribution blocks'; I have 5 fully redundant distribution blocks with VLAN routing and VLAN load balancing via HSRP.
    (I channel up to 6 GB trunks in a given link.)
    3) The access layer is switched, with fully meshed GB or greater trunks to at least two distribution switches per access switch; one trunk to each core, at least.
    (There is no routing performed at the access layer.)
    Other factors, such as the routing operation, the location and number of distribution switches, administration and speed, affect the design.

  • Dual Data Centres 200 miles apart - L2 trunking/Gigabit, is this OK?

    Hello All,
    We've got 2 data centres, one primary and the other DR, and we've got 2 Gigabit fibre links between the two, which are 200 miles apart.
    To keep the PIX Active/Standby roles across the sites, I'm investigating the idea of linking the 2 data centres with trunks at L2.
    Is this good practice? I know in the pre-Gigabit days this was frowned upon, but these days it would make sense and would make the design much easier - any thoughts appreciated.
    Regards, Tony

    Be careful with spanning tree and L2 links between your primary and DR. If spanning tree were to have issues at your primary data centre, causing an outage, it could also take the secondary down with it, and vice versa.
    --Phil
    Please remember to rate useful posts

  • Replicate File Server from Europe to Singapore data centre

    Hi, we would like to store 1 TB of blob data in Azure.
    We would like to upload the data in London and then have a read-only copy of the data replicated to, ideally, the Singapore Azure data centre.
    Is this possible?
    Thanks

    Hi Jonnly,
    Irrespective of whether you opt for the geo-redundant storage type or the read-access geo-redundant storage type for your storage account, the secondary region is determined by the primary region and cannot be changed.
    You may refer to the following link to see each primary region and its respective secondary region:
    http://msdn.microsoft.com/en-us/library/azure/dn727290.aspx
    However, you can create two separate storage accounts, one in London and one in Singapore, and copy the data between the two storage accounts:
    http://blogs.msdn.com/b/mast/archive/2014/06/28/how-to-copy-files-to-from-azure-storage.aspx
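    If you go the two-accounts route, a minimal sketch of a server-side copy with the Python azure-storage-blob SDK (v12) might look like the following; the connection strings, container and blob names are placeholders, and a private source container would also need a SAS token appended to the source URL:

        from azure.storage.blob import BlobServiceClient

        # Placeholder connection strings for the two storage accounts.
        src_client = BlobServiceClient.from_connection_string("<london-connection-string>")
        dst_client = BlobServiceClient.from_connection_string("<singapore-connection-string>")

        src_blob = src_client.get_blob_client("data", "file1.dat")
        dst_blob = dst_client.get_blob_client("data", "file1.dat")

        # Ask the Singapore account to pull the blob from London server-side;
        # the copy runs asynchronously, so poll its status afterwards.
        dst_blob.start_copy_from_url(src_blob.url)
        print(dst_blob.get_blob_properties().copy.status)

    AzCopy can do the same thing in bulk if you'd rather not write code.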
    Regards,
    Manu Rekhar

  • How to Exchange Data and Events between LabVIEW Executables

    I am having some trouble determining how to design multiple programs that can exchange data or events with each other when compiled into separate executables. I will lay out the design scenario:
    I have two VIs, one called Status and the other Graph. The Status VI displays the serial number and status of each DUT being tested (>50 devices). The Status VI has one timed loop along with a while loop that contains an event structure. If the user clicks on the DUT Status cluster, the event structure needs to pass the serial number to the Graph VI. The Graph VI, when called, fetches the records for the DUT based on the serial number and time frame. This VI is a producer/consumer, so the user can change the time frame of the records displayed on the front-panel graph.
    I have a couple of reasons the VIs need to be separated into independent applications. One is that the underlying database fetches tend to slow the threads down between the two VIs; the other is that they may be distributed onto separate systems (don't consider this in the design criteria).
    I have been researching the available methods for passing the serial number to the Graph VI. My initial idea was to use a command-line argument, but this does not allow the Status VI to pass another serial number to the Graph once it has started (I do not want to allow the user to run multiple Graph applications, because the database query can load down the server).
    Is there a programming method I can implement between the two VIs that will allow me to compile them as two executables, and allow the Status program to repeatedly send an updated serial-number event to the Graph program?
    I have reviewed several methods: Pipes (oglib_pipe), Action Engine, and Shared Variable. I am not sure which method will give me the ability to use an event-driven structure to inform the Graph program when to update the serial number. Any suggestions, tutorials or examples would be greatly appreciated.

    I have used the Action Engine (aka functional global) approach for many years and it works well. Something to think about: if you make the event's datatype a variant, the only code that needs to know the actual structure of the event is the function that responds to it. Hence, a single event can service multiple functions.
    Simply create a cluster containing an enum typedef that lists the functions the event will service, and a variant that carries the function's event data. From anywhere in the code you can fire the event (via the functional global) by selecting the function from the enum and converting the function-specific data to a variant. On the receiving end, the event handler uses the enum to determine which function is to get the data and sends the variant to it. The event handler doesn't know or care what the actual event data is, so you could in theory add new functions without modifying the event handler.
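    LabVIEW is graphical, so I can't paste the actual VI here, but the same enum-plus-variant dispatch pattern looks roughly like this in Python terms (all names are made up for illustration):

        from enum import Enum, auto

        class Function(Enum):
            """The enum typedef: one entry per function the event services."""
            UPDATE_SERIAL = auto()
            CHANGE_TIMEFRAME = auto()

        def handle_event(func, payload):
            """The event handler routes by enum value only; the payload stays
            opaque (the variant) until the target function unpacks it."""
            handlers = {
                Function.UPDATE_SERIAL: lambda p: print("fetch records for", p),
                Function.CHANGE_TIMEFRAME: lambda p: print("redraw graph over", p),
            }
            handlers[func](payload)

        # Fire events from anywhere; the payload's type is the receiver's business.
        handle_event(Function.UPDATE_SERIAL, "SN-0042")
        handle_event(Function.CHANGE_TIMEFRAME, ("2014-01-01", "2014-02-01"))

    Adding a new function is then just a new enum entry plus its handler; nothing that fires events needs to change.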
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."

  • How do you set defaults in Pages such as font and moving between cells in a table?

    How do you set defaults in Pages such as font and moving between cells in a table?

    Default layout and styles can be set by setting up a sheet with your settings and saving it as a user template (File > Save As Template). I don't know of a way to set defaults for moving between cells in a table.

  • SAP Data Centre Shifting

    Dear Experts,
    As of now, our data centre is being shifted to another site. The OS vendor is asking for a downtime of 4 days for the shifting/de-installation and re-installation.
    But our client is not accepting this and is asking us to provide a workaround for this activity, as they require business continuity and their business should not be affected for 4 days.
    All the server racks, tape library, EVA etc. are being shifted in this activity. We don't have a disaster recovery centre to offer the client as an alternative.
    Kindly suggest the actions to be taken to provide a workaround for the client.
    Regards,
    Sandeep

    Hi,
    This process is called a homogeneous system copy. Please go through the link below; you will find all the documentation related to system copy:
    System Copy and Migration
    Thanks,
    Sreeni.

  • Date must be between year 1 and year 9999 - error message

    I am receiving the following error message: "Dates must be between year 1 and year 9999". The highest value in the 'next date' field in the DB is 12/31/9999. The parameters being entered are 01/1900 and 12/9999. The following formula is the cause of the error:
    EvaluateAfter ({@Start Date});
    Global DateVar EndDate;
           Global StringVar  strEndMonth:=Mid({?End Month/Year} ,1, 2) ;
           Global StringVar  strEndYear:=Mid({?End Month/Year} ,4, 4) ;
           Global NumberVar  nNextMonth:=CDbl (strEndMonth) + 1 ;
           Global StringVar  strEndDay:=ToText(Day(DateSerial (CDbl (strEndYear) , nNextMonth, 1-1) ) , 0) ;
           Global StringVar  strEndDate:=Right ("00" + strEndMonth, 2 ) + "/" + Right ("00" + strEndDay, 2) + "/" + strEndYear ;
           Global DateVar EndDate:=CDate(strEndDate) ;
    The bolded section is the part of the formula highlighted in Crystal. I've determined that if the user enters 11/9999 as the end date instead of 12/9999, the report runs fine. Is this a problem with the formula, a limitation in Crystal, or a little of both? I don't fully understand what the formula is doing, but at the end of the year it looks like it counts forward to January and then back to December, and obviously 9999 is the highest year possible, so the attempt to go a month ahead tanks. Is there a change I can make in the formula to avoid this error?

    The error lies in the formula.
    Global NumberVar nNextMonth:=CDbl (strEndMonth) + 1 ;
    When you enter 12/9999, this adds 1 to the month of 12, giving 13.
    Global StringVar strEndDay:=ToText(Day(DateSerial (CDbl (strEndYear) , nNextMonth, 1-1)) , 0) ;
    This calculates the last day of the month (DateSerial with day 1-1, i.e. day 0, returns the last day of the previous month). Since from above the variable nNextMonth is 13, it tries to evaluate DateSerial(9999, 13, 0), which is invalid since there is no 13th month in any year.
    Make this change and it should work fine.
    Global DateVar EndDate;
    Global StringVar strEndMonth:=Mid({?End Month/Year} ,1, 2) ;
    Global StringVar strEndYear:=Mid({?End Month/Year} ,4, 4) ;
    Global NumberVar nNextMonth:=if CDbl (strEndMonth) = 12 then CDbl (strEndMonth) else CDbl (strEndMonth)+ 1 ;
    Global StringVar strEndDay:=ToText(Day(DateSerial (CDbl (strEndYear) , nNextMonth,
    if CDbl (strEndMonth) = 12 then 31 else 1-1)) , 0) ;
    Global StringVar strEndDate:=Right ("00" + strEndMonth, 2 ) + "/"+Right ("00"+strEndDay,2) +"/" +strEndYear ;
    Global DateVar EndDate:=CDate(strEndDate) ;
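    If it helps to sanity-check the logic outside Crystal, here is the same last-day-of-the-month calculation sketched in Python; calendar.monthrange avoids the "13th month" rollover entirely:

        import calendar

        def end_of_month(month, year):
            """Return mm/dd/yyyy for the last day of the given month."""
            # monthrange returns (weekday of the 1st, days in the month), so no
            # "first day of next month minus one day" trick is needed.
            last_day = calendar.monthrange(year, month)[1]
            return "%02d/%02d/%04d" % (month, last_day, year)

        print(end_of_month(12, 9999))  # 12/31/9999 -- no year-10000 overflow
        print(end_of_month(2, 2008))   # 02/29/2008 -- leap years handled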
    Edited by: Sanjay Kodidine on Apr 7, 2009 11:19 AM

  • How to use "Days to keep historical data" & "Maximum time between logs(hh:mm:ss)

    I am using LabVIEW DSC. Values are being logged continuously into Citadel.
    Is it possible to retain just one month of data and delete the older data as fresh data is being logged into Citadel?
    Is it possible to achieve this with the "Days to keep historical data" & "Maximum time between logs (hh:mm:ss)" options in the History menu of the Tag Configuration Editor?

    Yes, 'Days to keep historical data' does what you are looking for. After the specified number of days, the old data gets overwritten with new data. So you always have only the specified number of days' worth of data in Citadel.
    Note: you may sometimes see that old data doesn't get overwritten until a day or so after your setting (depending on how much data is being logged). This is because Citadel logs in "pages" and waits until the page it is currently logging to is full before it starts overwriting the old ones.
    You do not have to use the 'Max time between logs' option for this. That option forces Citadel to log data every so-many hh:mm:ss regardless of whether or not the data has changed. Note that this is NOT a way to "log data on demand": the periodic logging for a particular tag still changes whenever its data changes, so even with this setting all data may not get logged in one shot. Anyway, as I said, you do not need this setting for what you're trying to do.
    Regards,
    Khalid
