Infrastructure Question: setting up a 100 TB database / data center

Hi guys,
I'm looking for your guidance and assistance in setting up this big environment. Most of the numbers below are rough, just to give an idea; the ones marked (correct) are accurate.
The environment is Oracle on Linux, with no VMware.
250,000 transactions/day (correct). For at least 1 year: 260 working days in 2010 * 250,000 = 65,000,000 transactions.
Each transaction will be at most 2 MB in size.
2 MB * 250,000 = 500,000 MB, or roughly 0.5 TB of data arriving at our data center per day.
98% of the activity will be INSERTs only;
the rest will be SELECTs or UPDATEs, with almost no DELETEs at all.
What I'm looking for is some guidance and assistance in taking on this task and proceeding in the right direction.
I will highly appreciate your help and guidance.
Best regards
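The arithmetic above can be sanity-checked with a short script (a sketch using the figures from the post; the 2 MB transaction size is a worst case, and decimal units are assumed):

```python
# Back-of-the-envelope sizing from the figures in the post.
TX_PER_DAY = 250_000      # transactions/day (stated as correct)
TX_SIZE_MB = 2            # worst-case size per transaction
WORKING_DAYS = 260        # working days in 2010

daily_mb = TX_PER_DAY * TX_SIZE_MB        # 500,000 MB/day
daily_tb = daily_mb / 1_000_000           # ~0.5 TB/day (decimal units)
yearly_tx = TX_PER_DAY * WORKING_DAYS     # 65,000,000 transactions/year
yearly_tb = daily_tb * WORKING_DAYS       # ~130 TB/year of raw row data

print(f"{daily_tb:.1f} TB/day, {yearly_tx:,} tx/year, {yearly_tb:.0f} TB/year")
```

Note that this is raw incoming data only; indexes, redo/archive logs, and backups will multiply the actual storage footprint, which is why a ~0.5 TB/day ingest rate lands in 100+ TB territory.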

The setup of such a massive system could itself take close to a year unless you can somehow rent a pre-configured system.
Oracle is currently pushing everything toward cloud computing, and they have a lot of experience in this area.
You could also look at companies with a 'rent a cloud' offering, such as vmware.com, but since it is an Oracle database I would definitely ask Oracle first.
HTH
FJFranken
My Blog: http://managingoracle.blogspot.com

Similar Messages

  • Question about GG if Data Center Fails

    Hi, I came across a question from my team: what happens to GoldenGate replication when a data center failure occurs, or when an automatic switchover to the VCS cluster happens?
    An automatic switchover to the VCS cluster did happen once in our environment: when the NIC card on our primary production server stopped working, the VCS configuration shut down the primary node and automatically switched over to the failover node in the cluster.
    I am also looking into what happens to the ongoing Extract and Pump processes on the source. How will I be able to synchronize the residual data from the source side to the target side if such things happen?
    Any comments or experience will help me.
    Thanks

    All of our filesystems on the primary get failed over to the failover cluster node, just as happens with the database binaries and data files.
    We have tested this and verified that we can start GoldenGate on the failover node after the failover happens.
    Thanks

  • URGENT: QoS Design on Data Center MPLS - MediaNet Question...

    Hello,
    I am posting this in hopes I can get some guidance from anyone who has done this in the field.  We have a large enterprise customer with 21 sites around the world; they have Verizon MPLS and are experiencing QoS-related issues with Video/Voice on their WAN.  We have proposed remediating their network according to the Enterprise QoS SRND 3.3 and the new MediaNet SRND to account for Video and TP QoS (
    http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND_40/QoSCampus_40.html )
    Here is the problem/question that was proposed in our presales meeting and I honestly don't know where to look for an answer... I am not asking for anyone to design a solution for me, just merely point me in the right direction:
    The data center has a ~40 Mb/s MPLS connection (full mesh) into the cloud (Verizon).
    Site A has an 8 Mb/s connection.
    Site B has a 4 Mb/s connection.
    I know that in the service policy on the interfaces at Site A and Site B I can assign "bandwidth xxxx" and use ~95% of the bandwidth for queuing and shaping/policing etc.  I am not concerned with Site A and Site B; those I think I can handle.
    The question posed by the customer was: "How can we ensure at the data center level that the 40 Mb/s MPLS is 'chopped up' so that only 8 Mb/s of the total speed goes to Site A, along with an attached QoS policy designed for that specific site, and likewise ensure only 4 Mb/s goes to Site B with its own attached QoS policy?"
    So I am looking for a way to allocate bandwidth per site on the DC's 40 Mb/s connection into the cloud (so that Site B cannot use more than 4 Mb/s) and attach a MediaNet-specific QoS service policy to each site.  The customer does not have separate MPLS circuits for each site; they all come into the DC on a 40 Mb/s shared Ethernet connection (no VCs or dedicated circuits to the other sites).
    Any thoughts on if this is possible? 
    Thanks!
    Alex

    This is an example I have seen and I hope that is useful to you.
    Site A
    Subnet: 172.16.1.0/24
    Site B
    Subnet: 172.16.2.0/24
    HeadOffice:
    ip access-list extended Site_A
     permit ip any 172.16.1.0 0.0.0.255
    ip access-list extended Site_B
     permit ip any 172.16.2.0 0.0.0.255
    !
    class-map match-any Site_A
     match access-group name Site_A
    class-map match-any Site_B
     match access-group name Site_B
    !
    policy-map To_Spokes
     class Site_A
      shape average 8000000
      ! optional child policy for per-site queuing:
      service-policy Sub_Policy
     class Site_B
      shape average 4000000
      service-policy Sub_Policy
     class class-default
      shape average 28000000
      service-policy Sub_Policy
    !
    interface G0/0
     description To MPLS cloud
     ! note: bandwidth is configured in kb/s, so 40 Mb/s is 40000, not 40000000
     bandwidth 40000
     service-policy output To_Spokes
    interface G0/1
     description To HeadOffice
     bandwidth 40000
     service-policy output To_Spokes
    It would be greatly appreciated if someone could correct or improve this, as I am still learning.
    Please see the netflow graph from one of our routers, which uses a similar policy to the one above.
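    Independent of the IOS syntax, a quick arithmetic cross-check of the example above: in a flat shaping hierarchy like To_Spokes, the sum of the child "shape average" rates (in bps) should not exceed the parent link rate. A short sketch with the same numbers:

```python
# Verify the child shapers in the To_Spokes example fit the 40 Mb/s parent link.
PARENT_BPS = 40_000_000  # 40 Mb/s MPLS handoff at the data center

child_shapers = {
    "Site_A": 8_000_000,          # 8 Mb/s
    "Site_B": 4_000_000,          # 4 Mb/s
    "class-default": 28_000_000,  # remainder for all other sites
}

total = sum(child_shapers.values())
assert total <= PARENT_BPS, f"oversubscribed by {total - PARENT_BPS} bps"
print(f"allocated {total / 1e6:.0f} of {PARENT_BPS / 1e6:.0f} Mb/s")
```

    The same check is worth rerunning whenever a new spoke class is added to the policy map.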

  • Query on DNS setup for Active Directory for a new data center

    I have third-party DNS appliances providing DNS service for Active Directory (Windows 2008 R2), and there are also secondary DNS servers, which are MS DNS servers with a secondary zone configured, for redundancy. I have to set up a new data center and move servers/services to it. In this scenario, can I install a new Microsoft DNS server with a secondary zone and use it as the primary DNS server for all the member servers at the new location? I am aware that this new DNS server will not be able to make any updates to the secondary zone; for that purpose, is there any way to redirect such requests to the DNS appliances in my current data center across the WAN? I am trying to avoid purchasing a new DNS appliance for the new data center and want to know what alternatives I have.

    I'm not entirely sure about your setup, as normally you would use AD-integrated zones for DNS in an AD environment, although there are other options, as you have already found.
    The fact that the zone is a secondary zone in DNS server terms doesn't mean you can't point your clients to it as their primary DNS server; they will quite happily resolve names using a secondary server.
    So as long as your DNS devices are correctly set up to support the additional secondary zone, I see no reason why you couldn't do this.
    Regards,
    Denis Cooper
    MCITP EA - MCT
    Help keep the forums tidy, if this has helped please mark it as an answer
    My Blog

  • Wiring Question for Data Center

    I work in what I would consider to be a small/mid-sized data center. We use two 6513s as the core/distribution for ~25 racks of servers.
    My question is about cabling the servers to the core. Currently long patch cords run between the 6513s and each server. It is functional, but a mess.
    I'm trying to figure out the best way to clean up the mess and make it look professional.
    Most people seem to suggest 2 different ways to accomplish this:
    1) Install switches in each rack and run fiber from the core to the rack. Wire each server to the switch in the rack.
    2) Install 24/48 port patch panels between the core area and the racks.
    I'm wondering what people think of these ideas and if there are any other suggested ways of accomplishing this?
    Andy

    Hi Andy,
    Here's something that we used to do where I worked:
    We had 6509s with three or four 48-port blades servicing roughly 150 to 200 phones each. I had four switches in total, one on each of four floors. So this would be roughly similar to your DC environment, only we're servicing longer horizontal runs and phones, not servers -- but the idea is the same (i.e. high-density cabling issues).
    Lord knows that when you're plugging in 48 cables into one of those blades, it can get pretty crowded. And since we don't yet know how to alter the laws of physics that determine space requirements, we have to search for alternatives.
    Back to my environment: On three of the four floors, we just wired straight from the patch panel (that ran to floor locations) to the switch. Quite a mess when you're running in 48 cables to one blade! However, this is traditional and this is what we did. My cabling guy (very smart fella) suggested something else. At the time I was too chicken to do it on the other floors, but I did agree to try it on one floor. Here's what we did:
    He ran Cat5 (at the time, that was standard) connections in 48 cable bunches from an adjacent wall into the switch. They had RJ-45 connections so that they could plug in, and they were all nice and neat. On the other end, they plugged in to a series of punch down blocks (kind of like you see in a phone room for telephone structured cabling). These, in turn, were cross connected to floor locations on another punch down block that went to the floor locations. Now, whenever we wanted to make a connection live, we simply had to connect the correct CAT5 jumper wire from one punch down block to the other. You never touch the actual ports in the switch. They just stay where they are. All alterations are done on the punch down blocks. This keeps things nice and neat and there's no fiddling with cables in the switch area. Any time you need to put in a new blade, you just harness up 48 more cables (we called them pigtails) and put them in the new blade.
    NOTE: You could do the exact same thing with patch panels instead of punch down blocks, but with higher densities, it's a bit easier to use the blocks and takes up much less space.
    ADVANTAGES:
    * Very neat cable design at the switch side.
    * Never have to squeeze patch cables in and out.
    * Easy to trace cables (but just better to document them and you'll never have to trace them).
    * Makes moves, adds, and changes (particularly adds) very easy.
    DISADVANTAGES:
    * Not sure that you can do it with CAT6.
    * You have to get a punch down tool and actually punch cables (not too bad though after you do a few).
    * You need to make sure that you don't deprecate the rating on the cable by improperly terminating it (i.e. insufficient twists)
    Anyway, I haven't had a need to do this in a while and I no longer work at the same place, but my biggest concern would be if that meets with the CAT6 spec. Not sure about that, but your cabling person could probably tell you.
    I'm not a big fan of decentralizing the switches to remote locations. It can become cumbersome and difficult to manage if you end up with a lot of them. Also, it doesn't scale well and can end up with port waste (i.e. you have 24 servers in one cabinet on one switch and then along comes 25; you now have to buy another 12 or 24 port switch to service the need with either 11/23 ports going to waste -- not good).
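    To put rough numbers on the port-waste point above (the switch size and server counts are illustrative assumptions, not figures from the post):

```python
import math

# Stranded ports in a top-of-rack design: each rack gets enough
# fixed-size switches to cover its servers; every extra port sits idle.
def stranded_ports(servers_per_rack, switch_ports=24):
    switches = math.ceil(servers_per_rack / switch_ports)
    return switches * switch_ports - servers_per_rack

print(stranded_ports(24))  # -> 0: a full switch, nothing wasted
print(stranded_ports(25))  # -> 23: server #25 forces a second switch
```

    The centralized patch-panel approach avoids this granularity problem entirely, since ports are pooled at the core rather than per rack.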
    Good luck. Let us know how you make out. I'd be glad to go in to more detail if the above isn't explained well enough.
    Regards,
    Dave

  • Exchange Data Center Switching Database Not Mounting

    - Primary DC: 2 Exchange servers (all roles installed), with MAPI and replication networks between the two.
    - DR site: 1 server (all roles) + 1 server (mailbox role only), with only a MAPI network.
    - All servers in both sites are members of a DAG called DAG1.
    - All mailbox servers have database copies in a healthy state.
    I shut down the primary data center completely and saw that the mailbox databases do not mount on the DR site.
    What needs to be configured so that the databases switch over and mount automatically when the primary data center is down?

    Hi,
    All DAG members should have the same number of networks (MAPI and replication). Based on your description, the members in the DR site only have a MAPI network.
    If your primary data center fails, you need to switch over to the secondary data center manually.
    You can look at the following article.
    http://technet.microsoft.com/en-us/library/dd351049(v=exchg.141).aspx
    Best regards,
    Belinda Ma
    TechNet Community Support

  • Oracle Database 10g Release 2 - Free Desktop Data Center DVDs

    Hi,
    I registered for the subject Oracle Database 10g Release 2 - Free Desktop Data Center DVDs more than a month ago, but I am still waiting to receive them.
    Raghavan Karoth (Mahamaya Websoft)
    775 Dupont Street
    Toronto, On, M6G1Z5
    Tele:1-514-667-0577

    "10203_vista_w2k8_x86_production_db.zip" had been created and this folder size shows 63 MB Your download is not complete, zipfile is almost 800MB, from the download site:
    10203_vista_w2k8_x86_production_db.zip (797,326,161 bytes)
    You are right, home editions are not supported. That does not necessarily mean, it does not work, but it's your risk.
    Werner

  • Welcome to the Enterprise Data Center Networking Discussion

    Welcome to the Cisco Networking Professionals Connection Network Infrastructure Forum. This conversation will provide you the opportunity to discuss general issues surrounding Enterprise Data Center Networking. We encourage everyone to share their knowledge and start conversations on issues such as Mainframe connectivity, SNA Switching Services, DLSw+, managing SNA/IP and any other topic concerning Enterprise Data Center Networking.
    Remember, just like in the workplace, be courteous to your fellow forum participants. Please refrain from using disparaging or obscene language or posting advertisements.
    We encourage you to tell your fellow networking professionals about the site!
    If you would like us to send them a personal invitation simply send their names and e-mail addresses along with your name to us at [email protected]

    Hi everyone,
    Since the release of SAP NetWeaver 2004s to 'Unrestricted Shipment' as of 6th of June 2006, we have renamed the forum 'SAP NetWeaver 2004s Ramp-Up' to 'BI in SAP NetWeaver 2004s'.
    The forum should continue to address BI issues particular to the SAP NetWeaver 2004s release. Please post general BI, project, etc. questions to the other existing BI forums.
    The SAP NetWeaver BI organisation will also use this forum to communicate and roll out information particular to the SAP NetWeaver 2004s release (in addition to the FAQs and other material on the SAP Service Marketplace and information in other areas of the SDN).
      Cheers
         SAP NetWeaver BI Organisation

  • How can I create a VM in the Brazilian Data Center

    I am working on a proof of concept to use cloud services in Latin America. What I am trying to do is set up an FTP site in the Azure Brazilian data center, so how can I create a VM in the Brazilian data center? I do not see a way to do this through the portal or the management tools.
    This question has been posted before, but none of the answers seem to be satisfactory. My management does not want to spend money on a support contract until I get a simple POC configured in this data center, so the only support mechanism I have is these forums. For this reason I am posting the question again in hopes of receiving a better answer.

    Hi,
    I believe the Brazil regions are available only to customers with billing addresses in Brazil.
    The restriction on subscription location and access is documented at the bottom of this page:
    http://azure.microsoft.com/en-us/regions/#services
    I suggest you contact the Azure support team using the link below to get more details:
    http://www.windowsazure.com/en-us/support/contact/
    Also, you could use Azure PowerShell or the management API to create the VM and set the region to "Brazil South".
    Hope this helps.
    Girish Prajwal

  • Server Requirements - AlwaysOn on a single server? Is clustering needed even with one server in another data center?

    Hello:
    I have been asking these questions in different forums and have received different responses, so I wanted to see whether asking "Microsoft" will give me some good direction. (All in SQL Server 2012, including the OS.)
    Question 1: Does AlwaysOn HAVE to be configured on a WSFC node? What about on a single SQL Server (no clustering)?
    Question 2: What about our mirroring processes configured and running on single servers; do we have to have WSFC installed before we can upgrade them to AlwaysOn?
    Question 3: In case I have WSFC and configure AlwaysOn, can my second or third replica reside on a single SQL Server (no WSFC)? What if I cannot have clustering in a DR data center, or I only have VMs in the DR center?
    Any help will be greatly appreciated.
    Thanks
    Oscar Campanini

    Hi Oscar,
    Please find the answers below.
    Question 1: Does AlwaysOn HAVE to be configured on a WSFC node? What about on a single SQL Server (no clustering)?
    - Yes. Each replica must be on a different node of a WSFC cluster. Without a WSFC cluster you cannot create AlwaysOn availability groups, as they rely on the failover capabilities of the cluster.
    Question 2: What about our mirroring processes configured and running on single servers; do we have to have WSFC installed before we can upgrade them to AlwaysOn?
    - You cannot really upgrade a database mirroring configuration to AlwaysOn; the two are different and work differently. Again, for AlwaysOn each participating replica must be on a WSFC cluster node.
    Question 3: In case I have WSFC and configure AlwaysOn, can my second or third replica reside on a single SQL Server (no WSFC)? What if I cannot have clustering in a DR data center, or I only have VMs in the DR center?
    - No; all replicas have to be on nodes of the same WSFC cluster.
    Note: SQL Server itself does not have to be clustered.
    Consider the following scenario: you need to create AlwaysOn with a 3-node topology, i.e. 1 primary, 1 secondary, and 1 read-only secondary.
    You need all three nodes to be part of a Windows Server Failover Cluster; the clustering is needed only at the Windows level. You can install standalone SQL Servers on all 3 nodes and then configure them as replicas in AlwaysOn.
    Read these links to clear up your questions:
    http://technet.microsoft.com/en-gb/sqlserver/gg490638.aspx
    http://technet.microsoft.com/en-us/library/hh510230(v=SQL.110).aspx
    http://technet.microsoft.com/en-us/library/ff878487(v=sql.110).aspx#ServerInstance
    Note: When I said AlwaysOn, I was referring to Availability Groups.
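    As a footnote to the three-node WSFC topology described above, the cluster's node-majority quorum arithmetic can be sketched generically (this illustrates WSFC voting in general, not SQL Server-specific behavior):

```python
# Node-majority quorum: the cluster stays up while more than half
# of the voting nodes remain reachable.
def votes_needed(voting_nodes):
    return voting_nodes // 2 + 1

def survives(voting_nodes, failed):
    return voting_nodes - failed >= votes_needed(voting_nodes)

print(votes_needed(3))  # -> 2: a 3-node cluster tolerates one node failure
print(survives(3, 1))   # -> True
print(survives(3, 2))   # -> False
```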
    Regards, Ashwin Menon My Blog - http://sqllearnings.com

  • Ask the Expert: Data Center Integrated Systems and Solutions

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about utilizing Cisco data center technology and solutions with subject matter expert Ramses Smeyers. Additionally, Ramses will answer questions about FlexPOD, vBlock, Unified Computing Systems, Nexus 2000/5000, SAP HANA, and VDI.
    Ramses Smeyers is a technical leader in Cisco Technical Services, where he works in the Datacenter Solutions support team. His main job consists of supporting customers to implement and manage Cisco UCS, FlexPod, vBlock, VDI, and VXI infrastructures. He has a very strong background in computing, networking, and storage and has 10+ years of experience deploying enterprise and service provider data center solutions. Relevant certifications include VMware VCDX, Cisco CCIE Voice, CCIE Data Center, and RHCE.
    Remember to use the rating system to let Ramses know if you have received an adequate response.
    Because of the volume expected during this event, Ramses might not be able to answer every question. Remember that you can continue the conversation in the Data Center Community, under the subcommunity Unified Computing, shortly after the event. This event lasts through August 1, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Ramses,
    I have a dozen questions but will try to restrain myself and start with the most important ones :)
    1. Can the cables between the IOM and FI be configured in a port-channel? Let me clarify what I'm trying to achieve: if I have only one chassis with only one B200 M3 blade inside, will the 2208 IOM and FI 6296 allow me to achieve more than 10 Gb/s of throughput between the blade and the Nexus 5k? Of course, we are talking about a clean Ethernet environment here.
         B200M3 --- IOM2208 --- 4 links --- FI6296 --- port-channel (4 links) --- Nexus5548
    2. Is it possible to view/measure throughput for Fibre Channel interfaces?
    3. Here is one about FlexPod: I know that in the case of vBlock there is a company that delivers a fully preconfigured system and offers one universal support point, so customers don't have to call Cisco, VMware, or storage support separately. What I don't know is how this works for FlexPod. Before you answer that you are not a sales guy, let me ask more technical questions: Is FlexPod a Cisco product, a NetApp product, or just a concept developed by the two companies that should be embraced by various Cisco/NetApp partners? As you obviously support data center solutions: if a customer/partner calls you with a FlexPod-related problem, does it matter to you, from the support side, whether you are troubleshooting a fully compliant FlexPod system, or will you provide the same level of support even if the system is customized (not a 100% FlexPod environment)?
    4. When talking about vCenter, can you share your opinion on the following: what is the most important reason to create a cluster, and what is its most important limitation?
    5. I know that NetApp has a feature called Rapid Clones that allows faster cloning than what vCenter offers. Any chance you can compare the two? I remember that the NetApp option should be much faster, but I didn't understand what actually happens during the cloning process, and I'm hoping you can clarify this. Maybe a quick hint here: it seems to me it would be helpful if I could understand the traffic path used in each case. Also, it would be nice to know whether vBlock (i.e. EMC) offers a similar feature and what it is called.
    6. Can I connect a Nexus 2000 to the FI 6xxx?
    7. Does vBlock utilize Fabric Failover? It seems to me it does not, and I would like to hear your opinion why.
    Thanks for providing us this opportunity to talk about this great topic.
    Regards,
    Tenaro

  • Replication between 130 nodes and 1 Data Center

    Hi everyone.
    I have 130 database nodes (Oracle Standard Edition One) separated by large distances, and 1 data center with 3 nodes (Oracle Real Application Clusters 10g R2). The connection between the nodes and the data center is through various ISPs (WAN).
    I have exactly the same database design on the nodes and in the data center.
    The data center is a repository of data for reporting to directors, and it dictates the business rules that guide all the nodes.
    Each node has approximately 15 machines connected via a desktop application.
    In other words: a desktop application with a backend database (the node).
    My idea is that replication does not need to be instant; when a transaction commits on a node, it can be replicated to the data center later. Images would be replicated overnight because they are heavy, approximately 1 MB per image; each image corresponds to one transaction.
    On the other hand, I have to replicate some data from the data center to the nodes: business rules, for example new company names, new persons, new prohibitions, etc.
    My problem is determining the best way to replicate data from the nodes to the data center.
    Could somebody please suggest the best solution?
    Thanks in advance.

    Last I checked, Streams and multi-master replication require enterprise edition databases at both ends, which rules them out for the sort of deployment you're envisioning.
    If a given table will only ever be modified on the nodes or on the master site, never both, you can build everything as read-only materialized views. This would probably require, though, that the server at the data center have 130 copies of each table, 1 per node. For schemas of any size, this obviously gets complicated very quickly. For asynchronous replication to work, you'd need to schedule periodic refreshes, which assumes that you have relatively stable internet connections between the nodes and the data center.
    I guess I would tend to question the utility of having so many nodes. Is it really necessary to have so many? Or could you just beef up the master and have everyone connect directly?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
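    To get a feel for the overnight image replication described in the question, here is a rough per-node transfer-time estimate (the per-node image count, link speed, and protocol-overhead factor are assumptions for illustration only):

```python
# Estimate how long one node's nightly image batch takes to ship
# over a WAN link. All parameters are illustrative assumptions.
def transfer_hours(images_per_night, image_mb=1.0, link_mbps=2.0, overhead=1.3):
    payload_megabits = images_per_night * image_mb * 8 * overhead
    return payload_megabits / link_mbps / 3600

# e.g. 2,000 images per night over a 2 Mb/s link
print(f"{transfer_hours(2000):.1f} h")  # -> 2.9 h
```

    Estimates like this help decide whether the overnight window is long enough for each node, or whether images need compression or a longer replication schedule.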

  • Migrate Standby ASA to Backup Data Center

    Hello Experts,
    We have a backup data center where I am now planning to provide backup internet service (for cases where the internet is down or there is a power outage at the main server room).
    I have a pair of Cisco ASA 5540s, one of which I need to move to the backup data center (BDC). Presently I have an ADSL router at the disaster server room with a static public IP from the ISP.
    Currently, I am publishing all my internal resources through the ASA. Now my question: if I move the standby ASA to the disaster server room, how can I publish the same internal resources through the standby ASA and make it active during downtime of the main server room?
    Can anyone suggest how to achieve this setup? Is this scenario possible?
    Thanks in advance.
    Samir

    Hello,
    I knew it.
    I'll just tell you the story from the beginning; I hope it helps you understand. I appreciate your help.
    Presently, at my main data center I have a leased-line router and then 2 ASA 5540s (failover active/standby).
    I was thinking of moving 1 ASA to the backup disaster server room. In this regard, I asked earlier how I could still achieve active/standby after migrating it to the backup room, and you answered that query.
    Query 2:
    I have a new ADSL service and router with a public static IP at the backup server room, and I have now moved one of my ASAs there.
    How can I keep publishing the internal resources (like access to the internal webserver and RDP connections) over this ADSL service if the main server room is completely down?
    Hope it is clear.
    Thanks

  • Scheduled system maintenance on EU data center - October 26th, 2014

    To ensure the highest levels of performance and reliability, we've scheduled a database server upgrade on our EU data center. To minimize the customer impact, the upgrade is scheduled at the most convenient hours and will take up to 6 hours to complete. During the maintenance procedure, creating and updating content, Partner registration, trial site creation, publishing from Muse, sFTP, APIs, and some site admin sections will not be available. Additionally, all sites on the EU data center will experience 20 minutes of downtime at some point during the maintenance window. Except for the scheduled 20-minute downtime, website front-ends will not be impacted by the maintenance.
    Maintenance schedule:
    Start date and time: Sunday, October 26th, 9:00 AM UTC (check data center times)
    Duration: We are targeting a 6-hour maintenance window
    Customer impact:
    Partner registration, trial site creation, Muse publish, APIs, FTP, and some admin sections will not be available through the entire maintenance window on all data centers
    All websites and services on the EU data center will experience 20 minutes of downtime at some point within the maintenance window
    Creating or updating content on sites located on the EU data center will be unavailable during the maintenance procedure
    For up to date information about system status, check the Business Catalyst System Status page. We apologize for any inconvenience caused by these service interruptions. Please make sure that your customers and team members are made aware of these important updates.
    Thank you for your understanding and support,
    The Adobe Business Catalyst Team

    Congrats to Alan! Only one article, but it's a good one!
     System Center Technical Guru - October 2014  
    Alan do Nascimento Carlos: ALM and IT Operations - Management 360 with System Center Operations Manager in 06 Steps
    Ed Price: "Lots of images. Great job breaking up the steps! Could benefit from a TOC and References. Great article!"
    GO: "Thanks for the only article. great btw. :-)"
    Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • Scheduled system maintenance on US data center - March 15th 2015

    To ensure the highest levels of performance and reliability, we've scheduled a database server upgrade on our US data center. To minimize the customer impact, the upgrade is scheduled at the most convenient hours for the region and will take up to 6 hours to complete. During the maintenance procedure, Partner registration, trial site creation, publishing from Muse, sFTP, APIs, and some site admin sections will not be available for sites on all data centers (including EU and AU). Additionally, during the 6-hour maintenance procedure, all sites on the US data center will experience two 1-minute downtime sessions. Except for the two scheduled downtime sessions, website front-ends will not be impacted by the maintenance.
    Maintenance schedule:
    Start date and time: Sunday, March 15th, 6:00 AM UTC (check data center times)
    Duration: We are targeting a 6-hour maintenance window
    Customer impact:
    Partner registration, trial site creation, Adobe Muse publish, APIs, FTP, and some admin sections will not be available through the entire maintenance window on all data centers
    All websites and services on the US data center will experience two 1-minute downtime sessions at some point within the maintenance window
    For up to date information about system status, check the Business Catalyst System Status page. We apologize for any inconvenience caused by these service interruptions. Please make sure that your customers and team members are made aware of these important updates.
    Thank you for your understanding and support,
    The Adobe Business Catalyst Team

    Congrats to Noah and Mr. X!
     System Center Technical Guru - March 2015  
    Noah Stahl: Make System Center Orchestrator Text Faster than a Teenager using PowerShell and Twilio
    Ed Price: "Wow, I love the breakdown of sections. As Alan wrote in the comments, "Wow! Great article!""
    Mr X
    How to educate your users to regularly reboot their Windows computers
    Ed Price: "I love the table and use of code snippets and images! Great article!"
    Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)
    Answer an interesting question?
    Create a wiki article about it!
