Virtualised Servers and SQL performance

A client would like to virtualise their servers to improve redundancy.
This includes the SQL Server that SAP B1 runs on.
I would like to know whether this is a good idea with regard to SQL performance (it is said that virtualisation can have a negative impact).
Specs concerning the company:
1.     They would like to set up a Xen layer ("Linux") and then run their virtual machines from it on SAN storage, and use another server for redundancy.
2.     Their SBO company databases total about 30 GB.
3.     They have about 100 users; roughly 30% log on using Citrix.
Can anyone tell me if this would be a good solution for them, or is it a complete no-no in terms of performance?
Thank you

Hi,
We have done exactly what your client wants to do, and I would strongly recommend spending a significant amount of time testing performance. Add-ons load normally (Sachin, you should check whether your license server has two network cards).
However, I am still not convinced that it is the best solution in terms of performance. If I had to do it again, I would probably have two physical SQL servers with full redundancy instead.
Vincent

Similar Messages

  • Can we recover SCCM 2012 R2 site servers and the SQL DB from a Hyper-V or VMware VM snapshot

    Hi Folks,
    Can we recover SCCM 2012 R2 site servers and the SQL DB from a Hyper-V or VMware VM snapshot?
    If yes, are there any challenges, or is there any document available from Microsoft on Hyper-V SCCM VM snapshot recovery?

    I've made it work and it "should" work. However, it's not the best-practice method of site recovery. You should recover using a SQL restore.
    See this good example:
    http://anoopcnair.com/2012/07/01/sccm-configmgr-2012-primary-site-server-and-database-recovery-part-1/
    Note that you should use snapshots only when you are carrying out a risky operation, so that you can revert if the operation fails. They are not a substitute for a robust backup solution.
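    A minimal T-SQL sketch of the restore approach (the database name CM_P01 and the backup path are placeholders; in practice the restore is driven by the ConfigMgr site recovery process described in the article above):
    USE master;
    ALTER DATABASE CM_P01 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    -- restore the site database from the most recent backup of it
    RESTORE DATABASE CM_P01
        FROM DISK = N'\\backupserver\SCCMBackup\CM_P01.bak'
        WITH REPLACE, RECOVERY;
    ALTER DATABASE CM_P01 SET MULTI_USER;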
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • No of columns in a table and SQL performance

    How does table size affect SQL performance?
    I am comparing 2 tables with the same number of rows (54 million rows):
    table1 (columns a, b, c, d, e, f, ...) has 40 columns,
    table2 has 4 columns (a, b, c, d).
    The SQL uses columns a and b.
    The SQL using table2 runs in 1 second.
    The SQL using table1 runs in 30 minutes.
    Can anyone please let me know how the table size and the number of columns in a table affect the performance of SQL?
    Thanks
    jeevan.

    user600431 wrote:
    This is a general question. I just want to compare a table with more columns and a table with fewer columns, both with the same number of rows.
    I am finding that the table with fewer columns performs better than the table with more columns.
    Assuming there are no row chains, will there be any difference in performance based on the number of columns in a table?
    Jeevan,
    the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan it has to read through the whole table segment, so in that case the size of the table matters.
    A table having more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. VARCHAR2 columns, and think of blank (NULL) columns: you can easily end up with a table consisting of a single column taking up more space per row than a table with 200 columns consisting only of VARCHAR2(1) columns.
    Check the DBA/ALL/USER_SEGMENTS view to determine the size of your two table segments. If you gather statistics on the tables then the dictionary will contain information about the average row size.
    If your query is using indexes then the size of the table won't affect the query performance significantly in many cases.
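    For example, a quick check could look like this (a sketch; replace TABLE1/TABLE2 with your actual table names and run it as the owning schema):
    select segment_name, bytes/1024/1024 as size_mb, blocks
    from   user_segments
    where  segment_name in ('TABLE1', 'TABLE2');
    select table_name, num_rows, avg_row_len, blocks
    from   user_tables
    where  table_name in ('TABLE1', 'TABLE2');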
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • OraOledb, Linked Servers and SQL Server 2005 issues

    Some issues I've come across when using SQL Server 2005 (and SQL Server Express 2005), the OraOLEDB provider (10.2.0.1) and a linked Oracle database (8.1.7.4.0 64-bit)
    1) You must set the OraOLEDB.Oracle\AllowInProcess value to 1 to allow the OraOLEDB provider to run in SQL Server's process (see the sketch after this list). Without doing this I receive an 'unspecified error' from the OLE DB provider when attempting to run a query.
    2) When running a 'select *' query across a linked server using the provider, I receive the following error: Msg 9803, Level 16, State 1, Line 1
    Invalid data for type "numeric". I can, however, select all of the columns by name and the query completes (no error). Sometimes the 'select *' query returns a few rows before the error, sometimes it doesn't. The Microsoft Provider for Oracle does not have this problem.
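    For example (a sketch only; the linked server name ORALNK and the schema/table/column names are placeholders, and the provider option can also be changed via the provider's registry key or the provider options dialog in Management Studio):
    -- let the OraOLEDB provider run inside the SQL Server process
    EXEC master.dbo.sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1;
    -- avoid SELECT * across the linked server; name the columns instead
    SELECT COL_A, COL_B
    FROM   ORALNK..SCOTT.SOME_TABLE;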

    Okay... I've got a Win2K3 Std Ed server (x64) running 64-bit SQL Server 2005 Enterprise Edition. I've installed the Oracle 10g 10.2.1 full client and admin tools, added two named services via the NetConfig assistant, and successfully set up (and tested) a connection via the ODBC Administrator to an Oracle database.
    Now... when I try to create a new connection manager in SQL Server 2005 Integration Services, the OLEDB provider for Oracle can't be found, and when I try to manually add an underlying OLEDB connection to the database, SQL Server reports:
    Test connection failed because of an error in initializing provider. The 'OraOLEDB.Oracle.1' provider is not registered on the local machine.
    Does anyone know what I need to do to see my ODBC Server data connections in SQL Server 2005 (64 bit)? I don't have this issue on my 32-bit SQL Server 2005 servers.

  • SCOM 2012 R2 Management Servers and SQL Server Availability Groups

    My team is in the middle of rolling out a brand new install of System Center Operations Manager 2012 R2.
    I have set up a SQL Server Availability group that is spread across multiple subnets and installed the initial management server per
    here. 
    As I have gone through attaching new management servers, I have noticed disturbing behavior.  When the AG fails over to another node, the management servers will lose connectivity unless I either recycle the service or it fails back to the original
    node.  This is not a security issue as I have set up permissions.  It says it cannot connect to the database anymore at all.
    Has anyone else seen this behavior?  Is there a setting I am missing somewhere?
    Thanks for your help.
    Dale

    Hi Dale,
    What about ApplicationIntent\Readonly?
    http://msdn.microsoft.com/en-us/library/hh205662(v=vs.110).aspx
    Still thinking about DNS; maybe this could apply to your case:
    http://www.sqlservercentral.com/Forums/Topic1449216-2799-1.aspx
    Natalya

  • OraOledb for 64-bit, Linked Servers and SQL Server 2005 issues

    Our environment is : SQL Server 2005, Windows Server 2003, 64-bit and 32-bit operating systems.
    Problem on 64-bit operating system box: (32-bit works fine).
    I am trying to access an Oracle 10g database using a linked server from our SQL Server 2005. In the case of number fields I got the following error:
    Invalid data for type "numeric".
    After going through one of the postings in this forum I was able to resolve the problem by converting those column values to char while querying and then converting them back to a numeric type on the SQL Server side.
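    That workaround looks roughly like this (a sketch only; the linked server name ORALNK and the object/column names are placeholders):
    -- pass the conversion through to Oracle, then cast back on the SQL Server side
    SELECT CAST(amount AS NUMERIC(18,2)) AS amount
    FROM   OPENQUERY(ORALNK, 'SELECT TO_CHAR(amount) AS amount FROM some_schema.some_table');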
    But today I ran into another problem. There is a VARCHAR2 column. I was able to retrieve the data yesterday for that column, but today I am getting a blank recordset. If I exclude the column from the query then I get all the rows.
    I am querying against a view and it has a number of columns whose data type is VARCHAR2.
    Again, the problem is on the 64-bit operating system only. We have a 32-bit operating system on which I am able to retrieve the data, including this column. I looked at the data and everything looks OK. No funny characters etc.
    I tried workarounds like using cast, to_char, checking for nulls etc. Nothing works.
    Any help is greatly appreciated. Thanks.

    Did you find a resolution for this? We have a similar problem. We set up a linked server in SQL Server 2005 to Oracle (running on 64-bit Windows). The linked server works and the views I had set up were working, but some new data was added to the Oracle test database I am using and now I get errors on one of the views.
    The error I am getting on the view is "Cannot initialize the data source object of OLE DB provider "OraOLEDB.Oracle" for the linked server".
    If I fine-tune my queries to find the specific table or view that is at issue, then I get the error "inconsistent metadata for a column".

  • [sql performance] inline view , group by , max, join

    Hi, everyone.
    I have a question with regard to a "group by" inline view,
    max value, join, and SQL performance.
    I will give you simple table definitions so that you can
    understand my intention.
    Table A (parent)
    C1
    C2
    C3
    Table B (child)
    C1
    C2
    C3
    C4 number type(sequence number)
    1. c1, c2, c3 are the key columns of table A.
    2. c1, c2, c3, c4 are the key columns of table B.
    3. Table A is the parent table of table B.
    4. The c4 column of table B is a serial number
    (c4 increases from 1 in steps of 1 for every (c1, c2, c3) combination).
    The following is a simple example of the SQL query.
    select .................................
    from table_a,
    (select c1, c2, c3, max(c4)
    from table_b
    group by c1, c2, c3) table_c
    where table_a.c1 = table_c.c1
    and table_a.c2 = table_c.c2
    and table_a.c3 = table_c.c3
    The real query is not as simple as the one above; more tables come after the FROM clause.
    Table A and table B are big tables, each with more than
    100,000,000 rows.
    The response time of this SQL is very slow,
    as everyone can expect.
    Are there any solutions or SQL tips for the slow response time?
    I am considering adding a new column to table B in
    order to identify the row that has the max serial number.
    At this point, I am not sure whether adding a column is a good
    idea in every respect.
    I will be waiting for your advice and every response
    will be appreciated even if it is not the solution.
    Have a good day.
    HO.
    Message was edited by:
    user507290

    For such big sources check that:
    1) You use full scans, hash joins or at least merge joins.
    2) You scan your source data as little as possible; in the best case each necessary table only once (for example, not using an EXISTS clause that effectively scans a whole table via an index scan).
    3) How much time you are spending on sorts and hash joins (either from v$session_longops directly or some tool that visualises this info). If you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and adjust sort_area_size and hash_area_size big enough to do as few sorts on disk as possible.
    4) If you have enough free resources, i.e. a big box, you can probably consider using some parallelism.
    5) If your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain.
    6) Run a trace and check what you are waiting on.
    7) Most probably your problem is IO bound, so perhaps you can do something on the OS side to make IO faster.
    8) If your query is now optimized as much as you can, the disks are running like mad and you are using all the RAM, then probably that is the most you can get out of your box :)
    9) If nothing helps then you can start thinking about precalculations, either using your idea about a derived column or some materialized views (see the sketch after this list).
    10) I hope you have a test box; at least point (9) should be done on it first to see whether it helps.
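    As an illustration of the kind of rewrite hinted at in point 9 (a sketch only, using the simplified columns from your example): if you actually need columns from the row carrying the max serial number, an analytic function avoids joining back to a GROUP BY inline view:
    select *
    from  (select a.c1, a.c2, a.c3, b.c4,
                  max(b.c4) over (partition by b.c1, b.c2, b.c3) as max_c4
           from   table_a a, table_b b
           where  a.c1 = b.c1
           and    a.c2 = b.c2
           and    a.c3 = b.c3)
    where c4 = max_c4;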
    Gints Plivna
    http://www.gplivna.eu

  • Workflows 2013 and SQL Report Services: Configure on Independent Servers or Existing App Servers?

    Currently, my SharePoint 2013 farm has Workflows 2013 and SQL Report Services installed on app servers, along with other applications. Workflows 2013 isn't working correctly (it was installed prior to my coming on board with the organization); I'm assuming it is a configuration issue, but rather than reconfigure, I've read that it is recommended to run it on a standalone server. It was also recommended by a MSFT rep that our Report Services have its own independent server. Currently, Report Services is also installed on an app server that runs other applications on the farm.
    Has anyone had any experience with installing Workflows 2013 and/or SQL Report Services on independent vs. existing application servers?
    Additionally, my existing application servers have 24 GB of RAM (I believe the recommendation for SharePoint production environments is 12 GB). Should I plan for the potential new servers (Workflows and Report Services) to also have 24 GB of RAM? I'm not really sure if it will be needed if they have dedicated tasks.
    Thanks for any input you can provide!

    It entirely depends on load. WFM will work just fine installed on servers in the SharePoint farm. SSRS preferably should go on the end-user facing servers for better performance.
    The questions you're asking are all "it depends" :)
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • SQL Performance and Security

    Help needed here please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have worked my head around this but now I am stuck.
    Here is the questions:
    1. Analyse possible performance problems, and suggest solutions for each of the following transactions against the database
    a) A manager of a project needs to inspect total planned and actual hours spent on a project broken down by activity.
    e.g.
    Project: xxxxxxxxxxxxxx
    Activity Code    Planned    Actual (to date)
    1                20         25
    2                30         30
    3                40         24
    Total            300        200
    Note that actual time spent on an activity must be calculated from the WORK UNIT table.
    b)On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
    2. Security: Justify and implement solutions at the server that meet the following security requirements
    (i) Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
    (ii) Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
    (iii) Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
    Here are the project tables:
    set echo on

    -- Changes
    -- 4.10.00
    -- manager of employee on a project included in the employee on project table
    -- activity table now has compound key, based on ID dependence between project
    -- and activity

    drop table org_unit cascade constraints;
    drop table project cascade constraints;
    drop table employee cascade constraints;
    drop table employee_on_project cascade constraints;
    drop table employee_on_activity cascade constraints;
    drop table activity cascade constraints;
    drop table activity_order cascade constraints;
    drop table work_unit cascade constraints;

    -- org_unit
    -- type - for example in lmu might be FACULTY, or SCHOOL
    CREATE TABLE org_unit (
        ou_id            NUMBER(4)     CONSTRAINT ou_pk PRIMARY KEY,
        ou_name          VARCHAR2(40)  CONSTRAINT ou_name_uq UNIQUE
                                       CONSTRAINT ou_name_nn NOT NULL,
        ou_type          VARCHAR2(30)  CONSTRAINT ou_type_nn NOT NULL,
        ou_parent_org_id NUMBER(4)     CONSTRAINT ou_parent_org_unit_fk
                                       REFERENCES org_unit
    );

    -- project
    CREATE TABLE project (
        proj_id                NUMBER(5)    CONSTRAINT project_pk PRIMARY KEY,
        proj_name              VARCHAR2(40) CONSTRAINT proj_name_uq UNIQUE
                                            CONSTRAINT proj_name_nn NOT NULL,
        proj_budget            NUMBER(8,2)  CONSTRAINT proj_budget_nn NOT NULL,
        proj_ou_id             NUMBER(4)    CONSTRAINT proj_ou_fk REFERENCES org_unit,
        proj_planned_start_dt  DATE,
        proj_planned_finish_dt DATE,
        proj_actual_start_dt   DATE
    );

    -- employee
    CREATE TABLE employee (
        emp_id       NUMBER(6)    CONSTRAINT emp_pk PRIMARY KEY,
        emp_name     VARCHAR2(40) CONSTRAINT emp_name_nn NOT NULL,
        emp_hiredate DATE         CONSTRAINT emp_hiredate_nn NOT NULL,
        ou_id        NUMBER(4)    CONSTRAINT emp_ou_fk REFERENCES org_unit
    );

    -- activity
    -- note each activity is associated with a project
    -- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
    -- USER ACCEPTANCE TESTING ...
    -- each activity has a people budget, in other words an amount to spend on wages
    CREATE TABLE activity (
        act_id               NUMBER(6),
        act_proj_id          NUMBER(5)    CONSTRAINT act_proj_fk REFERENCES project
                                          CONSTRAINT act_proj_id_nn NOT NULL,
        act_name             VARCHAR2(40) CONSTRAINT act_name_nn NOT NULL,
        act_type             VARCHAR2(30) CONSTRAINT act_type_nn NOT NULL,
        act_planned_start_dt DATE,
        act_actual_start_dt  DATE,
        act_planned_end_dt   DATE,
        act_actual_end_dt    DATE,
        act_planned_hours    NUMBER(6)    CONSTRAINT act_planned_hours_nn NOT NULL,
        act_people_budget    NUMBER(8,2)  CONSTRAINT act_people_budget_nn NOT NULL,
        CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
    );

    -- employee on project
    -- when an employee is assigned to a project, an hourly rate is set
    -- remember that the person's manager depends on the project they are on,
    -- the implication being that the manager needs to be assigned to the project
    -- before the 'managed'
    CREATE TABLE employee_on_project (
        ep_emp_id      NUMBER(6)   CONSTRAINT ep_emp_fk REFERENCES employee,
        ep_proj_id     NUMBER(5)   CONSTRAINT ep_proj_fk REFERENCES project,
        ep_hourly_rate NUMBER(5,2) CONSTRAINT ep_hourly_rate_nn NOT NULL,
        ep_mgr_emp_id  NUMBER(6),
        CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
        CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id) REFERENCES employee_on_project
    );

    -- employee on activity
    CREATE TABLE employee_on_activity (
        ea_emp_id        NUMBER(6),
        ea_proj_id       NUMBER(5),
        ea_act_id        NUMBER(6),
        ea_planned_hours NUMBER(3) CONSTRAINT ea_planned_hours_nn NOT NULL,
        CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
        CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
        CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id) REFERENCES employee_on_project
    );

    -- activity order
    -- only need a prior activity. If activity A is followed by activity B then
    -- B is the prior activity of A
    CREATE TABLE activity_order (
        ao_act_id       NUMBER(6),
        ao_proj_id      NUMBER(5),
        ao_prior_act_id NUMBER(6),
        CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
        CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id),
        CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id)
    );

    -- work unit
    -- remember that DATE includes time
    CREATE TABLE work_unit (
        wu_emp_id   NUMBER(5),
        wu_act_id   NUMBER(6),
        wu_proj_id  NUMBER(5),
        wu_start_dt DATE CONSTRAINT wu_start_dt_nn NOT NULL,
        wu_end_dt   DATE CONSTRAINT wu_end_dt_nn NOT NULL,
        CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
        CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
            REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
    );

    /* enter data */
    start ouins
    start empins
    start projins
    start actins
    start aoins
    start epins
    start eains
    start wuins
    start pmselect
    I have the scripts containing ouins and the rest. Email me at [email protected] if you want to have a look at the tables.

    The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access or perform various DML actions.
    Then assign the various users to these roles. The users will be restricted to what the roles are restricted to.
    Look up roles if you are not familiar with them.
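    A minimal sketch of that idea against the PROJECT table (the role, user and view names below are made up for illustration, and the mapping of database users to employees is deliberately simplistic):
    -- requirement (i): only Corporate Strategy may change project data, everyone may read it
    create role corp_strategy_role;
    grant select, insert, update, delete on project to corp_strategy_role;
    grant corp_strategy_role to alice;   -- a member of the Corporate Strategy Department
    grant select on project to public;   -- read access for all users
    -- requirement (ii): expose only non-budget columns for projects the user is assigned to
    create or replace view my_projects as
      select p.proj_id, p.proj_name, p.proj_planned_start_dt,
             p.proj_planned_finish_dt, p.proj_actual_start_dt
      from   project p
             join employee_on_project ep on ep.ep_proj_id = p.proj_id
             join employee e on e.emp_id = ep.ep_emp_id
      where  e.emp_name = user;          -- assumes the DB user name matches emp_name
    grant select on my_projects to public;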

  • SQL Performance and Hyper-V Generation 2

    Hi
    Environment - HP DL580 G5 running Windows Server 2012 R2 Hyper-V.
    There are only 2 VMs on this box and the underlying storage is provided by EqualLogic.
    I have created 2 VMs running Windows Server 2012 R2 with SQL Server 2012 Enterprise SP2 which are identical apart from one being generation 1 (imported from a 2008 R2 server) and the other being a native generation 2.
    My issue is that if I increase the memory to 20 GB (it runs fine on 10 GB) on the Gen 2 server, it will crash under load (DBCC CHECKDB or any other intensive operation). The Gen 1 server has no problems running at 20 GB.
    I suspect a NUMA issue, but before I go down this route I was wondering if anyone else has seen this. VMQs have been disabled on both VMs.

    Hi,
    Thank you for your post.
    From your description, the issue you are facing is: the native generation 2 VM running on the Windows Server 2012 R2 Hyper-V host crashes after its memory is increased to 20 GB. Please let me know if I have misunderstood anything.
    You mentioned that both of your VMs are configured with static memory; I suspect that might be the cause. There are corresponding best practices for improving Hyper-V and VM performance, and for memory the best practice is to use Dynamic Memory.
    Although the Dynamic Memory feature does not directly make virtual machines faster, it allows you to balance the allocation of memory resources dynamically. It is recommended to configure Dynamic Memory parameters for each critical virtual machine running on a Hyper-V server.
    Dynamic Memory adjusts the amount of memory available to a virtual machine based on changes in memory demand, using a memory balloon driver, which helps use memory resources more efficiently.
    You can read this blog for more information.
    Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    Currently, could you please try configuring the VMs with dynamic memory instead of static memory to see if the issue can be solved?
    Please let me know if there is any update. Thanks for your time.
    Best Regards,
    Sophia Sun 
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Service Manager 2012 Agentless Monitoring and SQL Servers More secure distribution

    Having both SCSM 2012 and 2012 in my network, I want to monitor them all using a single account which is securely distributed. The problem is that agentless-monitored servers do not appear in the "more secure" list.
    My Scenario:
    I have SCSM 2012 installed with the following configuration:
    SCSM Main DB -> Monitored with Agent (not a management server)
    SCSM Data Warehouse -> Agentless monitored
    SCSM Workflow and other Management Servers -> Agentless Monitored
    Some Test SCSM 2012 Servers with SQL on the same machine -> Agentless Monitored
    Some Test and Operational SCSM 2012 Servers -> Agentless Monitored
    I want to use a single account to monitor them all. I have set the "Service Manager Database Account" profile (I have yet to understand why Service Manager Database Account is actually a profile!) and in the underlying account set the distribution to "more secure". As stated above, I have many servers with SQL running on the management servers and I cannot configure them in the SQL account.
    Is my scenario wrong?
    Thanks
    YSobhdel

    It seems the SQL MP is monitoring the database on agentless managed computers without my permission!
    All my agentless managed computers are using the default action account to monitor the SQL!
    Any explanation as to why that is so?
    Thanks
    YSobhdel

  • Change the IP Address of Servers that are members of a Windows Failover Cluster and SQL cluster

    Hi all,
    I have a 3-node Windows Server 2012 R2 cluster, and on top of it I have two clustered SQL Server 2012 named instances installed. We need to change the IPs and subnet for the three physical servers incorporated in the Windows cluster. What will be the impact on the Windows and SQL clusters? What extra configuration is needed in both the Windows and SQL clusters to make sure the service will not fail after changing the IPs?

    A similar question was answered
    here.
    The impact is that your Windows and SQL Server failover clustered instances will be offline during configuration so you need to schedule downtime. Plus, your client applications will not be able to find the new IP addresses immediately, depending on your
    DNS TTL value. Work with your network/systems engineers to find out what this value is or validate the existing TTL values on your clustered resources by checking the HostRecordTTL property value. A workaround here is to set the HostRecordTTL property values
    of your SQL Server network name to a very low value, say 1 minute, and then change it back to the current one once all of the applications can connect back to them.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • Is it possible to deploy SharePoint or its Service Applications on: multiple DB-Servers and multiple SQL Instances?

    Hello Forum,
    We have a SharePoint 2013 farm (Enterprise edition) that uses one single SQL Server 2012 (Standard edition). That means all my SharePoint DBs (Config, Admin, Content, and Service App DBs) are hosted and running on one single instance, e.g. Server1\SQLInstance1.
    We have some new requirements to install and configure BI tools such as PerformancePoint Services and PowerPivot. The BI tools require either SQL Server 2012 Enterprise or BI edition, and we do NOT want to upgrade our current Server1\SQLInstance1.
    Instead, we have another separate SQL Server instance which is Enterprise edition, let's name it ServerX\InstanceX, that is running standalone, and we are thinking of using it. My 2 questions are:
    1) Can we use this other separate SQL Server instance (the Enterprise edition one) to create and host the DBs of PerformancePoint Services and PowerPivot?
    2) My second question is similar: can I create the PerformancePoint Services application in my SharePoint farm but, in the Database Server field, fill in the details of the other DB server ServerX\InstanceX, which is the one that is SQL Enterprise edition? Will this work?
    Are there any official Microsoft resources/links stating that it is possible to deploy SharePoint or its service applications on multiple DB servers and multiple SQL instances?

    Thank you Alex and Anil,
    What are the ramifications of that?
    I mean, assuming that I have created such a farm where most of the SharePoint DBs are on the Standard SQL instance while the PerformancePoint service application and others, e.g. PowerPivot and Reporting Services, are deployed and configured on the other Enterprise SQL instance,
    are there any recommendations or concerns that you would like to draw my attention to?

  • MDT Deployment share and SQL Share on different servers

    Hello Technet,
    I would like help figuring out how to deal with my MDT deployment share being on a different server than my SQL share. I am trying to use the MDT Database in MDT 2013 and SQL Server Express 2014. My deployment share is located on a Linux file server share,
    and my MDT server and SQL Express server are located on the same Windows Server 2012 VM. My Bootstrap.ini therefore looks like this:
    [Settings]
    Priority=Default
    [Default]
    SkipBDDWelcome=YES
    DeployRoot=\\192.168.1.10\DeploymentShare$
    UserID=user
    UserPassword=password
    my Customsettings looks like this:
    [Settings]
    Priority=CSettings, CPackages, CApps, CAdmins, CRoles, Locations, LSettings, LPackages, LApps, LAdmins, LRoles, MMSettings, MMPackages, MMApps, MMAdmins, MMRoles, RSettings, RPackages, RApps, RAdmins, TaskSequenceID,Default
    [Default]
    CaptureOS=YES
    [CSettings]
    SQLServer=SQLServer
    Instance=SQLEXPRESS
    Database=MDT
    Netlib=DBNMPNTW
    SQLShare=Logs
    Table=ComputerSettings
    Parameters=UUID, AssetTag, SerialNumber, MacAddress
    ParameterCondition=OR
    Therefore, the problem is that during WinPE boot-up, WinPE connects to the deployment share using the provided credentials just fine, and tries to process CustomSettings. However, it cannot map a network drive to the SQL share, as seen in the following
    errors in the logs:
    1. ERROR - Unable to map a network drive to \\SQLServer\Logs
    2. ZTI Error opening SQL Connection: [DBNETLIB][ConnectionOpen(Connect()).]SQL Server does not exist or access is denied.
    3. Unable to establish database connection using [CAPPS] properties.
    In WinPE, as soon as I hit F8 and use net use \\SQLServer\Logs, the deployment works fine and the properties are read from the database during each section's processing.
    Why is MDT unable to map \\SQLServer\Logs, and why does it require the manual use of net use? Must I move the deployment share to the same server as my SQL Server? Thanks so much.

    It should work; I don't know why it's failing.
    Can you post your full bdd.log file to a public site like OneDrive and share the link?
    There are some inconsistencies with your written summary above.
    Keith Garner - Principal Consultant [owner] -
    http://DeploymentLive.com

  • Developing database views between Oracle and SQL Server tables

    I am on Oracle 10.2. My organization has many SQL Server databases as well, and has now made SQL Server the company standard, so many new databases will be developed in SQL Server. It is of course
    not possible to convert all Oracle databases to SQL Server, so a mixed environment will exist. Two questions:
    1. Is it possible to develop database views in Oracle (10g in my case) which join Oracle tables with tables in SQL Server 2008? If yes, how? I have seen some heterogeneous connectivity setups to connect SQL Server to Oracle, but I am not sure whether it is possible to develop a database view across two databases.
    2. I know this is not a SQL Server forum, but many DBAs know both Oracle and SQL Server. Is it possible to develop views in SQL Server (SQL Server 2008 R2 in my case) which join Oracle 10g and SQL Server 2008 tables? I know that in SQL Server there is a way to set up linked servers, but I do not know whether it is possible to develop such views.
    Thanks a lot for your insight.

    You can create views that join local Oracle tables and remote SQL Server tables. I'm pretty sure you can do the reverse as well but I haven't personally done it.
    However, I would be very concerned about the performance you'd get if you created that sort of view. You'd very frequently end up in a situation where Oracle has to pull all the data in the remote table across the database link in order to apply predicates and join the data locally. That could be disastrous from a performance standpoint.
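    As a rough sketch of what such a view could look like on the Oracle side (assuming a heterogeneous services/gateway database link named mssql_link has already been configured; all object and column names here are placeholders):
    create or replace view combined_orders as
    select o.order_id,
           o.order_date,
           c."CustomerName" as customer_name
    from   orders o,
           "Customers"@mssql_link c        -- remote SQL Server table via the DB link
    where  o.customer_id = c."CustomerID";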
    Justin

    Ok, this is a really basic question, but I dont see it addressed in the Adobe Premiere Classroom in a book I used Capture (F5) to get all the RAW footage from the camera into one big file. I have already set clip markers at the beginning of all the t