ASE 12.5 and Replication Server 15.1 - MS SQL Server

Our company has Sybase ASE version 12.5 and Sybase Replication Server 15.1. We would like to replicate data to and from Microsoft SQL Server. How can this be done with our current environment?

NOTE: I haven't worked with Sybase RepServer and MSSQL (as source or target), so fwiw ...
I looked through the various release bulletins for RepServer Options (15.1, 15.7.1 SP100, 15.7.1 SP120) and all I can find are references to MSSQL 2008/2008R2 ... with earlier versions of RS (e.g., 15.1) not supporting MSSQL 2008.
The compatibility matrix tool (http://certification.sybase.com/ucr/search.do) also does not show anything higher than 2008/2008R2, with 2003 the highest supported platform for RS 15.1.
If you're talking about 2012 as a target database you may be safe, as long as 2012 can work with the drivers that come with RS 15.1. As for using 2012 as a source ... *shrug* ... I wouldn't even want to guess at this point.
Sorry, I can't tell from the various documents whether Sybase does not support MSSQL 2012, or whether the Sybase documentation is woefully out of date.
I would suggest contacting either Sybase tech support or your Sybase sales rep ... or waiting for someone from Sybase/RepServer engineering/support to jump on this thread.

Similar Messages

  • Database Mirroring and Replication in SQL Server 2008 R2

    I have configured mirroring and replication between 4 servers (A, B, C, D): mirroring between A and B and between C and D, and replication between A and C. The configuration was a success, and I was able to test replication (B to C) after failing the mirrored
    databases over from A to B. Replication works fine after the mirroring failover, but I am not able to check its status in the Replication Monitor. When I insert into a replicated table on B, the row is reflected on C, so replication is
    working fine.
    Any thoughts on how I can view the status of replication from the mirrored server? I tried adding the publisher in the monitor, but no luck. If I check the snapshot agent status, it says it could not retrieve the info; same with the log reader agent status.
    Any suggestions on this, please.
    Thanks, Siri

    For example, in your case...
    Server A is the principal and Server B is the mirror, with either manual or automatic failover.
    Server A is replicated to Server B (A is the publisher and B is the subscriber).
    On Server A, a database named Test_Mirror_Replication is configured for both mirroring and replication.
    Now you have failed the database 'Test_Mirror_Replication' over from Server A to Server B.
    After the failover, Server A will act as the mirror for 'Test_Mirror_Replication' and Server B will act as the principal for 'Test_Mirror_Replication'.
    I hope my understanding is correct?
    If yes, have you tried monitoring the replication after registering the current principal (publisher) database SQL instance name in the monitor, rather than the old SQL instance name from before the mirroring
    role change or failover?
    In other words, are you trying with the mirror database server name?
    Raju Rasagounder MSSQL DBA

  • Unable to connect HANA with Sybase Replication Server and I am not getting ECH - please help

    Unable to connect HANA with Sybase Replication Server and I am not getting ECH. Please help.

    Please don't necrobump/hijack threads: https://wiki.archlinux.org/index.php/Fo … bumping.22
    https://wiki.archlinux.org/index.php/Fo … _hijacking
    Closing

  • Sybase replication server on HA / DR

    Hi Experts,
    We are in the process of setting up a DR environment for Sybase ASE using the replication server. The primary site has HA configured.
    My doubt is: when we install the replication server at the standby site, which host needs to be specified as the "primary system hostname" - the hostname of the CI or of the DB server?
    Additionally, do we need to install the replication server on both nodes of the HA cluster? And how would the DR agent configuration work in case of a DB failover to the CI node?
    Unfortunately, there is very little documentation available online on how to set up the DR environment, apart from SAP note 1891560.
    Would appreciate your invaluable inputs on this.
    Thanks in anticipation.
    Regards,
    Varun

    Hi Varun,
    Sorry - I don't have any experience with the IBM solution.  So I have comments, but not a solution.
    The HADR Solution will work well in HA / cluster systems where the IP address is dynamically re-assigned to the active node. This allows the server configurations to point to just 'one' IP address, and the cluster software (examples: MS Cluster, Veritas, HP Service Guard) assigns that IP to whichever node is active.
    Using the IBM solution (based on a quick Google search), it appears we rely on the Open Server HA capability to include both addresses in the interfaces file under a single server name, so that both addresses can be attempted when making a connection.
    Unfortunately, the configurations created by the DR Agent do not support this type of configuration using interfaces files. Most server addresses are assigned a single IP address, and most addresses are assigned directly (not pointing to an interfaces file entry).
    As an example, when the HADR nodes are configured in ASE, you define the other nodes in the HADR group with a command like the following:
    sp_hadr_admin addserver, <node_name>,<server_name>
    To use an interfaces file name for the server name, the command might look like:
    sp_hadr_admin addserver, 'D01_Site1', 'D01'
    But the configurations created by the DR Agent use a host:port syntax instead of an interfaces file server name, like:
    sp_hadr_admin addserver, 'D01_Site1', 'host1.my.domain.corp:5001'
    The point I am trying to make is that even if we edit the interfaces files to use the additional HA addresses, other configurations like the one above would need to be discovered and changed as well.
    Unfortunately, even if all this reconfiguration work is accomplished successfully, the DR Agent itself does not use interfaces files and has no support for specifying an alternate HA address.  And I am not aware of any workaround for that limitation.
    In summary, I think the short answer is that we don't have HADR support using the DR Agent for HA environments without a dynamic IP address. It might be possible to manually reconfigure some of it, but without support in the DR Agent itself, you would be missing the failover and monitoring support the DR Agent provides.
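    For what it's worth, the fallback behaviour that an interfaces-file HA entry gives a client can be sketched as follows (a minimal illustration, not RepServer code; the host names and the pluggable connect function are hypothetical):

```python
import socket

def connect_ha(addresses, connect=socket.create_connection, timeout=5.0):
    """Try each (host, port) in order and return the first successful
    connection - roughly what an Open Server HA interfaces entry with
    two addresses under one server name lets a client do."""
    last_err = None
    for host, port in addresses:
        try:
            return connect((host, port), timeout)
        except OSError as err:
            last_err = err  # remember the failure, try the next address
    raise ConnectionError(f"no address reachable: {last_err}")
```

    The DR Agent's host:port configurations bypass exactly this kind of multi-address lookup, which is the limitation described above.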
    Sorry that I do not have a more encouraging answer.
    Regards,
    Stephen

  • How to enable multi-statement replication like select into in SAP Replication server

    Hi All,
    Currently I am working on replication of non-logged operations using SAP Replication Server. My source and target databases are both Sybase ASE 15.7. I created a normal stored procedure containing a non-logged operation:
    create procedure proc1
    as
    select * into tab2 from tab1
    I have created database replication definition using following command :
    create database replication definition def1
    with primary at dewdfgwp01694.src
    replicate DDL
    replicate functions
    replicate transactions
    replicate tables
    and created subscription as well
    After marking the procedure using sp_setrepproc proc1,'function', I started the repagent (sp_start_rep_agent src).
    But after marking the procedure, I am unable to execute it and get the error:
    SELECT INTO command not allowed within multi statement transactions
    Sybase error code=226
    Can anyone please guide me in this situation?
    FYI: I have executed all three commands in the primary database:
    sp_dboption src,'select into/bulkcopy/pllsort',true;
    sp_dboption src,'ddl in tran',true;
    sp_dboption src,'full logging for all',true

    I am getting the error in the primary database (Sybase ASE console) as well as in the repserver.
    The error occurs after marking the procedure in the primary database for replication.
    And after getting this error I am unable to replicate any other table or procedure (it seems the DSI thread is going down in the repserver).
    The error in the repserver is given below:
    T. 2014/09/20 16:58:03. (27): Last command(s) to 'server_name.trg':
    T. 2014/09/20 16:58:03. (27): 'begin transaction  [0a] exec proc1  '
    E. 2014/09/20 16:58:03. ERROR #1028 DSI EXEC(103(1) 'server_name.trg) - dsiqmint.c(4710)
    Message from server: Message: 226, State 1, Severity 16 -- 'SELECT INTO command not allowed within multi-statement transaction.
    H. 2014/09/20 16:58:03. THREAD FATAL ERROR #5049 DSI EXEC(103(1) server_name.trg) - dsiqmint.c(4723)
    The DSI thread for database 'server_name.trg' is being shutdown. DSI received data server error #226 which is mapped to STOP_REPLICATION. See logged data server errors for more information. The data server error was caused by output command #0 mapped from input command #0 of the failed transaction.
    I. 2014/09/20 16:58:03. The DSI thread for database 'server_name.trg' is shutdown.
    I. 2014/09/20 18:07:48. Replication Agent for server_name.src connected in passthru mode.

  • Creating atomic subscription in Sybase Replication Server

    I am trying to create an atomic subscription in Sybase RepServer where my primary database is Oracle and the target database is Sybase ASE. With the no-materialization method the subscription is created, but the atomic subscription is not.
    Do I need any pre-requisites for that?

    Hi Kenny,
    please see here:
    https://training.sap.com/de/de/search?query=Sybase+replication+server
    You can change the country in the drop-down menu at the top.
    Regards,
    Arnold

  • Afaria Database on SQL Anywhere 12 and Replication

    Hi,
    I have Afaria Farm and I Use SQL Anywhere 12 as database server.
    I want to replicate this database to another SQL Anywhere 12 Server.
    Which is a good way to do this replication?
    And is there documentation that describes this step by step?
    I am not a DB admin.

    Hello Tevfik,
    I think we need to understand your requirements a little more: why are you interested in SQL Anywhere replication and what are your plans for it?
    If you're looking for an always-available SQL Anywhere database for Afaria, there is high-availability (database mirroring). This will allow you to maintain two database server partners (with a third arbiter), where the primary has read/write capabilities and the mirror partner is read-only. There is a tutorial here in the documentation.
    If you're looking for a reporting server to do read-only queries for reporting, there is read-only scale out (copy nodes). The tutorial can be found here.
    Are you trying to keep a backup copy of your database via replication? If so, you may want to consider live backups (see the wiki), or just regular backups (also see the wiki). Regular backups should still be used in conjunction with any of the high-availability scenarios described above.
    If you're really looking for data movement, particularly to multiple database nodes, there is synchronization to other enterprise databases (including SQL Anywhere, but this also includes HANA, ASE, IQ, etc.) with MobiLink and replication to other SQL Anywhere databases, via SQL Remote. MobiLink uses a session-based HTTP or TCP/IP connection as a transport, while SQL Remote can use offline FILE, FTP, SMTP, and online HTTP as a transport.
    You may read elsewhere that historically SQL Anywhere can also be used with SQL Remote for Adaptive Server Enterprise (ASE) or Replication Server for replication, but both of these methods have been deprecated and removed in current versions and are not supported by development.
    Regards,
    Jeff Albion
    SAP Active Global Support

  • ASE 15.7 on Windows Server 2012

    Hi,
    did anyone manage to get ASE 15.7 SP 131 running with more than 3 GB default data cache on a Windows host? I'm not talking
    about how to configure that memory or use the cfg file to do it offline. What I mean is this message in the log file:
    "Configuration of the cache (default data cache) failed since the defined cache configuration consumes more memory than is available for buffer caches."
    I have 16 GB RAM, max memory = 9 GB, and sp_configure memory shows 6738682 KB available. As soon as I try to increase "default data cache" to more than 3 GB (2.3 GB to be exact), ASE won't come up again. Please try this first in your environment before coming back with questions, as it sounds weird.
    Regards,
    Mat

    Yes - my default data cache is 6GB ... but not on Win2012, which is why I think it is OS related. I am not as familiar with Windows as with Unix/Linux - but in Unix/Linux there are kernel configs that control the size of a shared memory segment as well as the number of shared memory segments. When ASE starts, it attempts to grab everything in ONE shared memory segment of X size (whatever total memory is set to). So, normally, when ASE is running on Unix, you do an ipcs -a and you will see the ASE user (sybase, typically) with 1 shared memory segment.
    If you sp_configure 'total memory' higher while ASE is running, it then grabs another segment - so it would have two segments. However, when you shut down and restart, it will attempt to grab a single segment at the now higher size. If you were on Unix, that would explain why you could increase it slightly while running, but it wouldn't restart at the higher setting.
    However, on Windows, ASE will grab more than one segment....if you look in your errorlog at boot time, you should see lines similar to:
    10:49:31.10 kernel  Allocating a shared memory segment of size 1075904512 bytes.                     
    10:49:31.36 kernel  Allocating a shared memory segment of size 191758336 bytes.                      
    10:49:31.37 kernel  Allocating a shared memory segment of size 86245376 bytes.                       
    10:49:31.37 kernel  Allocating a shared memory segment of size 78446592 bytes.                       
    10:49:31.37 kernel  Allocating a shared memory segment of size 35586048 bytes.                       
    10:49:31.37 kernel  Allocating a shared memory segment of size 119603200 bytes.                      
    10:49:31.38 kernel  Allocating a shared memory segment of size 4295032832 bytes.                     
    10:49:31.41 kernel  Allocating a shared memory segment of size 7002324992 bytes.                     
    10:49:31.45 kernel  Kernel memory at 0x0000000020000000, 1075904512 bytes                            
    10:49:31.45 kernel  Kernel memory at 0x0000000020000000, 3536896 bytes                               
    10:49:31.46 kernel  Server part of first shared memory region at 0x000000002035F800, 1072367616 bytes.
    10:49:31.46 kernel  Failed to log the current message in the Windows NT event log                    
    10:49:31.46 kernel  Server region 0 at 0x0000000060220000, 191758336 bytes                           
    10:49:31.46 kernel  Server region 1 at 0x000000006B9D0000, 86245376 bytes                            
    10:49:31.46 kernel  Server region 2 at 0x0000000070CF0000, 78446592 bytes                            
    10:49:31.46 kernel  Server region 3 at 0x00000000757E0000, 35586048 bytes                            
    10:49:31.47 kernel  Server region 4 at 0x0000000077DD0000, 119603200 bytes                           
    10:49:31.47 kernel  Server region 5 at 0x000000007FFF0000, 4295032832 bytes                          
    10:49:31.47 kernel  Server region 6 at 0x0000000180030000, 7002324992 bytes                          
    10:49:31.47 kernel  Highest valid address is 0x0000000321620000                                      
    10:49:31.47 kernel  Adaptive Server is using the threaded kernel mode.                              
    ...as you can see, I grabbed 8 shared memory segments with sizes from 35MB to ~7GB ... likely due to the fact that other apps were using memory, and since shared memory is normally pinned (non-swapped), Windows had to scrounge multiple non-contiguous blocks of memory.
    Windows did some funky stuff with shared memory and permissions (which is why I suggested running as Administrator), some of which I suspect has to do with the same sort of thing Linux tried to add security around by randomizing virtual address space, etc. I am not sure if that is related to your problem, except perhaps indirectly.
    What it looks like to me is that your OS installation (and I'm not sure who did it or how) is capping your shared memory segments at 4GB, or you are being capped at a set number of shared memory segments. The latter in Unix (for example) is set system-wide, so if you set the OS to allow only 32 shared memory segments, then all processes on the host combined can only have 32 shared memory segments. This can get fun when running SAP installations, as the NW worker processes also use shared memory. Since this post is in the Custom vs. SAP area, I will assume you are not running SAP - but other programs such as MSSQL, Oracle, etc. also need shared memory.
    It would be interesting to see the corresponding errorlog entries from when ASE booted fine and from when it failed; that might point us to whether the issue is the total MB of shared memory or the number of segments.
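    As a quick way to compare a good boot with a failed one, the allocation lines can be tallied (a throwaway sketch; the log format is assumed to match the sample lines quoted above):

```python
import re

def segment_sizes(errorlog_lines):
    """Extract the byte counts from the 'Allocating a shared memory
    segment of size N bytes.' lines that ASE prints at boot."""
    pat = re.compile(r"Allocating a shared memory segment of size (\d+) bytes")
    return [int(m.group(1)) for line in errorlog_lines
            if (m := pat.search(line)) is not None]

# Two of the sample lines from the errorlog above:
log = [
    "10:49:31.10 kernel  Allocating a shared memory segment of size 1075904512 bytes.",
    "10:49:31.41 kernel  Allocating a shared memory segment of size 4295032832 bytes.",
    "10:49:31.45 kernel  Kernel memory at 0x0000000020000000, 1075904512 bytes",
]
sizes = segment_sizes(log)
print(len(sizes), sum(sizes))  # segment count and total shared memory
```

    Comparing the segment count and total bytes between a successful and a failed boot would show whether the cap is on segment size or segment count.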

  • Only Remote-Desktop and Replication with 0.0 Reduction

    Hi guys,
    I have a solution with 16 WAAS 4.4.3 appliances and I get good numbers in all default applications except Remote-Desktop and Replication
    (All Traffic: 53% reduction).
    I can understand not optimizing Remote-Desktop applications, since the customer is using MetaFrame with crypto and compression, and
    also RDP with crypto/compression.
    But in Replication I would expect some gain in Active Directory and MS-FRS replication.
    I got 0.0 - actually, the optimized traffic is bigger than the original traffic.
    Do I have to modify something on the Microsoft servers to be able to get reduction in the Replication application?
    I'm using all the default classifiers in the Replication application:
    Double-Take                            LZ+TFO+DRE       1100, 1105
    EMC-Celerra-Replicator            LZ+TFO+DRE       8888
    MS-AD-Replication1                 LZ+TFO+DRE       UUID:e3514235-4b06-11d1-ab04-00c04fc2dcd2
    ms-content-repl-srv                   TFO                     507, 560
    MS-FRS1                                LZ+TFO+DRE       UUID:f5cc59b4-4264-101a-8c59-08002b2f8426
    netapp-snapmirror                    LZ+TFO+DRE       10565-10569
    pcsync-http                             LZ+TFO+DRE       8444
    pcsync-https                           TFO                      8443
    rrac                                        TFO                      5678
    Rsync                                     LZ+TFO+DRE       873
    Thanks a Lot
    My Best Regards,
    Andre Lomonaco

    As you said, you need to only send the changes over the line.
    Java can not tell you what has changed, so you need to either write some JNI, or do a check on the current image, with the last image. (this would eat quite a bit of CPU time on the server).
    It might be worth looking at the source for VNC, and other similar products.
    The simplest solution would be to split the image into blocks of x by y pixels, then do a pixel-by-pixel check on each block, and only send a block if it has changed.
    Next thing to think about is scrapping RMI, and using raw sockets. RMI has a fair-sized overhead.
    Zipping a JPEG is unlikely to have any effect - a JPEG is already compressed.
    You might want to look at a lossless compression format, like PNG, as the JPEG will have artifacts. That might be acceptable; if so, changing the compression ratio of the JPEG might be an option.
    Finally, the bottleneck might be the call to Robot.createScreenCapture. This is not the fastest method.
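    The block-splitting idea can be sketched language-agnostically (a Python sketch of the approach, not the original Java; frames are plain 2-D lists of pixel values):

```python
def changed_blocks(prev, curr, bw, bh):
    """Split two same-sized frames into bw x bh tiles and return the
    (x, y) origins of tiles whose pixels differ - only those tiles
    need to be sent to the viewer."""
    h, w = len(curr), len(curr[0])
    dirty = []
    for y in range(0, h, bh):
        for x in range(0, w, bw):
            for yy in range(y, min(y + bh, h)):
                if prev[yy][x:x + bw] != curr[yy][x:x + bw]:
                    dirty.append((x, y))
                    break  # tile already known dirty; move to next tile
    return dirty
```

    Tile size is the knob: smaller tiles resend less unchanged data per change, but add per-tile bookkeeping and protocol overhead.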

  • Regarding replication server configuration

    Good Morning...
    In my current project, I have 3 databases: A, B, and C. I need to create materialized views on databases B and C using a dblink (created on database A), and then refresh around 70 tables every hour. In this regard I need information about replication server configuration, DBLINK, and MVIEWs.

    You must have clicked on the wrong link. This is the SQL and PL/SQL forum.

  • Question about the Apache plug-in and WL server

    We have a bunch of Weblogic app-servers, and I want to set up Apache servers to
    front-end them. Here is what I'd like to do:
    (1) Load balancer forwards requests to one of Apache servers
    (2) Apache serves the static content (html,gif,css etc.)
    (3) Apache servers forward the request to one of the alive Weblogic servers, with
    requests for the same Weblogic session should preferably stay with the same Weblogic
    server.
    The catch is that we are not using Weblogic clustering - we have our own application-specific
    light-weight clustering (virtually no replication of dynamic state). All I want
    is that the Apache plug-ins keep forwarding the packets to the same Weblogic server
    for a session (unless the server dies), and that this property holds even if multiple
    client requests (for the same session) are rotated across multiple Apache servers.
    The first is really important, the second just nice-to-have (I can setup load-balancer
    with sticky sessions if needed).
    The Apache plug-in documentation seems to suggest that we must use Weblogic clustering
    for us to be able to specify multiple Weblogic servers in the plug-in config file,
    but is that really required?
    Second, does the session cookie uniquely identify the Weblogic server or does
    the Apache plug-in keep the mapping between the cookie and the server? Also, does
    this answer depend on whether we use Weblogic clustering or not?
    The answer depends on the protocol between the Apache plug-in and the Weblogic
    server. Is it documented? Available under NDA?
    Your help will be really appreciated!!
    thanks
    -amit

    > is that the Apache plug-ins keep forwarding the packets to the same Weblogic
    > server for a session (unless the server dies),
    If the session id is found in the cookie, request or postdata (in that order), the plugin will preserve the sticky session.
    > and that this property hold even if multiple client requests (for the same
    > session) are rotated across multiple Apache servers.
    The rules apply to all apache instances as the single instance.
    BTW, the preferred server from the session has to be in the serverList (you defined it in the httpd.conf). You are not using clusters in the backend, hence the server list will not be updated dynamically. The plugin will not know about changes in the backend unless httpd.conf is modified and Apache restarted.
    > The first is really important, the second just nice-to-have (I can setup
    > load-balancer with sticky sessions if needed).
    > The Apache plug-in documentation seems to suggest that we must use Weblogic
    > clustering for us to be able to specify multiple Weblogic servers in the
    > plug-in config file, but is that really required?
    It's a recommended configuration, but not mandatory.
    > Second, does the session cookie uniquely identify the Weblogic server or does
    > the Apache plug-in keep the mapping between the cookie and the server?
    The server info is in the cookie for the same client, although the plugin also maintains a list of servers.
    > Also, does this answer depend on whether we use Weblogic clustering or not?
    No.
    > The answer depends on the protocol between the Apache plug-in and the
    > Weblogic server. Is it documented? Available under NDA?
    > Your help will be really appreciated!!
    We only support http and https (60sp1 or later).
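    The routing behaviour described above can be sketched as follows (a hedged illustration: the 'sessionid!primaryserver' cookie layout and the round-robin fallback are assumptions for the sketch, not the plug-in's exact wire format):

```python
from itertools import cycle

def make_router(server_list):
    """Return a route(cookie) function that honors the primary server
    encoded after '!' in the session cookie when it is still in the
    configured server list, and otherwise falls back to round-robin."""
    fallback = cycle(server_list)  # static list: no dynamic updates
    def route(cookie):
        if cookie and "!" in cookie:
            primary = cookie.split("!", 1)[1]
            if primary in server_list:
                return primary  # sticky: session stays on its server
        return next(fallback)
    return route
```

    Because the back end is not a cluster, the list passed to make_router stays static, matching the httpd.conf behaviour described above.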

  • Best Practice Adding New Target to Namespace and Replication

    Hi,
    what's the best way to add a new target to a namespace and replication? The goal is to replace an old file server in the end.
    I did the following:
    - copied the share with robocopy, incl. timestamps of files and folders
    - created the share
    - added the new share as a new target as well as a meshed member of the replication connection
    - disabled the new member in the namespace, so no one can access it until DFSR is fully done and initialized
    After the new DFSR connection was replicated through AD to all 4 members (3 at a different site, 1 at the same site), the following happened:
    DFSR began, and almost every file was flagged as conflicted and copied over to the Conflict folder. Almost all timestamps of the folders were changed to the current date, but the timestamps of the files were not.
    Thousands of event log entries: 4412
    The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.
    Any idea why? Later on I disabled the connections to the remote file servers, but that did not stop it.
    My idea was to pre-seed the files with robocopy. So what would be the best way to prevent this for the next share? Is it better to just add the target to a bi-directional connection to the local file server, without adding it to DFS-N and without copying the files beforehand? Or is it better to let DFSR do the whole initial sync, including files?
    In the end I had no loss of data, but checking almost every file for conflicts took ages to finish.
    Thanks a lot,
    Marco

    Hi,
    The steps you performed are correct - compared with waiting for DFSR initial replication, manual pre-staging is recommended.
    When doing the robocopy step, were all attributes copied, such as NTFS permissions?
    After robocopy, you can add that folder as a folder target of the DFS replication group - and add it to the DFS namespace after replication has finished.
    And if it is Windows 2012 R2, you can pre-stage the DFSR database for a better result:
    https://social.technet.microsoft.com/Forums/windows/en-US/a06c9d25-ed04-44e9-a1f7-e1506e645d53/forum-faq-how-to-prestaging-dfsr-database-on-windows-server-2012-r2?forum=winserverfiles
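    One way to sanity-check a robocopy pre-seed before the folder joins the replication group is to hash both trees (a quick sketch; DFSR's own pre-seeding validation also covers ACLs and alternate data streams, which content hashes do not):

```python
import hashlib
import os

def tree_hashes(root):
    """Map relative file path -> SHA-256 of content for every file
    under root, so two copies can be compared for content drift."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as fh:
                hashes[rel] = hashlib.sha256(fh.read()).hexdigest()
    return hashes

# tree_hashes(src_share) == tree_hashes(new_target) means the content
# pre-seeded cleanly; any mismatched path flags a file likely to show
# up as a conflict (event 4412) once replication starts.
```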

  • Sql server replication server SLOW

    My team has been troubleshooting this for months now. SQL Server 2008 R2.
    Our main DB and server is "Server A", a physical machine with a single virtual instance on it. It does not share with anyone, has its own physical hard drives and 20 GB memory, and hosts our 1-terabyte DB. It is very fast, which
    is perfect. It performs nightly loads of data which users access throughout the day. It is not a transactional DB - more of a data-warehouse DB.
    Our second SQL server, "Server B", has the exact same specs - actually, we doubled the memory to 40 GB since it was slow, and it's still 4 times slower than Server A! The only difference is that its hard drive is on a SAN, which I am told is
    state of the art, fast, etc.
    We set up replication on Server A to replicate to Server B. The distribution DB is on Server B as well.
    We did a test where we shut down the distribution DB, since it seems to take up about 30% of the CPU usage, and the queries we ran were still 4 times slower!
    Our next step is to try putting the distribution DB (which is on Server B) on its own volume,
    and after that to try putting Server B on its own server (not sharing with any other companies) with its own physical hard drives, like Server A.
    Can anyone offer some insight, help, thoughts, etc.? Thank you.

    Hi Hilary,
    This is most likely due to IO. SANs are NOT all the same. Typically you should not use iSCSI for SQL Server deployments, as the number of NICs required, the number of hops, and the management, network, and server configuration can be difficult
    to align and defend in support of SQL Server deployments. Note: this is true of ALL database servers.
    However, the first thing you need to do to identify any disk-related IO bottlenecks is to collect performance metrics on the 2 environments. Start with Server A, then implement the same performance counters on Server B, then perform
    the 4 queries, capture the information from both servers, and compare.
    Data to Collect
    There are numerous counters available in perfmon. We recommend that you collect data from the following counters when you are analyzing characteristics of a SQL Server workload. These disk-specific counters are listed under the LogicalDisk
    section of the available counters. (The PhysicalDisk counter object may show the same values if there is a 1:1 relationship between a logical volume and disk.) The LogicalDisk object reports volume letters or mount point names (rather than disk numbers) and
    it can provide a finer level of granularity if there are multiple volumes on a single Windows disk.
    Counter Description LogicalDisk Perfmon Object
    Disk Reads/sec - Disk Writes/sec = Measures the number of IOPs. You should discuss the expected IOPs per disk for different types and rotational speeds with your storage hardware vendor.
    Typical sizing at the per disk level are listed below:
    10K RPM disk – 100 to 120 IOPs
    15K RPM disk – 150 to 180 IOPs
    Enterprise-class solid state devices (SSDs) 5,000+ IOPs
    Average Disk sec/Read - Average Disk sec/Write = Measures disk latency. Numbers vary, but here are the optimal values for averages over time:
    1 - 5 milliseconds (ms) for Log (ideally 1 ms or less on average)
    Note: For modern storage arrays, log writes should ideally be less than or equal to 1-2 ms on average if writes are occurring to a cache that guarantees data integrity (that is, battery backed up and mirrored). Storage-based replication and disabled write
    caching are two common reasons for log latencies in the range of 5 or more milliseconds.
    5 - 20 ms for Database Files (OLTP) (Ideally 10 ms or less on average)
    Less than or equal to 25-30 ms for Data (decision support or data warehouse)
    Note: The value for decision support or data warehouse workloads is affected by the size of the I/O being issued. Larger I/O sizes naturally incur more latency. When interpreting this counter, consider whether the aggregate throughput potential of the
    configuration is being realized. SQL Server scan activity (read-ahead operations) issues transfer sizes up to 512K, and it may push a large amount of outstanding requests to the storage subsystem. If the realized throughput is reasonable for the particular
    configuration, higher latencies may be acceptable for heavy workload periods.
    If SSD is used, the latency of the transfers should be much lower than what is noted here. It is not uncommon for latencies to be less than 5 ms for any data access. This is especially true of read operations.
    Average Disk Bytes/Read - Average Disk Bytes/Write = Measures the size of I/Os being issued. Larger I/Os tend to have higher latency (for example, BACKUP/RESTORE operations issue 1 MB transfers by default).
    Current Disk Queue Length = Displays the number of outstanding I/Os waiting to be read or written from the disk. Deep queue depths can indicate a problem if the latencies are also high. However, if the queue is deep, but latencies are low
    (that is, if the queue is emptied and then refilled very quickly), deep queue depths may just indicate an active and efficient system. A high queue length does not necessarily imply a performance problem.
    Note: This value can be hard to interpret due to virtualization of storage in modern storage environments, which abstract away the physical hardware characteristics; this counter is therefore limited in its usefulness.
    Disk Read Bytes/sec and Disk Write Bytes/sec = Measure total disk throughput. Ideally, large block scans should be able to heavily utilize the connection bandwidth. These counters represent the aggregate throughput at any given point in time.
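    The size and throughput counters relate directly: average I/O size is bytes per second divided by transfers per second. A minimal sketch, with hypothetical sample values (real numbers would come from Performance Monitor):

    ```python
    # Hedged sketch: derive average I/O size from throughput and transfer
    # rate counters. The sample figures are hypothetical.

    def avg_io_size(bytes_per_sec: float, transfers_per_sec: float) -> float:
        """Average bytes per I/O over the sample interval."""
        return bytes_per_sec / transfers_per_sec if transfers_per_sec else 0.0

    # e.g. a backup stream moving 100 MB/s in 100 transfers/s:
    size = avg_io_size(104_857_600, 100)
    print(size)  # 1048576.0 bytes = 1 MB per transfer
    ```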
    SQL Server Buffer Manager Perfmon Object = The Buffer Manager counters are measured at the SQL Server instance level and are useful in characterizing a running SQL Server system, in particular for determining the ratio of scan-type activity to seek
    activity.
    Checkpoint pages/sec = Measures the number of 8K database pages per second being written to database files during a checkpoint operation.
    Page Reads/sec = Measures the number of physical page reads being issued per second.
    Readahead pages/sec = Measures the number of physical page reads performed through the SQL Server read-ahead mechanism. Read-ahead operations are used by SQL Server for scan activity (which is common for data
    warehouse and decision support workloads). These reads can vary in size in any multiple of 8 KB, from 8 KB through 512 KB. This counter is a subset of Page Reads/sec and can be useful in determining how much I/O is generated by scans, as opposed to seeks, in mixed
    workload environments.
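    Because Readahead pages/sec is a subset of Page Reads/sec, the two counters can be combined into a rough scan-versus-seek ratio. A minimal sketch, assuming hypothetical sample values:

    ```python
    # Hedged sketch: estimate what fraction of physical read I/O comes from
    # scans (read-ahead) versus seeks, using the Buffer Manager counters
    # described above. Sample values are hypothetical.

    def readahead_fraction(page_reads_per_sec: float,
                           readahead_pages_per_sec: float) -> float:
        """Fraction of physical page reads issued via read-ahead.
        Readahead pages/sec is a subset of Page Reads/sec, per the text."""
        if page_reads_per_sec == 0:
            return 0.0
        return readahead_pages_per_sec / page_reads_per_sec

    print(readahead_fraction(2000, 1500))  # 0.75 -> scan-heavy workload
    ```

    A high fraction suggests a scan-heavy (data warehouse style) workload; a low fraction suggests mostly seek activity.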
    Cheers,
    -Ivan

  • Removing Server 2000 DC and adding Server 2008 DC.

    From: Server 2000 Sp4 (not leaving in place, no plans to demote)
    To: Server 2008 Sp2 (will be a single DC and hold Global Catalog)
    Single forest domain. Only one DC.
    Problem: The old server 2000 is still a primary DC and the new server 2008 is not taking over.
    Completed the following tasks:
    NIC binding (connected NIC at top of list on new server)
    New server is also DNS server. This role is working and it points to itself 127.0.0.1 and clients have been moved to use the new server for DNS, they are working.
    New server has some shared folders. Clients are connected and this is working until we remove the old server, then they cannot authenticate to the mapped drive.
    Both servers show the role of Domain Controller
    Adprep /forestprep and /domainprep and /domainprep /gpprep were run on the server 2000 (as it was the existing DC) with a successful message.  (not sure if enough time was allowed for replication)
    All five Flexible Single Master Operations (FSMO) roles were transferred using GUI.
    Schema Master, Domain naming master, Infrastructure master, RID master, PDC Emulator.
    All of this has been verified. All roles were transferred to the new server 2008, and the new server is also the global catalog.
    Then, to verify the new server was handling the roles, we unplugged the Ethernet cable from the old server 2000, went to client stations and restarted them; they would not find the new DC and connect to it.
    On the new server when we opened active directory users and computers the domain did not appear.
    Verified in DNS manager the A record and reverse pointer were correct.
    For some reason the new server is still looking to the old server. Even though all roles are moved over and DNS appears to be set up correctly, it won’t exist independently.
    What’s missing?
    If something was missed or performed wrong, do we have to remove the roles and start from scratch? Or can we re-run adprep and walk the steps again leaving all as-is?

    Hello,
    as first step please post an unedited ipconfig /all from the old and new DC/DNS server and one problem client.
    It seems that the installation of your new DC went well, as described in
    http://msmvps.com/blogs/mweber/archive/2010/02/06/upgrading-an-active-directory-domain-from-windows-server-2000-to-windows-server-2008-or-windows-server-2008-r2.aspx Just check again yourself.
    A DC should NEVER be shut down or simply disconnected from a domain; it MUST be demoted correctly. Without those steps you will run into replication errors in the event viewer, and you will never be able to install an additional DC using the same name.
    See the end of my article about removal steps.
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://msmvps.com/blogs/mweber/
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • EUL on Replication Server May Not Match Production EUL

    Hello: First, let me thank you for the rapid response on the "Replication Impact on the EUL" issue. This is a follow-on to that initial question. I intend to perform Discoverer administrative tasks on the replication server EUL. The tasks will involve creating new business areas and customizing them, including sharing some reference folders in existing schemas with the new business areas.
    1. Should the replication process be "quieted" i.e. halted during the administrative processes?
    2. Are the changes to the replication database EUL required to be made to the production database EUL as well?
    3. Will the replication process adversely affect a changed EUL on the replication server database?

    You are just confusing those who did not see that thread; they will not know how to follow up on it.
    Why not continue the discussion on that thread, or include its URL here:
    Replication Impact on EUL
