SSAS database synchronization is slow

Hi,
I'm testing the SSAS database synchronization function, and I found it slow!
My server is able to write at 400 MB/sec and more, but SSAS is only able to write at 60 MB/sec.
I'm copying the OLAP database between 2 instances on the same server, so the network is not involved.
Is there any option available around the synchronization process, besides the compression one?
At the OLAP service itself, in the config file, is there any setting for a disk cache for this sync process, or something like that?
thanks.
Jerome.

Hi Jerome,
According to your description, you copy the OLAP database from one instance to another instance on the same server, and you want to know why SSAS only writes at 60 MB/sec on a server that is able to write at 400 MB/sec and more, right?
Based on my research, the synchronization's file copy phase is single-threaded. Because of that, synchronization might not be the most efficient way of transferring files, which is why SSAS only writes at about 60 MB/sec even though the server can write at 400 MB/sec and more. Here are some blogs about synchronization internals and synchronization performance troubleshooting, please see:
http://www.informit.com/articles/article.aspx?p=1842938&seqNum=3
http://www.informit.com/articles/article.aspx?p=1842938&seqNum=4
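For completeness, a rough, untested sketch (instance names and the database ID are placeholders) of issuing the Synchronize command yourself through AMO; the ApplyCompression element below is the compression option you mentioned, and as far as I know there is no setting that changes the single-threaded copy behaviour:

using System;
using AMO = Microsoft.AnalysisServices;

class SyncOlapDatabase
{
    static void Main()
    {
        // Connect to the destination instance; the source instance is named inside the command.
        AMO.Server target = new AMO.Server();
        target.Connect(@"Data Source=localhost\TARGET;Integrated Security=SSPI");

        // XMLA Synchronize command. ApplyCompression and SynchronizeSecurity are the
        // main switches the command exposes.
        string synchronize = @"
            <Synchronize xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine"">
              <Source>
                <ConnectionString>Provider=MSOLAP;Data Source=localhost\SOURCE;Integrated Security=SSPI</ConnectionString>
                <Object>
                  <DatabaseID>AdventureWorksDW</DatabaseID>
                </Object>
              </Source>
              <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
              <ApplyCompression>true</ApplyCompression>
            </Synchronize>";

        target.Execute(synchronize);
        target.Disconnect();
    }
}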
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • How to schedule SQL Agent job for XMLA script file of SSAS database instead of taking a backup.

    I want to script an XMLA file instead of taking a backup of the database, and I also want to schedule a job for the process in SQL Agent.
    Are there any pros and cons when I script the XMLA file of the SSAS DB instead of taking a backup?
    Amir

    Hi Amir,
    You can take the Create SSAS database XMLA script and run a job to create the cube. The script contains the metadata definition of the database; it does not contain the actual data.
    Pros: Since you are only creating an empty cube, the script will run faster.
    Cons: You still need to process the created cube to use it for reports.
    The fastest option is to take a backup of the cube and restore it wherever necessary.
    You can also use TFS source control to deploy the cube and process it later.
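    As a rough, untested sketch (the server name and output path are hypothetical, and the exact Scripter overloads vary a little between AMO versions), the metadata-only CREATE script can also be generated programmatically with AMO instead of from SSMS:
    using System.Xml;
    using AMO = Microsoft.AnalysisServices;

    class ScriptSsasDatabase
    {
        static void Main()
        {
            // Connect to the Analysis Services instance and pick the database to script.
            AMO.Server server = new AMO.Server();
            server.Connect(@"Data Source=localhost;Integrated Security=SSPI");
            AMO.Database db = server.Databases.FindByName("AdventureWorksDW");

            // Write the CREATE definition (metadata only, no data) to an .xmla file.
            using (XmlWriter writer = XmlWriter.Create(@"C:\OLAPScripts\AdventureWorksDW.xmla",
                                                       new XmlWriterSettings { Indent = true }))
            {
                AMO.Scripter.WriteCreate(writer, server, new AMO.MajorObject[] { db }, true, true);
            }

            server.Disconnect();
        }
    }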
    Regards,
    Venkata Koppula

  • How to get Cube and Dimension ID from SSAS Database programatically

    Hi All,
    I am processing one SSAS cube from an SSIS package and processing the cubes dynamically. For this I am putting the Cube ID, Cube Name, Dimension ID, and Dimension Name in a table and generating the XML programmatically.
    I can right-click the properties of the dimension and cube to get the ID information, but is there any way to get the ID information programmatically, so that I can get the information on the fly and create the XML without storing it in a table?
    We are using 2008 R2
    Thanks in advance
    Roshan

    Hi,
    Here is the C# code you want. Try it and see.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Xml;
    using AMO = Microsoft.AnalysisServices;

    namespace ConsoleApplication4
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Connect to the Analysis Services instance.
                AMO.Server oServer = new AMO.Server();
                oServer.Connect(@"Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=AdventureWorksDW;Data Source=DEVWKS6\MSSQLSERVERMDX");

                // Walk every database, cube, and cube dimension, printing the name and ID of each.
                foreach (AMO.Database db in oServer.Databases)
                {
                    foreach (AMO.Cube cube in db.Cubes)
                    {
                        Console.WriteLine(System.String.Format("Cube Name : {0} Cube ID : {1}", cube.Name, cube.ID));
                        foreach (AMO.CubeDimension dim in cube.Dimensions)
                        {
                            Console.WriteLine(System.String.Format("Dimension Name : {0} Dimension ID : {1}", dim.Name, dim.ID));
                        }
                        System.Console.WriteLine("");
                    }
                    System.Console.WriteLine("");
                }

                oServer.Disconnect(true);
                oServer.Dispose();
                System.Console.ReadLine();
            }
        }
    }
    If you know your target SSAS database name, then you could use LINQ to narrow your search. Take a look at the following code.
    AMO.Database db = oServer.Databases.Cast<AMO.Database>().Where<AMO.Database>(SSASdb => SSASdb.Name == "AdventureWorksDW").FirstOrDefault();
    foreach (AMO.Cube cube in db.Cubes)
    {
        Console.WriteLine(System.String.Format("Cube Name : {0} Cube ID : {1}", cube.Name, cube.ID));
        foreach (AMO.CubeDimension dim in cube.Dimensions)
        {
            Console.WriteLine(System.String.Format("Dimension Name : {0} Dimension ID : {1}", dim.Name, dim.ID));
        }
        System.Console.WriteLine("");
    }
    Best Regards...
    Chandima Lakmal Fonseka

  • When i start my hot backup my database getting very slow

    Hi,
    I am using following commands for enabling hot backup
    SQL>ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL>ARCHIVE LOG LIST;
    SQL> ALTER DATABASE BEGIN BACKUP;
    Database altered.
    SQL>SELECT FILE#,STATUS FROM V$BACKUP;
    FILE# STATUS
    1 ACTIVE
    2 ACTIVE
    3 ACTIVE
    4 ACTIVE
    and using the cp -rp command to copy the files (backup copying speed is good), but database performance is very slow.
    How can I improve performance?
    Regards
    Vignesh C

    Uwe Hesse wrote:
    It is very likely that you experience slow performance with ALTER DATABASE BEGIN BACKUP, because until you do ALTER DATABASE END BACKUP, every modified block is additionally written into the online logfiles. Doesn't that happen only the first time the block is modified?
    The command was introduced for split mirror backups, when this period is very short. Otherwise ALTER TABLESPACE ... BEGIN/END BACKUP for every tablespace, one at a time, reduces the amount of additional redo during a non-RMAN hot backup. There appear to be only 4 files; we don't know how big or sparse they are.
    RMAN doesn't need that at all - much less redo (and archive) generation then.
    Furthermore, you can use BACKUP AS COMPRESSED BACKUPSET DATABASE to decrease the size of the backup even more - if space is an issue.
    In short: Use RMAN :-)
    Agree with that! Unless the copy is actually going to an NFS mount or something, where I would be concerned whether it is the type of NFS that Oracle likes. I'd also advise a current patch set, as the OP didn't tell us the exact version, and I have this nagging unfocused memory of some compression problems of the "oh, I can't recover" variety.
    I'd like to see some evidence on I/O and cpu usage before giving advice. When I used to copy files like this, it would choke out everyone else. RMAN was a savior, but had to wait for local SAN upgrade.

  • Local Cube Filtering / Add SSAS database as a data source for another SSAS project

    I need to generate a local cube from a fully processed SSAS database deployed on an SSAS server. However, I need to restrict the data downloaded into the local cube while doing this.
    I have restricted access to the dimension data by mapping data to UserName/CustomData and using dynamic role-based security. When I generate a local cube by executing an AdomdCommand using the XMLA structure of this online cube, the .cub file thus created also contains data that shouldn't be accessible to the user creating the cube.
    I am aware of the CREATE GLOBAL CUBE statement, but since it has some limitations, such as not allowing DISTINCT COUNT measures, I cannot use it because my online cube contains some important measures that use the DISTINCT COUNT aggregation. Plus, when I include a dimension with a huge amount of data in it, the execution of the command times out.
    So I am trying an approach wherein I create a different SSAS project, use the existing SSAS database as a data source, and then carry out the filtering to output the .cub file using the XMLA structure of this project database. The problem is that somehow SQL Server Data Tools 2012 does not recognize another SSAS database as a valid data source.
    Could someone please let me know why we can't do this, or suggest an alternative approach to implement this?
    Regards, Akshay

    Hi David,
    According to your description, you are creating a SQL Server Analysis Services project, and what you want is to use SAP HANA as the data source, right?
    SSAS supports many types of data sources. However, as you can see at the link below, the SAP HANA data source is not listed there, so this type of data source is not supported in the current version of SSAS. Microsoft will update that document when it is supported.
    http://msdn.microsoft.com/en-IN/library/ms175608.aspx
    Thank you for your understanding.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Database synchronization with MySQL

    Hi,
    I'm connecting my ADF BC to a MySQL database v5.6; however, I can't seem to find the option that allows me to synchronize my entities with the changes made to the database tables. I am using JDeveloper 12c.
    Is the database synchronization option only available for an Oracle database connection?
    Imad.

    Imad,
    database synchronization is available.
    Zeeshan Baig's Blog: Connecting MySQL Database with Jdeveloper 11g
    Is your project migrated from an older JDev version?

  • How to Programmatically generate .asdatabase from SSAS database from SSAS server?

    Is there a way to programmatically generate the .asdatabase file and other SSAS config files from an SSAS database, not by devenv with the .dwproj?

    Thank you very much for your answer.
    I know how to generate the .asdatabase (I can do it with devenv.exe project.dwproj, or by building the SSAS project manually). However, I want to programmatically generate the .asdatabase from C#.
    I programmatically made some modifications to the SSAS database (dropped some dimensions and measures), and then I want to generate the .asdatabase so that I can deploy it to the server. I know I can do it by XMLA as well, but I really prefer deploying the cube by .asdatabase, not XMLA (see the sketch after this post for the devenv route).
    Any thoughts?
    Thanks,
    Jackie
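    A minimal, untested sketch of driving the devenv build mentioned above from C# (the Visual Studio path, project path, and configuration name are hypothetical and must be adjusted):
    using System.Diagnostics;

    class BuildAsDatabase
    {
        static void Main()
        {
            // Building the .dwproj is what emits the .asdatabase file into the project's bin folder.
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe",
                Arguments = "\"C:\\Projects\\MyOlapProject\\MyOlapProject.dwproj\" /build Development",
                UseShellExecute = false
            };
            using (Process build = Process.Start(psi))
            {
                build.WaitForExit();   // on success, look for bin\MyOlapProject.asdatabase
            }
        }
    }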

  • Database performing Very slow  - Lots of wait events

    My database is Oracle 10g on Sun 5.10.
    The users are complaining that the database is very slow.
    I analyzed the indexes and later rebuilt them; it gave hardly a 5% performance improvement.
    http://i812.photobucket.com/albums/zz43/sadeel00/untitled1.jpg
    http://i812.photobucket.com/albums/zz43/sadeel00/untitled2.jpg
    ADDM has no recommendations.

    Duplicate post - Database performing Very slow  - Lots of wait events
    Srini

  • Advanced Queueing and databases synchronization

    What kind of (application) problems could be solved by Advanced Queueing ?
    What about database synchronization (two Oracle Database Standard Edition One instances)?
    THANK YOU!

    AQ is intended for messaging. Messaging between applications, messaging between back-end and front-end.
    Synchronization would be a poor use of messaging. If you want synchronization, then name your version number (all four places) and replication technology, and we can point you in the right direction.
    For most situations I use:
    DBMS_RECTIFIER_DIFF or DBMS_COMPARISON
    http://www.morganslibrary.org/library.html

  • Database is very slow

    Hi Guys,
    I am getting into trouble: my database is very slow. I couldn't log in when I launched Enterprise Manager. I used SQL*Plus to log in as SYS, and that was OK, but it was very slow when I selected from v$lock. It is fast when I select from v$session. How can I fix this? How can I know what the database is doing? Thanks.

    It has been fixed by our consultant. The reason was a memory shortage; the consulting company didn't tell us much more. Thank you, guys. Now I checked my system: in the v$session_wait view there are 3 buffer busy waits events, and in the P2TEXT column there is the value "Block#". I checked the V$WAITSTAT view; the result looks like
    data block 4234 1259059
    undo header 68169 6811160
    Is there a problem?
    In v$system_event it looks like
    EVENT TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO
    buffer busy waits 72994 72950 7953083 109 7,9531E+10
    When I select from v$lock it takes a long time; finally I cancelled it. I couldn't use Enterprise Manager to expand the instance tree to see the sessions; it is very slow and seems to hang. What's the problem? How do I fix it?

  • Database is running slow

    How do I check why the database is running slow? I got a complaint from a user saying that the database is slow. How do I check which process is taking too long?
    Select * from v$version;
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE     11.2.0.2.0     Production"
    TNS for HPUX: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    Hi,
    Do you have Enterprise Manager configured? Using EM is the best and the easiest way to see what's wrong with your database.
    Otherwise, if performance has only been slow for the last few minutes, you can use Active Session History (ASH) reports to check the top queries. You will also see the top wait events there.
    If the performance issue has been consistent over the last few hours, you had better check the AWR and ADDM reports. These will give you better insight and a way to find out whether there are any performance issues. From the top wait events, you can see whether the bottleneck is inside the database or external to it.
    If there's a query that's suddenly performing badly, you can first try to gather stats on the tables involved.
    Regards,
    Rizwan Wangde
    Sr. Oracle DBA
    http://rizwan-dba.blogspot.com

  • A user connect to the database is very slow.

    Hi, all!
    I've met a problem in the database. A user (schema) connecting to the database is very slow, and the user's queries against the data are slow too.

    And some errors in the listener.log :
    23-Aug-2012 06:03:54 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ora36)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.xxx)(PORT=24923)) * establish * ora36 * 12514
    TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
    23-Aug-2012 06:03:54 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ora36)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.xxx)(PORT=23409)) * establish * ora36 * 12514
    TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
    23-Aug-2012 06:03:55 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ora36)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.xxx)(PORT=9103)) * establish * ora36 * 12514
    TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
    How can I resolve this, thanks!

  • ESSO-database synchronization.

    Hi all,
    I'm newbie to Oracle Enterprise Single Sign On.
    I need to do database synchronization with ESSO.
    I'm using Oracle Database 11.1.0.
    Can anyone please share a link or document describing the steps to be followed for database synchronization with ESSO?
    Thanks in advance.
    Regards,
    Swathi.

    Hi Rivey,
    Thanks a lot for your reply.
    I'm able to synchronize the Oracle database with ESSO, i.e., I'm able to log in to the Oracle database directly without giving credentials.
    But when I try to extend the schema to the Oracle Database, I'm getting the following database error:
    ORA-12514: TNS: listener does not currently know of service requested in connect descriptor.
    Thanks & Regards,
    Swathi.

  • Scheduling SQL AGENT JOB for SSAS DataBase to take backup files Daily with Different Names

    Hi All,
    I am working with SSMS and I have an Analysis Services database.
    I want to schedule a SQL Server Agent job.
    I want to take a backup of that AS DB daily, and it has to be stored in the same folder/path on my local machine with different names.
    That is, there should be daily backup files in that path, and the job should be scheduled accordingly.
    Can anyone help me with this?
    Thanks,
    Supraja.

    Hi Katherine,
    Thanks a lot for your response. What you have posted is very useful for me, but I was looking for a way to do this with an AS database backup script.
    I found an easy way, and it's working for me. Please check the link below for reference:
    http://dbatasks.blogspot.in/2012/08/taking-backup-of-ssas-database.html
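    For reference, a minimal AMO sketch of the same idea (untested; the instance name, database name, and backup folder are placeholders): the backup file name carries the current date, so each day's run produces a distinct file, which is what the scheduled job needs:
    using System;
    using AMO = Microsoft.AnalysisServices;

    class DailyAsBackup
    {
        static void Main()
        {
            AMO.Server server = new AMO.Server();
            server.Connect(@"Data Source=localhost;Integrated Security=SSPI");

            // Produces e.g. C:\OLAPBackups\MyAsDatabase_20140321.abf
            string file = string.Format(@"C:\OLAPBackups\MyAsDatabase_{0:yyyyMMdd}.abf", DateTime.Now);

            AMO.Database db = server.Databases.FindByName("MyAsDatabase");
            db.Backup(file);   // overloads add allowOverwrite and other options if needed

            server.Disconnect();
        }
    }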
    Thanks,
    Supraja.

  • TP4 ADF BC [BUG] view link are not updated after database synchronization

    I have a table with a recursive relationship.
    By mistake, I created a recursive foreign key on the same attribute (deptid->deptid instead of deptid->parentdeptid).
    I generated the entity, view, and application module from this schema.
    When compiling, errors were produced indicating the problem.
    I corrected the problem in the offline database and generated the changes into the database successfully.
    I asked to synchronize the entity with the database; the changes were identified correctly and a new association was created.
    First remark: the old association was still there, and it was necessary to delete it and rename the newly generated one (to be consistent with the other names). Maybe an option to overwrite the old one would be more pleasant, perhaps asking whether the new one has to replace an existing one and showing a list of existing ones.
    Regenerating the association was correct, and one error was removed when compiling again. The error on the view link was still there.
    I would like to delete the association and recreate it from the entity association, but that was not easily possible because the view link is used in the application module!
    It would be nice to permit regenerating an existing view link from an entity association without deleting it.
    Maybe there is a better way to synchronize? I would be very interested to know the best way to achieve it.

    Was the inability to have the synchronization remove the key specific to the existing association's being self-referential, or if your initial association was from DeptId to some other attribute (which you then corrected) would the synchronize have fixed the problem?
    The simplest way to achieve what you want, given the existing features, would be to delete the view link instance, view link, and association, followed by resynchronizing (I believe), then recreating the view link and adding back the view link instance.
