Difference in partition size on different servers

Hi!
I copied my cube from one server to the other with an import of the DB. When I processed the cube on the other server with new data, the size of one partition grew from 1.8 GB to 12 GB. When I checked the original tables, their sizes were almost the same, and so were the row counts. Then I copied the DB and the cube from the first server to the second one again; same result: the partition size grew abnormally.
What can be the problem, and how can I minimize the size of my partition?

Hi Tatyanaa,
Microsoft SQL Server Analysis Services provides several standard storage configurations for storage modes and caching options. Which storage mode are you using on the second server?
MOLAP - The MOLAP storage mode causes the aggregations of the partition and a copy of its source data to be stored in
a multidimensional structure in Analysis Services when the partition is processed.
ROLAP - The ROLAP storage mode causes the aggregations of the partition to be stored in indexed views in the relational
database that was specified in the partition's data source.
So if the partition storage mode is set to ROLAP on the first server and MOLAP on the second, you may encounter this issue, since MOLAP stores a copy of the source data on the Analysis Services server. For more details, please refer to the links
below.
http://www.sql-server-performance.com/2013/ssas-storage-modes/
http://sqlblog.com/blogs/jorg_klein/archive/2008/03/27/ssas-molap-rolap-and-holap-storage-types.aspx
http://msdn.microsoft.com/en-IN/library/ms175646.aspx
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • Global Index on several partitions with each partition on different servers

    Hi,
    I have a table divided into 4 partitions. Each partition is on a different server. Currently the indexes are defined per partition. I would like to create a global index that would span all partitions. How could I create a global index that works across all 4 partitions/servers? (My support team tells me it is not possible across different servers; it only works for several partitions on one physical server. Is that true?)
    Thanks,
    Nicolas

    harry76 wrote:
    Hi,
    Are you sure this is an Oracle database? I think SQL Server has this kind of architecture in some cases.
    Not quite - in SQL Server a single instance can control multiple databases and a partitioned object can have different partitions in different databases; but the SQL Server partitioning strategy is always the equivalent of "local partitioned indexes".
    Maybe this system is using partitioned views. It is possible to create clone table structures with disjoint data sets across multiple databases and then create a UNION ALL view of the tables with a predicate on each query block identifying the data in each database. The optimizer can then do "partition elimination" if your query specifies the column(s) used in the defining predicates.
    Regards
    Jonathan Lewis
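    The UNION ALL partitioned-view approach described above can be sketched as follows. This is a minimal illustration (Python's built-in sqlite3 stands in for SQL Server, and all table and column names are invented), so it shows the view mechanics rather than the cross-database setup or the optimizer's partition elimination:

```python
import sqlite3

# Each "partition" table would live in a separate database in a real
# SQL Server deployment; here they share one in-memory database so the
# sketch is runnable.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sales_2023 (sale_year INTEGER CHECK (sale_year = 2023), amount REAL);
CREATE TABLE sales_2024 (sale_year INTEGER CHECK (sale_year = 2024), amount REAL);

-- The UNION ALL view presents the disjoint tables as one logical table;
-- SQL Server's optimizer can skip any branch whose CHECK constraint
-- contradicts the query predicate ("partition elimination").
CREATE VIEW sales_all AS
    SELECT sale_year, amount FROM sales_2023
    UNION ALL
    SELECT sale_year, amount FROM sales_2024;

INSERT INTO sales_2023 VALUES (2023, 100.0), (2023, 250.0);
INSERT INTO sales_2024 VALUES (2024, 400.0);
""")

# Query through the view, restricting on the partitioning column.
total = cur.execute(
    "SELECT SUM(amount) FROM sales_all WHERE sale_year = 2024"
).fetchone()[0]
print(total)  # 400.0
```

    The defining predicate on each branch (here via CHECK constraints) is what lets the optimizer prove which branches can be skipped for a given query.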

  • Performance between two partitioned tables with different structures

    Hi,
    I would like to know if there is a difference in performance (access, queries, inserts, updates) between two partitioned tables with different structures.
    I explain myself in detail :
    I have a table that stores one value every 10 minutes in a day (so we have 144 values (24*6) in the whole day), with the corresponding id.
    Here is the structure :
    | Table T1 |
    + id PK |
    + date PK |
    + sample1 |
    + sample2 |
    + ... |
    + sample144 |
    The table is partitioned on the date column, with one partition per month. The primary key is on the columns (id, date).
    There is an additional index on the column (id) (is it useful?).
    I would like to know if it would be better to have a table with just (id, date, value), so that one row in the first table becomes 144 rows in the new table. The partitioning would still be on the (id, date) columns, with the associated index.
    What are the gains or loss in performance with this new structure ( access, DMLs , storage ) ?
    I discussed this with the Java developers and they say it would be simpler to manage in their code.
    Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Thanks & Regards
    From France
    Oliver
    Edited by: 998239 on 5 avr. 2013 01:59
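    To make the comparison concrete, the two candidate structures can be sketched as below. This is a hedged illustration (sqlite3 stands in for Oracle, and the table and column names are invented) of how one wide row in the first design maps to 144 tall rows in the second:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Structure 1: one wide row per (id, day) with 144 sample columns.
sample_cols = ", ".join(f"sample{i} REAL" for i in range(1, 145))
cur.execute(
    f"CREATE TABLE t1_wide (id INTEGER, day TEXT, {sample_cols}, "
    "PRIMARY KEY (id, day))"
)

# Structure 2: one tall row per (id, timestamp) with a single value column.
cur.execute("""CREATE TABLE t1_tall (
    id INTEGER,
    ts TEXT,      -- one row every 10 minutes
    value REAL,
    PRIMARY KEY (id, ts))""")

# One logical day: 1 wide row versus 144 tall rows (24 hours * 6 samples).
cur.execute("INSERT INTO t1_wide (id, day, sample1) VALUES (1, '2013-04-05', 3.14)")
cur.executemany(
    "INSERT INTO t1_tall VALUES (1, ?, ?)",
    [(f"2013-04-05T{10 * i // 60:02d}:{10 * i % 60:02d}", 3.14)
     for i in range(144)],
)
tall_rows = cur.execute("SELECT COUNT(*) FROM t1_tall").fetchone()[0]
print(tall_rows)  # 144
```

    The tall form trades 144x the row count for a fixed, narrow schema, which is what makes it robust against column-count limits and simpler for application code.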

    I mean storage in tablespaces and datafiles on disk.
    Can you please justify and give me concrete arguments why the two structures are equivalent (except for inserting data into T(id, date, value))? I have to write a report.
    I didn't say anything like "the two structures are equivalent (except for inserting data into T(id, date, value))". What I said was:
    About the structure: TABLE1(id, date, value) is better than TABLE1(id, date, sample1, ..., sample144) because:
    1) Oracle has a restriction on the number of columns. You can have 144 columns now, but what will you do if in the future you need more than 1000 columns?
    2) There are restrictions on table compression (table compression is not supported for tables with more than 255 columns).
    3) Storing values of the same type in different columns is bad practice.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i4383
    I remember seeing a Tom Kyte article about this, but I can't find it now, sorry. If I find it I will post it here.

  • Appropriate partition size for windows 7?

    I just bought Windows 7 and want to install it on the Mac Pro I recently bought. I was wondering what an appropriate partition size for Windows 7 is, and whether it matters if I install the 32-bit or the 64-bit version. I'm mainly using it for a game (League of Legends), if that makes a difference. Thank you for your time.
    -Nikko

    If you have more than 4 GB of RAM, or are planning to install more than 4 GB at some point, use the 64-bit version. The 32-bit version cannot address more than 4 GB. Windows programs are now 64-bit, your computer is 64-bit, and 64-bit Windows runs 32-bit Windows programs, so there is no reason to install 32-bit Windows.
    Windows 7 requires a lot of space, and then you need space for any data you create. At a minimum you need 50 GB plus data. I would opt for larger rather than smaller, so the minimum I would partition is 100 GB. If you conserve space now by making a small partition, you will likely fill it up quickly and wish you had made a larger partition initially. Search this forum; there are several questions on how to make a Boot Camp partition larger.

  • Best way to compare data of 2 tables present on 2 different servers

    Hi,
    We are doing a data migration, and I would like to compare data between 2 tables which are on 2 different servers. I know that to find the differences I can use MINUS or a full outer join after creating a database link.
    But my problem is the volume of the data. The tables under consideration have approximately 40-60 columns, and the number of rows in each table is around 60-70 million. Also, the two tables are on 2 different servers.
    I would like to know:
    1] What will be the best way to compare the data and report the differences from a performance perspective? I know that if I go for DB links it will definitely impact performance, as the tables are across 2 different servers.
    2] Is it advisable to use SQL/PL-SQL for this kind of scenario, or to dump the data to flat files and use C or C++ code to find the differences?
    Regards,
    Amol

    Check this at asktom.oracle.com: search for "Marco Stefanetti" and follow the few posts between Marco and Tom. As for your tables being on separate servers, you could consider dumping the data to file and using an external table, or using CTAS (create table as select) to get both tables locally.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2151582681236
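    The MINUS-style comparison can be sketched as follows (sqlite3's EXCEPT plays the role of Oracle's MINUS here, and the table names are invented for the example). Running it in both directions catches rows that differ as well as rows present on only one side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE src (id INTEGER, name TEXT);
CREATE TABLE tgt (id INTEGER, name TEXT);
INSERT INTO src VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO tgt VALUES (1, 'a'), (2, 'B');   -- row 2 differs, row 3 missing
""")

# Rows present in src but not (identically) in tgt; run the symmetric
# EXCEPT the other way round for rows only in tgt. With 60-70 million
# rows you would first bring both tables onto one server (CTAS over a
# DB link, or dump + external table) so the set operation runs locally.
diff = cur.execute(
    "SELECT * FROM src EXCEPT SELECT * FROM tgt ORDER BY id"
).fetchall()
print(diff)  # [(2, 'b'), (3, 'c')]
```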

  • Distribution Monitor for 2 different servers from 2 different sites

    Hello all,
    We are trying to use Distribution Monitor during a parallel Unicode Conversion on a SAP 4.7 system.
    The source system and target system are 2 different servers located on 2 different sites (more than 500Kms distant).
    Questions:
    1. Can we use Distribution Monitor with 1 source server dedicated for the Export and 1 target server dedicated for the import of a package?
    2. If it is not possible, what are the constraints in fact?
    3. Can we have a scenario where Distribution Monitor is used on the source system in order to use the parallelism benefit and Migration Monitor used on the target system?
    Thanks for your help & feedback,
    Chris

    Hi Chris,
    1. Can we use Distribution Monitor with 1 source server dedicated to the export and 1 target server dedicated to the import of a package? The answer is no.
    In order to use Distribution Monitor, you need a minimum of two application servers on the source system and, correspondingly, at least two application servers on the target system.
    For example, say you have application servers A and B on the source system and application servers C and D on the target system.
    Then configure the Distribution Monitor properties to include the two application servers as source systems and the two application servers as target systems. When you execute the Distribution Monitor preparation, it first scans the database servers in the source and target systems and then scans the CI servers in the source and target systems. The packages will then be distributed across the two application servers A and B.
    Run the export from application server A for the first fifty packages and, at the same time, import those first fifty packages on application server C.
    Run the export from application server B for the remaining packages and, at the same time, import the remaining packages on application server D.
    (That is a one-to-one correspondence.)
    2. If it is not possible, what are the constraints in fact? - There are no constraints as such; however, the Distribution Monitor preparation and checking is quite time-consuming.
    3. Can we have a scenario where Distribution Monitor is used on the source system in order to benefit from parallelism, and Migration Monitor is used on the target system? - The answer is no.
    You cannot mix Distribution Monitor on the source system with Migration Monitor on the target system.
    You have to use one tool or the other, depending on the size of the database.
    If your database is very large, then I recommend using Distribution Monitor, where you can have multiple R3load jobs on each application server (say, application server A uses 20 R3load jobs and application server B uses 15 R3load jobs).
    Thanks
    APR

  • Solaris 10 Max Partition size

    Hi,
    I would like to know the maximum partition size that Solaris 10 can support/create.
    We have a Sun StorEdge 6920 system with a 8 TBytes, based on 146 GBytes Hard Disks.
    Is it possible to create a 4 TBytes partition?
    if not, any suggestions are appreciated.

    Look at EFI; it allows filesystems up to large TB sizes if you need them.
    Per SUN:
    "Multi-terabyte file systems, up to 16 Tbyte, are now supported under UFS, Solaris Volume Manager, and VERITAS's VxVM on machines running a 64-bit kernel. Solaris cannot boot from a file system greater than 1 Tbyte, and the fssnap command is not currently able to create a snapshot of a multi-terabyte file system. Individual files are limited to 1 Tbyte, and the maximum number of files per terabyte on a UFS file system is 1 million.
    The Extensible Firmware Interface (EFI) disk label, compatible with the UFS file system, allows for physical disks exceeding 1 Tbyte in size. For more information on the EFI disk label, see System Administration Guide: Basic Administration on docs.sun.com."
    Found this at:
    http://www.sun.com/bigadmin/features/articles/solaris_express.html
    So you may want to look into EFI, which is a different way of partitioning the disk.

  • Expanding windows 7 partition size ...

    Hello,
    I did some searching around and found several opinions/answers on the subject. I'm not sure which direction to take: one option is Paragon's CampTune software, and the other is the Disk Management utility in Windows 7, which should allow you to expand the disk.
    The reason for my posting is two-fold. I attempted the Disk Management option and found that the 'Extend Volume' option is not available for my Boot Camp drive. I was wondering if anyone had any thoughts on why it wouldn't work.
    Also, any opinions on CampTune would be great.
    What about VMware Fusion? Could that alone expand a partition size?
    I know this is a topic that has been discussed (probably quite a bit), but I was hoping to get some more personal feedback. What is the best way to do this? If the answer is to start over with Boot Camp, that is also acceptable.
    Thank you,
    n
    Message was edited by: daidalos62

    Paragon. I'd buy the full Paragon Hard Disk Manager™ 2011 Suite.
    That includes cloning Windows to an SSD or to another drive, CampTune (resizing), and HFS access. I used it over the weekend to move Windows to an SSD (though that was with dedicated drives, not a hybrid Windows-on-Mac setup). It took all of 20 minutes and the system was able to boot.
    So you can always try backing up both OS systems and then restoring each.
    I'd say post on their forum, but threads and questions seem to go unanswered with very little activity and input, and of course it has been quiet over the two-week holidays.
    VMs are a totally different animal and have nothing to do with partitioning.
    http://www.paragon-software.com/home/hdm-personal/

  • Two drives show a difference in file size

    I have two drives, one 'LaCie' and the other 'Seagate'. Both drives have identical data; basically they are backup drives.
    The file size that shows up at the bottom of the Finder window is different on the two drives.
    They differ by 3 KB, though the individual files on both drives show the same size.
    My question is: why would there be any difference in file size if both drives contain exactly the same data?

    SORRY FOR THE DUPLICATE THREAD.....Heh, there's nothing to be sorry about :)
    I posted it just to help consolidate the posts; in case someone has the same problem in the future, they can find the whole thread.

  • Spooling of a query generates different file sizes for different databases

    Please help me with a problem with spooling. I spooled a query's output to a file from two different databases. In both databases the table structure is the same, and the output produced only one row, but the file size differs between the databases. How can this happen? Is there any database parameter that needs to be checked? Both databases are on the same version, 10.2.0.1.0.
    Before running the spool I did a
    sql> set head off feedback off echo off verify off linesize 10000 pages 0 trims on colsep ' '
    in both sessions.
    In one database the filesize is *1463 bytes* and on the other the filesize is *4205 bytes*.
    Please help me to find out these discrepancies.

    hi Mario,
    I think you are not getting my point. Both files contain the same output, but their sizes are different. This is due to the number of blank spaces between columns. I want to understand why there is a difference between the two file sizes when the query output is the same.
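    The column-padding effect is easy to simulate: SQL*Plus formats each column to its display width, so the same one-row output spools to different byte counts when the two databases define different column widths. A minimal sketch (the widths and values below are illustrative, not the poster's):

```python
# Simulate SQL*Plus column formatting: each column is padded to its
# display width, columns are joined with colsep ' ', and (as with
# 'set trims on') trailing blanks on the line are stripped.
def spool_line(values, widths):
    return " ".join(v.ljust(w) for v, w in zip(values, widths)).rstrip()

row = ("SCOTT", "ANALYST")
db1 = spool_line(row, (20, 20))    # narrow column definitions
db2 = spool_line(row, (100, 100))  # wide column definitions

# Same content, very different line lengths.
print(len(db1), len(db2))  # 28 108
```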

  • Specified a smaller partition size than the filesystem, using parted

    I feel terrible about making such a stupid mistake:
    I shrank the filesystem with resize2fs; in the next step, I used parted to resize the corresponding partition. But unfortunately, because parted treats 1 GB as 1000 MB, which is quite different from what the Linux tools report, I ended up specifying a partition size smaller than the filesystem size. As you can predict, this ruined the filesystem. How can I recover my files?
    Last edited by victl (2014-12-07 17:06:55)
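    The unit mismatch is easy to quantify: parted interprets "GB" as 10^9 bytes, while resize2fs's "G" means 2^30 bytes, so a partition and a filesystem given the same nominal size differ substantially (the 100 "GB" figure below is illustrative, not the poster's actual size):

```python
# parted's "GB" is decimal; resize2fs's "G" is binary (GiB).
GB = 1000 ** 3   # what parted means by 1 GB
GiB = 1024 ** 3  # what resize2fs means by 1G

size = 100
partition_bytes = size * GB     # a "100 GB" partition in parted
filesystem_bytes = size * GiB   # a filesystem resized to "100G"

# The partition comes up short by roughly 7% - enough to truncate
# the tail of the filesystem.
shortfall = filesystem_bytes - partition_bytes
print(shortfall)  # 7374182400 (about 6.9 GiB missing)
```

    Specifying sizes to parted in explicit binary units (GiB) or in sectors avoids the ambiguity.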

    victl wrote:
    ackt1c wrote:Boot Live CD and mount the partition, copy necessities.
    Thank you, I'll try. But is there any method to avoid data loss?
    Backups.
    Not a Sysadmin issue, moving to NC...

  • Sccli command shows partition size as 0MB

    I have configured a 3510 with 12 x 142 GB disks, 2 global spares, and RAID 10, connected to 2 SunFire V1280 servers.
    After the partitions are created, the sccli 'show partitions' command shows 0 MB sizes.
    sccli> show partitions
    LD/LV ID-Partition Size
    ld0-00 5AA178DA-00 0MB
    ld0-01 5AA178DA-01 0MB
    ld0-02 5AA178DA-02 0MB
    ld0-03 5AA178DA-03 0MB
    ld0-04 5AA178DA-04 0MB
    ld1-00 6B0565DB-00 0MB
    ld1-01 6B0565DB-01 0MB
    ld1-02 6B0565DB-02 0MB
    What causes this behaviour? Any ideas?

    Found the problem. The firmware on the 3510 was 4.21F, but the CLI was 1.5.2. After installing version 2.2 of the CLI, the issue was fixed.
    Thanks

  • Ptsearchserver recommended partition size

    Hello,
    We are using ptsearchserver 10.3.0. The existing Portal Search Server index is clustered over two nodes, each exceeding 32 GB in size. This large size contributes to instability in the search servers, corruption in the search files, and delays in restoring service when the search servers abend.
    140 GB of spare disk space was requested and mounted.
    The existing 2-node search cluster will be partitioned, resulting in each cluster being comprised of 2 search partitions.
    While little vendor documentation exists, we are hopeful that a side effect of the re-partitioning will be a reduced timeframe for re-indexing the search database, as the size will be split over multiple partitions.
    The question is whether one additional partition with two nodes is going to be sufficient to see a noticeable change in the areas listed above. If not, is there a recommended partition size? We have searched extensively and not found one, so we cannot do the math to know how many partitions should be created for our search server to operate optimally.
    Any input about the action we are attempting to take is greatly appreciated.

    It really depends on what you are going to store on D and E; you would need to decide what the partition size should be, since there is no recommended partition size for those partitions: they are customizable.
    The recommended size for C is at least 32 GB.
    http://www.microsoft.com/windowsserver2008/en/us/system-requirements.aspx

  • After duplicate operation, checkpoint file sizes are different

    Hi,
    I have some questions.
    We are testing a 4-way replication. After a duplicate operation, the checkpoint file sizes are different according to the OS command (du -sh).
    Is this normal?
    TimesTen Version : TimesTen Release 7.0.5.0.0 (64 bit Solaris)
    OS Version : SunOS 5.10 Generic_141414-02 sun4u sparc SUNW,SPARC-Enterprise
    [TEST17A] side
    [TEST17A] /mmdb/DataStore # du -sh ./*
    6.3G ./SAMPLE
    410M ./SAMPLE_LOG
    [TEST17A] /mmdb/DataStore/SAMPLE # ls -lrt
    total 13259490
    -rw-rw-rw- 1 timesten other 501 Aug 14 2008 SAMPLE.inval
    -rw-rw-rw- 1 timesten other 4091428864 Jan 29 02:13 SAMPLE.ds1
    -rw-rw-rw- 1 timesten other 4113014784 Jan 29 02:23 SAMPLE.ds0
    [TEST17A] /mmdb/DataStore/SAMPLE # ttisql sample
    Command> dssize ;
    PERM_ALLOCATED_SIZE: 8388608
    PERM_IN_USE_SIZE: 36991
    PERM_IN_USE_HIGH_WATER: 36991
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 5864
    TEMP_IN_USE_HIGH_WATER: 6757
    [TEST17B] side
    [TEST17B] /mmdb/DataStore # du -sh ./*
    911M ./SAMPLE
    453M ./SAMPLE_LOG
    [TEST17B] /mmdb/DataStore/SAMPLE # ls -lrt
    total 1865410
    -rw-rw-rw- 1 timesten other 334 Dec 11 2008 SAMPLE.inval
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:25 SAMPLE.ds1
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:25 SAMPLE.ds0
    [TEST17B] /mmdb/DataStore/SAMPLE # ttisql sample
    Command> dssize;
    PERM_ALLOCATED_SIZE: 8388608
    PERM_IN_USE_SIZE: 432128
    PERM_IN_USE_HIGH_WATER: 432128
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 5422
    TEMP_IN_USE_HIGH_WATER: 6630
    [TEST18A] side
    [TEST18A] /mmdb/DataStore # du -sh ./*
    107M ./SAMPLE
    410M ./SAMPLE_LOG
    [TEST18A] /mmdb/DataStore/SAMPLE # ls -lrt
    total 218976
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:22 SAMPLE.ds0
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:32 SAMPLE.ds1
    [TEST18A] /mmdb/DataStore/SAMPLE # ttisql sample
    Command> dssize;
    PERM_ALLOCATED_SIZE: 8388608
    PERM_IN_USE_SIZE: 36825
    PERM_IN_USE_HIGH_WATER: 37230
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 6117
    TEMP_IN_USE_HIGH_WATER: 7452
    [TEST18B] side
    [TEST18B] /mmdb/DataStore # du -sh ./*
    107M ./SAMPLE
    411M ./SAMPLE_LOG
    [TEST18B] /mmdb/DataStore/SAMPLE # ls -lrt
    total 218976
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:18 SAMPLE.ds1
    -rw-rw-rw- 1 timesten other 4091422064 Jan 29 02:28 SAMPLE.ds0
    [TEST18B] /mmdb/DataStore/SAMPLE # ttisql sample
    Command> dssize;
    PERM_ALLOCATED_SIZE: 8388608
    PERM_IN_USE_SIZE: 36785
    PERM_IN_USE_HIGH_WATER: 37140
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 5927
    TEMP_IN_USE_HIGH_WATER: 7199
    Thank you very much.
    GooGyum

    You don't really give much detail on what operations were performed and in what sequence (e.g. duplicate from where to where...) nor if there was any workload running when you did the duplicate. In general checkpoint file sizes amongst replicas will not be the same / remain the same because:
    1. Replicas are logical replicas not physical replicas. Replication transfers logical operations and applies logical operations and even if you try and do exactly the same thing at both sides in exactly the same order there are internal operations etc. that are not necessarily synchronised which will cause the size of the files to vary somewhat.
    2. The size of the file as reported by 'ls -l' represents the maximum offset that has so far been written to in the file but the current 'usage' of the file may be less than this at present.
    3. Checkpoint files are 'sparse' files (unless created with PreAllocate=1) and so the space used as reported by 'du' will in general not correspond to the size of the file as reported by 'ls -l'.
    Unless you are seeing some kind of problem I would not be concerned at an apparent difference in size.
    Chris
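    Point 3 above is the usual cause of ls-vs-du discrepancies and is easy to demonstrate in general (this is a generic sparse-file sketch, not TimesTen-specific; it assumes the filesystem supports sparse files, as ext4 and ZFS do):

```python
import os
import tempfile

# A sparse file's logical size ('ls -l', st_size) can far exceed the
# space actually allocated on disk ('du', st_blocks * 512).
# os.truncate extends the file without writing any data blocks.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 100 * 1024 * 1024)  # 100 MiB logical size, no data

st = os.stat(path)
logical = st.st_size            # what 'ls -l' reports
allocated = st.st_blocks * 512  # what 'du' is based on

print(logical, allocated)  # allocated is near zero on sparse-capable filesystems
os.remove(path)
```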

  • Partition sizes and mount points

    Are there any guidelines or recommendations for partition sizes, either absolute (MB, GB) or relative (%) for a fresh NOWS SBE 2.5 server for general office use - file, print, G'wise email, ifolder.
    Which partitions should be separate?
    I have about 450 GB to play with: 140 GB mirrored, 420 GB RAID 5
    Thanks, James.

    Originally Posted by jmclean
    Are there any guidelines or recommendations for partition sizes, either absolute (MB, GB) or relative (%) for a fresh NOWS SBE 2.5 server for general office use - file, print, G'wise email, ifolder.
    Which partitions should be separate?
    I have about 450 GB to play with: 140 GB mirrored, 420 GB RAID 5
    Thanks, James.
    Hello James,
    Refer to the Novell Open Workgroup Suite Small Business Edition 2.5 Issues Readme
    2.3 Partitioning NOWS SBE 2.5
    The pre-configured partitions selected by the install are currently the only supported partitions available. Although you can change file system types to increase swap partition size, new partitions are not supported.
    The default partitions are:
    /boot
    swap
    / (root)
    All data is in the "/" partition unless you intend to use NSS. NSS Pools must be created on an uninitialized disk or uninitialized logical volume so a LUN on your RAID5 array would be a logical candidate. For more information about installing NSS see the Issues Readme section 2.8 Installing Novell Storage Services.
    Should you decide to deviate from the standard partitioning, the partition sizes you choose would depend on the number of applications you intend to install and the amount of user data you need to store and not so much on the number of users on the system. Every deployment is different!
