Best Practice for the Maximum Number of InfoCubes Supported in a MultiCube

Hi Experts,
I would like to ask: what is the maximum number of InfoCubes that should be added to a MultiCube without hampering query performance? Can you kindly provide a link if possible? A MultiCube is the union of all its InfoCubes, right?
Many thanks, and hope to hear from you soon.
Best Regards,
Chris

While this Note does mention 10 as a maximum, it really depends on what you are trying to do.
If system resources are an issue, you can specify an RSADMIN entry that limits the number of parallel database queries spawned by a query on a MultiProvider. I think the system default might be 20, which is a lot unless you have lots and lots of CPUs.
There is also a table where you can enter the logical partitioning criteria of the MultiProvider, which can also be used to restrict the number of DB queries that get spawned, e.g.:
- You create 10 cubes, one for each Business Area in your organization. By default, a query would spawn 10 DB queries even when you only wanted data for two specific Business Areas. By setting the partitioning criterion to 0BUS_AREA, the system is smart enough to query only the two underlying cubes that contain the Business Areas you want. In an environment where you only ever query one or a few Business Areas, you could have many cubes in your MultiProvider. See Note 911939 - Optimization hint for logical MultiProvider partitioning.
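The pruning behaviour described above can be illustrated outside of BW. Here is a hedged Python sketch (not actual SAP code; the cube and field names are invented) of how a union provider can skip whole sub-queries once a partitioning criterion is declared:

```python
# Sketch: a "multiprovider" as a union over several cubes, where each cube
# holds exactly one business area. With a partitioning criterion declared,
# only the cubes matching the filter receive a DB query at all.

cubes = {
    "CUBE_BA01": {"bus_area": "BA01", "rows": [("BA01", 100), ("BA01", 200)]},
    "CUBE_BA02": {"bus_area": "BA02", "rows": [("BA02", 300)]},
    "CUBE_BA03": {"bus_area": "BA03", "rows": [("BA03", 400)]},
}

def query_multiprovider(filter_areas, partitioned=True):
    """Union the underlying cubes; prune by partition value when enabled."""
    queried = []            # which cubes actually received a DB query
    result = []
    for name, cube in cubes.items():
        if partitioned and cube["bus_area"] not in filter_areas:
            continue        # pruned: no DB query spawned for this cube
        queried.append(name)
        result.extend(r for r in cube["rows"] if r[0] in filter_areas)
    return queried, result

# Without partitioning metadata, all 3 cubes are queried even for 2 areas;
# with it, only the 2 relevant cubes are hit.
hit_all, _ = query_multiprovider({"BA01", "BA02"}, partitioned=False)
hit_pruned, rows = query_multiprovider({"BA01", "BA02"}, partitioned=True)
```

With ten business-area cubes, pruning cuts the ten parallel sub-queries down to the two that can actually contribute rows, which is the effect Note 911939 describes.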

Similar Messages

  • Maximum number of AP supported has already joined cisco

    Dear all,
    We have a vWLC with APs connected through an MPLS network, and an AP base license for 15 APs. The problem is that the APs can't join the WLC because of
    this error:
    Maximum number of AP supported has already joined cisco
    The WLC shows that all 15 licenses are used, but we have only 3 APs.
    Is this a bug? Everything worked fine before we added 4 new APs to the network.
    thank you
    I attached outputs in photos:

    The evaluation license works fine, but when our own license is active the AP summary shows no APs.
    (Cisco Controller) >show sysinfo
    Manufacturer's Name.............................. Cisco Systems Inc.
    Product Name..................................... Cisco Controller
    Product Version.................................. 7.4.100.60
    RTOS Version..................................... 7.4.100.60
    Bootloader Version............................... 7.4.100.60
    Emergency Image Version.......................... 7.4.100.60
    Build Type....................................... DATA + WPS
    System Name...................................... Cisco VWLC
    System Location.................................. ESXI
    System Contact...................................
    System ObjectID.................................. 1.3.6.1.4.1.9.1.1631
    IP Address....................................... 192.168.6.1
    System Up Time................................... 0 days 17 hrs 29 mins 9 secs
    System Timezone Location......................... (GMT +4:00) Muscat, Abu Dhabi
    System Stats Realtime Interval................... 5
    System Stats Normal Interval..................... 180
    Configured Country............................... Multiple Countries:BY,MX,US
    --More-- or (q)uit
    State of 802.11b Network......................... Enabled
    State of 802.11a Network......................... Enabled
    Number of WLANs.................................. 7
    Number of Active Clients......................... 41
    Memory Current Usage............................. Unknown
    Memory Average Usage............................. Unknown
    CPU Current Usage................................ Unknown
    CPU Average Usage................................ Unknown
    Burned-in MAC Address............................ 00:50:56:9F:68:43
    Maximum number of APs supported.................. 200
    (Cisco Controller) >show ap summary
    Number of APs.................................... 3
    Global AP User Name.............................. Not Configured
    Global AP Dot1x User Name........................ Not Configured
    AP Name             Slots  AP Model              Ethernet MAC       Location          Port  Country  Priority
    Meeting_Room         2     AIR-LAP1142N-E-K9      54:75:d0:f5:3a:e4  default location  1        BY       1
    Fttb                 2     AIR-CAP3602I-A-K9     30:f7:0d:29:03:42  default location  1        MX       1
    Technical            2     AIR-LAP1142N-E-K9      c8:9c:1d:f4:72:8a  default location  1        BY       1

  • What is the maximum number of users supported by local P2P?

    What is the maximum number of users supported by local P2P?

    The maximum number of DIRECT_CONNECTIONS NetStreams in one NetConnection is controlled by NetConnection.maxPeerConnections. The default is 8, but you can change it to anything.

  • Best practice for phone number column data type

    Hi,
    I hope I have posted this in the right forum; if not, just notify me and I will move it to the correct area.
    What would be considered best practice for storing a phone number:
    Number or Varchar2?
    Ben

    Well, I was thinking that Number would have the following disadvantages:
    1. Users entering phone numbers into a form may be tempted to pre-format them with spaces, thereby throwing up an error.
    2. Mobile phone numbers may have leading zeros
    3. Calculations are not carried out on phone numbers.
    I was leaning towards a varchar2 type.
    Ben
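    Ben's points 1 and 2 are easy to demonstrate. A hedged Python sketch (Python's `int` stands in for a database numeric type here) of what goes wrong when a phone number is treated as a number:

```python
# Point 2: a numeric type silently drops leading zeros.
raw = "0501234567"
as_number = int(raw)          # becomes 501234567 -- the leading zero is gone
restored = str(as_number)
leading_zero_lost = (restored != raw)

# Point 1: user-formatted input simply fails numeric conversion.
formatted = "050 123 4567"
try:
    int(formatted)
    conversion_failed = False
except ValueError:
    conversion_failed = True   # the spaces "throw up an error", as Ben says

# Point 3 is the clincher: nobody sums or averages phone numbers, so the
# only argument for a numeric type disappears. Store them as VARCHAR2.
```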

  • Maximum number of emails supported in a folder

    Hi Guys,
    What is the maximum number of emails you can have in one folder on an iPhone 5S?
    We have a user whose Exchange mailbox is around 16.5GB and he has about 93K items overall and around 20K in the sent items. Intermittently he loses emails from his Sent Items going back to March 2010. Then it takes days to re-sync the Sent Items folders. This happens on his iPhone 5S(iPhone6C2/1104.257) and iPad2C1, we use Exchange Activesync.
    Thanks


  • Maximum number of connection supported by weblogic

    Hello,
    In our current application we are using WebLogic 7.0 as the app server and Oracle 9i as the DB server.
    The problem is that the system is running out of connections from the connection pool very frequently because of an increase in the number of users. So far, the maximum connection capacity in the WebLogic connection pool is 15 (the default value). We thought of increasing the connections as an immediate solution.
    I would like to know the maximum value that can be set for the max connections attribute. Does WebLogic impose any restriction on the maximum number of connections in the pool, or does the DB server decide?
    Can you please tell us the maximum value we can set for both WebLogic and Oracle.
    Any help would be appreciated.
    Thanks
    Viswa

    Please go through the below link:
    https://msdn.microsoft.com/en-us/library/cc645993(v=sql.105).aspx#SSAS
    Support for Windows Server 2012 was added in a cumulative update for SQL Server 2008 SP3. Although Windows Server 2012 supports 64 physical processors and 640 logical processors, not all of the SQL Server 2008 or 2008 R2 services were retrofitted to support the additional processing capability of Windows Server 2012. Specifically, Analysis Services does not support more than 64 logical processors in either SQL Server 2008 or SQL Server 2008 R2.

  • Best Practices and Maximum Availability

    We are on the upgrade path going from 10.2.0.4 to 11.2. I was excited when reading about Transient Logical Standby and how it could cut down your time to just a few minutes of outage. However, I found that we are unable to use it because we have data types that can not be used with Logical Standbys. I was hoping that someone could point me to another Whitepaper or documentation of best practices of upgrading with a Physical Standby from 10g to 11g.
    Regards
    Tim

    Thank you for the reference. It is an interesting concept with the transportable tablespaces. I will have to read this in a little more detail. I am not sure how I get my physical standby back into place. I don't want to have to rebuild it...but maybe the document covers that as well (I didn't see it on the first read).
    Thanks for the information.
    Regards
    Tim

  • Maximum number of files supported in a folder for FileTable feature

    Hi,
    I have implemented a solution using SQL 2012 FileTable with an expected workload of 150K files a year. The files are not that big, just a few KB each (EDI files), but I am wondering what the supported/recommended limit is on the number of files in a folder for FileTable?
    Thanks
    Usman Shaheen MCTS BizTalk Server http://usmanshaheen.wordpress.com

    The theoretical limit is the NTFS volume limit. This is provided in the below article.
    NTFS Physical Structure
    However the practical limit depends on your application and business scenario. You can read about FILESTREAM and FileTable internals and about performance considerations from the below excellent article
    Microsoft SQL Server 2012 Internals: Special Storage - Kalen Delaney and Craig Freedman
    Krishnakumar S
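    Since the practical limit is application-dependent, a common mitigation (not FileTable-specific, and the two-character bucket width is an arbitrary assumption) is to spread files across subfolders by a hash prefix so that no single folder grows unbounded. A Python sketch:

```python
import hashlib

def bucket_for(filename: str, width: int = 2) -> str:
    """Derive a stable subfolder name from the file name's MD5 prefix."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return digest[:width]

# 150K files/year spread over 256 two-hex-digit buckets averages
# under 600 files per bucket per year.
files = [f"order_{i}.edi" for i in range(1000)]
buckets = {name: bucket_for(name) for name in files}
```

The same name always maps to the same bucket, so a lookup goes straight to one small directory instead of scanning one huge one.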

  • Best practice for maximum performance

    Hi all
    OS : Linux AS 4.2
    Oracle 9.2.0.8
    My database size is around 4TB (increasing by 5GB every day), and we follow a method of analyzing only the main tables' latest partitions, because collecting statistics for everything would take more time. Is that a good practice?
    Is there any difference between the ANALYZE TABLE command and using the DBMS_STATS package to collect statistics?
    Does collecting complete statistics versus estimating 33% or 15% make any difference (performance-wise)?
    Indexes are one of the main factors that affect database performance. As a DBA, what should I do on a daily basis to maintain indexes in a proper state?
    I just want to know the best way of managing statistics and indexes properly.
    Many thanks in advance
    Nishant Santhan

    Nishant Santhan wrote:
    > Hi all
    > OS : Linux AS 4.2
    > Oracle 9.2.0.8
    > My database size is around 4TB (every day increasing 5GB) and we are following a method like analyze the main table's latest partitions only because the entire statistics collection will take more time. Is that a good practice?

    Nishant,
    it depends, as always. If you have (important or large) queries that rely on global-level statistics, then the statistics generated by "aggregation" (provided you have complete statistics on partition level) might be insufficient, or they might become more and more outdated if you generate them once and don't update them any longer (provided you're using DBMS_STATS; ANALYZE is not capable of generating genuine global-level statistics).
    Note that 11g offers a significant improvement here with "incremental" global statistics. See Greg Rahn's blog note about this interesting enhancement:
    http://structureddata.org/2008/07/16/oracle-11g-incremental-global-statistics-on-partitioned-tables/

    > Is there any difference between analyze table command and using dbms_stats package to collect statistics?

    Definitely there are, more or less subtle ones. Regarding partitioning, as already mentioned above, DBMS_STATS is capable of generating "real" top-level statistics at the price of "duplicate" work (analyzing the table as a whole potentially reads the same data again that has been used to generate the partition-level statistics), but see the comment about 11g above.
    There are more subtle differences, e.g. regarding the calculated average row length, etc.

    > Collecting entire statistics or estimate 33% or 15% will make any difference (Performance level)?

    It all depends on your data. 1% might be sufficient to get the plans right, but sometimes even 33% might not be good enough. 11g adds another significant improvement here, using a cunning "AUTO_SAMPLE_SIZE" algorithm that seems to get quite close to "computed" statistics while taking far less time:
    http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/

    > Indexes are one of the main factors which will affect database performance. As a DBA, what should I do on a daily basis to maintain indexes in a proper state?

    B*tree indexes should in general be in good condition without manual interference; there are only a few circumstances where you should consider performing manual maintenance, and most of these can be covered by the "COALESCE" operation. An actual "REBUILD" should not be required in most cases.
    You might want to read Richard Foote's blog about indexes:
    http://richardfoote.wordpress.com
    and Jonathan Lewis' notes about index rebuilding:
    http://www.jlcomp.demon.co.uk/indexes_i.html
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
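    Randolf's point that 1% can be enough while 33% may still mislead comes down to how well a sample reflects the column's value distribution. A hedged Python sketch (a toy illustration, not Oracle's actual algorithm) of counting distinct values from a sample:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# A skewed column: one very frequent value plus 10,000 singletons.
column = ["common"] * 90_000 + [f"rare_{i}" for i in range(10_000)]
exact_ndv = len(set(column))          # 10_001 distinct values in total

def sample_distinct(data, pct):
    """Distinct values actually seen in a pct% sample (no scale-up)."""
    n = int(len(data) * pct / 100)
    return len(set(random.sample(data, n)))

d1 = sample_distinct(column, 1)    # a 1,000-row sample can see at most 1,000
d33 = sample_distinct(column, 33)  # sees far more of the tail, yet still not all
```

On a skewed column even a 33% sample misses most of the singleton values, which is why 11g's AUTO_SAMPLE_SIZE (an approximate-NDV approach) was such an improvement over a fixed estimate percentage.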

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order:
    AAAA150001
    AAAA is the prefix,
    15 is the year, and 0001 is the sequence number.
    I proposed the table as below
    Prefix    | Year     | Number
    AAAA    | 15        | 1
    Using the SQL query below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT ('0000'+ CAST (Number+1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    After the save process completes, update the running number table:
    UPDATE RunningNumber SET Number = (Number +1) WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach and good to handle concurrent saving?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset, table will as below
    Prefix    | Year     | Number
    AAAA    | 15        | 8749
    AAAA    | 16        | 1
    I could only use option1 from your ref.
    To use this approach, I must make sure that
    a) no number will be duplicated or skipped, as multiple users use the system concurrently, and
    b) the number will not increment when an error occurs after getting the new number.
    Could using the following methods achieve a) and b)?
    1) .NET SqlTransaction.Rollback
    2) SQL
    ROLLBACK TRANSACTION
    To avoid repeating information, the details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Thanks.
    Best Regards,
    mintssoul
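    The duplicate/skip concerns in a) come down to making the read-increment-write step atomic. A hedged Python sketch (an in-memory stand-in for the RunningNumber table; in T-SQL the same effect comes from doing the increment in a single statement inside a transaction):

```python
import threading

class RunningNumber:
    """In-memory stand-in for the RunningNumber table, keyed by (prefix, year)."""
    def __init__(self):
        self._counters = {}
        self._lock = threading.Lock()   # plays the role of the row lock

    def next_number(self, prefix, year):
        # Read + increment under one lock: no two callers can see the same value.
        with self._lock:
            current = self._counters.get((prefix, year), 0) + 1
            self._counters[(prefix, year)] = current
        return f"{prefix}{year}{current:04d}"

gen = RunningNumber()
issued = []
issued_lock = threading.Lock()

def worker():
    n = gen.next_number("AAAA", "15")
    with issued_lock:
        issued.append(n)

# 50 concurrent "users" each draw one order number.
threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

For b), the rollback question: in the SQL version the increment and the order save sit in the same transaction, so a failure before commit rolls both back together; this sketch sidesteps that by never handing out a number it has not already recorded.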

  • What is a best practice for counting number of payments in Payroll Run?

    I'm trying to get a feel for what people use to count the number of payments created during a payroll run. I would like to get a total number of checks, direct deposits, wires, and zero-net checks. My initial thought is to use the Pre-DME number "Selected and Evaluated Persons". This number seems to always match the number of persons "Selected" and "Evaluated" in posting to FI/CO.
    Since many companies restrict use of SE16 and access to many of the BT tables is locked down, I would like to use something that the end user has access to and can easily use. I don't believe the spools from RFFOUS_C and RFFOUS_T are appropriate, since they would include overflow pages (2-page REM statements) in that number and thus would not accurately reflect the total number of payments.
    I would appreciate your insight and thoughts on this topic.
    Thanks!

    Hello Jennie,
    This is what we do every payroll:
    1. After running live payroll and before exiting, our users run the wage type reporter for /559 to get the payroll net pay. They also run a Postings to FI/CO simulation run to check the payroll net pay GL account. These match all the time, so this is the first pass.
    2. They finish all subsequent activities in the PRD run up to RFFOUS_T.
    3. We have developed a custom report, which looks almost like REGUH, that gives the net pay details by payment method and the amount to be deposited or paid. We match this report against the output we received from step 1.
    4. Run Postings to FI/CO live and check the payroll net pay GL account for final validation.
    The steps might sound tedious but are of great use.
    Arti
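    Arti's steps are essentially a multi-source reconciliation. A hedged Python sketch (the payment methods and amounts are invented) of the comparison made between steps 1 and 3:

```python
# Net pay by payment method from two independent sources:
# the wage type reporter (/559) and the REGUH-style custom report.
wage_type_totals = {"check": 12_500.00, "direct_deposit": 84_300.50, "wire": 5_000.00}
custom_report_totals = {"check": 12_500.00, "direct_deposit": 84_300.50, "wire": 5_000.00}

def reconcile(a, b):
    """Return the payment methods whose totals disagree between two reports."""
    methods = set(a) | set(b)
    return {m: (a.get(m, 0.0), b.get(m, 0.0))
            for m in methods if a.get(m, 0.0) != b.get(m, 0.0)}

mismatches = reconcile(wage_type_totals, custom_report_totals)
```

An empty result means the two sources agree and the run can proceed to the live FI/CO posting; any entry pinpoints which payment method needs investigation.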

  • Best practices for maximum performance of C# CLR scalar UDF in View

    Can anyone give some tips for best performance of a C# CLR UDF on SQL 2008 R2 or SQL 2012? Including maximizing the performance of the transition to CLR. I am using it in a View.  E.g.,
    create view myview (id, name, address) as select id, dbo.MyUDF(name), address from mytable;
    In particular:
    Are there any specific C# build options or source code options? The MyUDF prototype is defined as:
        [Microsoft.SqlServer.Server.SqlFunction()]
        public static SqlString MyUDF(SqlString data)
    Is it better to build the DLL for target x64 or Any CPU?
    Does the use of a specific Target Framework have any effect? (e.g., DotNet 3.5 versus DotNet 4.0 or 4.5)
    Is there any way to inform SQL Server that the UDF is safe for Parallelism? 
    I read somewhere that using the InProc ado.net Provider might improve performance but I don't see any mention of a Provider in the documentation for Create Assembly.
    Thanks for any tips.
    Neil Weicher
    www.netlib.com

    * Does the use of a specific Target Framework have any effect? (e.g., DotNet 3.5 versus DotNet 4.0 or 4.5)
    Yes, if you select 4.0 or higher, the assembly will not load on SQL 2008 R2.
    * Is there any way to inform SQL Server that the UDF is safe for Parallelism? 
    User-defined functions (save for inline ones) are in general a good recipe if you want to make sure that there is no parallelism. That is, the answer to your question is no, and even if there were such a hint, SQL Server would not care.
    * I read somewhere that using the InProc ado.net Provider might improve performance but I don't see any mention of a Provider in the documentation for Create Assembly.
    Not sure what you mean here, but if you mean the context connection, this is in your code and has nothing to do with what you put in CREATE ASSEMBLY. Except that, if you connect by some means other than the context connection, the assembly needs to have the EXTERNAL_ACCESS permission. But are you really doing data access from your UDF?
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best practice for number of result objects in webi

    Hello all,
    I am just wondering if SAP has any recommendation or best practice document regarding the number of fields in the Result Objects area for WebI. We are currently running XI 3.1 SP3. One of the end users is running a WebI report with close to 20 objects/dimensions and 2 measures in the result objects. The report runs for 45-60 minutes and sometimes times out. The cube that stores the data has around 250K records, and the report returns pretty much all of them.
    Any recommendations/ best practices?
    On a similar issue: our production system is around 250GB; what would the memory on your server typically be? Currently we have 8GB of memory on the SAP instance server.
    Thanks in advance.

    Hi,
    You mention cubes, so I suspect BW or MS AS. Yes, OLAP data access (ODA) to OLAP datasets is a struggle for Web Intelligence, which is best at consuming relational rowsets.
    Inefficient MDX queries can easily be generated by the WebI tool, primarily due to substandard (or excessive) query and document design. Mandatory filters and focused navigation (i.e. targeted BI questions) are the best route to success.
    Here's an interesting article about "when is a webi doc too big": https://weblogs.sdn.sap.com/pub/wlg/18706
    Here's a best practice doc about WebI report design and tuning on top of BW MDX: https://service.sap.com/~sapidb/011000358700000750762010E
    Optimization of the cube itself, including aggregates and cache warming, is important, but especially the use of Suppress Unassigned Nodes in the BW hierarchy and "query stripping" in the WebI document.
    Finally, the patch level of the BW (BW-BEX-OT-MDX) component is critical; anything lower than 7.01 SP09 is trouble (memory management, MDX optimization, functional correctness).
    Regards,
    H

  • What is the maximum number of PVC's supported by Cisco BPX 8620 and 8680 chassis with BCC-4V 128MB DRAM and 4 MB BRAM?

    We are working on a capacity planning project for one of our customers and we need an estimate on the maximum number of PVCs supported in the following situations:
    a)Cisco BPX 8620 and 8680 chassis with BCC-4V 128MB DRAM and 4 MB BRAM ?
    b)Maximum number of PVC's supported by each of the following STM-1 cards:
    - model BXM-155-4D and 4DX ?
    - model BXM-155-8D and 8DX ?

    a) It depends on the software level. b) 16,000 per card. With release 9.3:
    60K Connections Support on BXM-E—Provides the ability to support a maximum of 60K per card for VSI applications for the BPX 8600, for example, PNNI or MPLS, used on enhanced BXM-E cards.

  • Best Practices- Number of users in Contract Manager

    What is the best practices for the number of users operating in Primavera Contract Manager? We currently have v13.0.3.0

    There are no real limits to the number of users working in PCM, but your server(s) must be sized correctly to work effectively. I've seen PCM instances with hundreds of users. Sizing requirements are typically included with your installation materials; otherwise you may want to look in the knowledge base for this document.
