Impact of BIA on BI 7.0 sizing in terms of CPU requirements

Hello,
We are finalizing the sizing of our future BI 7.0 environment.
As designed, the BI 7.0 server capacity will be determined by the SAPS/memory results from the Quick Sizer.
In addition, to be on the safe side, we also have to keep in mind that our BI 7.0 database server must be SAPS-scalable.
In concrete terms, that means buying a much more expensive server than expected.
Besides, we are also planning to deploy BIA next year. Since the Accelerator reduces the BI workload (data reads, rollups, change runs, etc.), we figure we won't need as much CPU in the future.
Based on your BI Accelerator experience, what do you think of those assumptions?
Thank you for your attention and your help.
Best Regards.

Hi Raoul
Your assumptions are legitimate with regard to BWA removing some of the workload from BW. However, before you make those assumptions you need to be committed to purchasing BWA next year. If for some reason your company does not purchase BWA next year, and you made sizing decisions for the BW environment based on a BWA purchase, you could be grossly undersized.
If you are going to factor those assumptions into sizing BW, you will also have to understand how you plan to use BWA once you get it. For instance, if you only plan to use BWA for a handful of cubes with poorly performing queries, you will not reduce the BW workload nearly as much as if you put every InfoCube in BWA and ran with zero aggregates. A lot of thought needs to be put into this up front. Considering you are just beginning with BW, I would personally recommend sizing your BW without any assumption of purchasing a BWA. This is certainly the safest.
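To make the tradeoff concrete, here is a minimal back-of-envelope sketch. It is not an SAP tool; the Quick Sizer result, safety margin, and BWA offload factor below are all illustrative assumptions you would replace with your own figures.

# Hypothetical SAPS sizing with and without crediting a BWA offload.
# All numbers are illustrative assumptions, not SAP guidance.

QUICKSIZER_SAPS = 12_000   # assumed Quick Sizer result for the DB server
SAFETY_MARGIN = 0.30       # assumed headroom for growth and peaks
BWA_OFFLOAD = 0.25         # assumed share of DB load BWA removes
                           # (reads, rollups, change runs) - varies widely

def required_saps(quicksizer_saps, safety_margin, bwa_offload=0.0):
    """Return the SAPS to provision, optionally crediting a BWA offload."""
    return quicksizer_saps * (1.0 + safety_margin) * (1.0 - bwa_offload)

print(f"Without BWA: {required_saps(QUICKSIZER_SAPS, SAFETY_MARGIN):,.0f} SAPS")
print(f"With BWA:    {required_saps(QUICKSIZER_SAPS, SAFETY_MARGIN, BWA_OFFLOAD):,.0f} SAPS")

The gap between the two numbers is exactly the capacity you would be missing if the BWA purchase slipped, which is why sizing without the BWA credit is the safer option.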
Regards,
Josh
SAP NetWeaver RIG

Similar Messages

  • Resource estimation/Sizing (i.e CPU and Memory) for Oracle database servers

    Hi,
    I have come across a requirement for Oracle database server sizing in terms of CPU and memory. Does anybody have Metalink notes or a white paper with a basic estimation or calculation of resources (i.e. CPU and RAM) based on database size, number of concurrent connections/sessions, and/or number of transactions?
    I have searched Metalink a lot but failed to find anything; it would be a great help if anybody has an idea on this. I'm quite sure something must exist, because to start an IT infrastructure implementation one has to estimate resources in line with the IT budget.
    Thanks in advance.
    Mehul.

    You could start the other way around: if you already have a server, is it sufficient for the database you want to run on it? Is there sufficient memory? Is it solely a database server (not shared)? How fast are the disks - SAN/RAID/local disk? Does it have the networking capacity (100 Mbps, gigabit)? How many CPUs, and will there be intensive SQL? How does Oracle licensing fit into it? What type of application will run on the database - OLTP or OLAP?
    If you don't know whether there is sufficient memory/CPU, then profile the application based on what everyone expects: again, start with OLTP or OLAP and work your way down to the types of queries/jobs that will be run, the number of concurrent users, and what performance you expect/require. For an OLAP application you may want the fastest disks possible, multiple CPUs, and a large SGA and PGA (2-4 GB PGA?), and pay a little extra in license fees for parallel server and partitioning.
    This is just the start of an investigation, then you can work out what fits into your budget.
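    As a rough illustration of that profiling step, here is a minimal sketch of turning an expected transaction profile into a core count. Every figure is a placeholder assumption, not an Oracle recommendation:

    peak_tx_per_sec = 150      # expected peak transactions per second (assumed)
    cpu_ms_per_tx = 8          # assumed CPU time per transaction - measure it!
    target_cpu_util = 0.65     # don't plan to run the CPUs flat out

    cpu_seconds_needed = peak_tx_per_sec * cpu_ms_per_tx / 1000
    cores = cpu_seconds_needed / target_cpu_util

    print(f"CPU demand at peak: {cpu_seconds_needed:.2f} CPU-seconds/second")
    print(f"Cores to provision: {cores:.1f}")

    The same shape of calculation works for memory (sessions x per-session footprint) and I/O (transactions x reads per transaction).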
    Edited by: Stellios on Sep 26, 2008 4:53 PM

  • How to properly size Coherence? (3.6)

    My customer needs to set up a new Coherence environment and needs to understand the sizing of the systems in terms of CPU and memory.
    Is there any tool that can help?
    Thank you
    Chiara

    Hi Chiara,
    In order to get a better understanding of sizing in terms of CPU and memory, I would suggest that your customer take a look at the Coherence Best Practices document available at http://coherence.oracle.com/display/COH35UG/Best+Practices
    Also, in order to achieve maximum performance, your customer should take a look at the Performance Tuning guide available at http://coherence.oracle.com/display/COH35UG/Performance+Tuning
    If your customer is going to use Coherence*Extend, then the Best Practices for Coherence Extend document available at http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appbestextend.htm would be useful too.
    Finally, to monitor the Coherence cluster, your customer can use JMX tools as explained at http://coherence.oracle.com/display/COH35UG/How+to+Manage+Coherence+Using+JMX
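    As a rough starting point before diving into those documents, the capacity arithmetic for a partitioned cache typically multiplies the primary data by the number of copies and an overhead factor, then divides by the heap available per storage node. A minimal sketch, where every figure is a made-up assumption to be replaced with the customer's own numbers:

    import math

    data_gb = 20            # assumed primary data volume to be cached
    backups = 1             # assumed backup count (each entry held backups+1 times)
    overhead = 1.3          # assumed factor for per-entry and index overhead
    heap_per_node_gb = 4    # assumed usable storage heap per cache server JVM

    total_gb = data_gb * (backups + 1) * overhead
    nodes = math.ceil(total_gb / heap_per_node_gb)
    print(f"Total cache footprint: {total_gb:.1f} GB -> {nodes} storage nodes")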
    Hope it helps.
    -Cris

  • APEX vs Forms 6i - Processor/System/Network Overhead

    We have been developing and deploying applications using Forms 6i for some years, have moved to web forms, and are now developing in APEX. The IT department of a client of ours has asked us to provide the relative performance merits and CPU impact of each of the three technologies, with particular focus on the server on which the Oracle database is running, in order to determine a basis for charging.
    Assume that the application is the same, i.e. there is a common set of PL/SQL commands across the deployment technologies. Would it be true to say that the impact would be roughly the same for Forms and web forms, since these are generally deployed with separate forms or application servers, but higher for APEX, since APEX PL/SQL commands are required to build web pages before they are sent (in this case) to the Oracle HTTP Server? If so, are there any figures available to substantiate this case?
    Taking this one step further: given that there is a network overhead for each of the deployments (in addition to the database overhead), has anyone conducted an analysis of the relative efficiencies of the three in presenting the same content? Or any insight as to what that might be? This could potentially be offset against an increase in database server cycles, if the former is true.
    Thanks very much for your help.
    Regards, Malcolm

    This will be hard to quantify without running your own tests, but based on feedback from other customers, the server resources required for APEX are somewhere in the neighborhood of 1/3 to 1/10 of those required for Forms. This is especially true for memory, since every Forms client requires a dedicated server connection whereas APEX uses connection pooling. So, let's say you have 1,000 Forms users with an average memory requirement of 5 MB per client (just guessing here): that's roughly 4.9 GB of RAM just for client connections. The typical number of sessions in an APEX deployment of that size is 10-20, i.e. 50-100 MB of RAM for client connections. The CPU impact of rendering APEX pages is very small compared to the CPU required for most of the queries your developers will write. One of the busiest internal APEX instances serves over 200,000 page views per day on a 4-processor machine.
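    To put that comparison in numbers, here is a minimal sketch; the 5 MB per-connection footprint is the guess from the paragraph above, and the pool size is the typical range quoted there:

    forms_users = 1000
    mb_per_dedicated_conn = 5          # guessed per-client server connection
    apex_pooled_sessions = (10, 20)    # typical pool size for that user count

    forms_ram_mb = forms_users * mb_per_dedicated_conn
    apex_ram_mb = tuple(n * mb_per_dedicated_conn for n in apex_pooled_sessions)

    print(f"Forms (dedicated connections): {forms_ram_mb / 1024:.1f} GB")
    print(f"APEX (connection pool):        {apex_ram_mb[0]}-{apex_ram_mb[1]} MB")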
    Regarding network traffic, I'm not sure, but you could measure the Forms traffic with Wireshark. You can probably estimate an average APEX page view to be somewhere between 35 and 50 KB, excluding CSS, JavaScript, and images, which should only need to load on the first page view. I highly doubt either client-server Forms or web forms uses less than that.
    Thanks,
    Tyler

  • RAC Capacity Planning

    Does anybody have a document or some rules of thumb for performing server sizing in terms of CPU speed, number of CPUs, kind of CPU, storage sizing, RAM, etc. for DSS or OLTP applications?
    Kind Regards | Sanjiv

    Hi Sanjiv,
    I wanted to post the same question when I found your thread. Unfortunately, I also have questions rather than answers, but I hope that by providing a more specific question, I can attract some interest to this thread.
    We are in the process of installing Oracle 11g R2 RAC on an IBM Power P740 server. We use IBM logical partitioning (LPARs) and, more specifically, the micro-partitioning feature (the capability to allocate fractions of a core to an LPAR). The operating system is AIX 6.1. I/O is virtualized through a VIO partition.
    We have started with 5 non-clustered Oracle 11.2.0.2 instances and planned to deploy them in 4 two-node clusters. Each cluster would host only a single instance, except for one cluster that was supposed to host two database instances. We sized clusters by assigning to each node the same number of cores that we had on the non-clustered instances.
    We soon learned that the clusterware and ASM processes represent a significant CPU utilization overhead of about 0.35 cores on each node. So if a node runs on an LPAR with 0.6 core assigned to it, CPU utilization is above 50% even when the cluster is idle. As a result, although we doubled the resources when migrating to the cluster, we have an undersized system and an insufficient number of cores in the resource pool.
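    To illustrate the arithmetic (0.35 cores of overhead is the figure we observed; the 0.6-core entitlement is our LPAR configuration):

    clusterware_cores = 0.35    # observed idle CPU cost per cluster node
    lpar_entitlement = 0.6      # cores assigned to the micro-partitioned LPAR

    idle_utilization = clusterware_cores / lpar_entitlement
    print(f"Idle CPU utilization: {idle_utilization:.0%}")  # ~58% before any workload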
    At this point we are attempting to resolve the problem by consolidating the database instances into 2 clusters only: a first 2-node cluster with a single large instance, and a second 2-node cluster with 4 smaller instances. We hope that reducing the number of clusters per P7 server will reduce the compound effect of the clusterware overhead on CPU utilization of the P7 resource pools.
    Can somebody please comment on our sizing assumptions and CPU utilization findings:
    1. We have sized each node in the 2-node cluster to be the same size as the original non-clustered Oracle 11g instance. Is this sizing approach common?
    2. Our finding is that the clusterware processes cause significant CPU utilization overhead and that we cannot use micro-partitioning as we did before for non-clustered Oracle 11g instances. In other words, we now need LPARs with a minimum of 1 core assigned to each node.
    Thank you and regards,
    VladB

  • Impact of logical partitioning on BIA

    We are on BI release 701, SP 5. We are planning to create logical partitions for some of the InfoCubes in our system. These cubes are already BIA-enabled, so will the creation of logical partitions have any impact on BIA, or improve the BIA rollup runtime?

    Hi Leonel,
    Logical partitioning will have an impact on BIA in terms of performance.
    The current cube is already indexed on BIA. If you now split the current cube's data across different cubes and create a MultiProvider on top of them, each cube will have its own F-table index on BIA.
    You have to add the new cubes to BIA, execute the initial filling step, and set up rollups for the respective cubes.
    Point to be noted:
    When data is deleted from the current cube and moved to the other cubes, the corresponding entries are not deleted from that cube's F-table index in BIA. There will be index entries for records which are no longer present in the cube. It is therefore good practice to flush BIA, which removes all the index entries for the current cube, and then create new indexes on BIA.
    That way we have consistent indexes on BIA which will not hamper performance.
    This will also improve rollup time, as there is less data in each cube after logical partitioning. To improve rollup time further, we can implement delta indexing on BIA as well.
    Question: why do we want to create logical partitions for cubes which are already on BIA, given that queries will never hit the cubes in the BI system?
    Regards,
    Kishanlal Kumawat.

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning on implementing the BI Accelerator (BIA) and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned into that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, standard (basic) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • The SGA was inadequately sized, causing additional I/O or hard parses.

    Dear all,
    We are using 10g RAC on Solaris 5.10.
    Very frequently, we are getting the message below in DB Console:
    The SGA was inadequately sized, causing additional I/O or hard parses.
    Additional Information
    The value of parameter "sga_target" was "15360 M" during the analysis period.
    Under recommendations, I found:
    Increase the size of the SGA by setting the parameter "sga_target" to 30720 M.
    Findings path (impact %):
    - The SGA was inadequately sized, causing additional I/O or hard parses: 27
    - Wait class "User I/O" was consuming significant database time: 13.2
    - Hard parsing of SQL statements was consuming significant database time: 4.9
    - Contention for latches related to the shared pool was consuming significant database time: 0.6
    - Wait class "Concurrency" was consuming significant database time
    Can I rely on this information alone and increase the SGA? Is there any other way I can confirm this?
    Dear seniors,
    Please ignore this thread if you find this question silly.
    Please advise
    Kai

    Hi Kai,
    "Can I rely on this info alone and increase SGA? Is there any other way I confirm this?"
    Yes. Oracle has specific "cache" advisors that will say whether your data buffer or shared pool regions are too small.
    I have my notes here:
    http://www.dba-oracle.com/art_builder_buffers.htm
    The best way to see them is to run a STATSPACK or AWR report . . .
    "If you find this question silly..."
    It's not silly at all, it's a VERY common question!
    http://www.dba-oracle.com/t_estimating_sga_size.htm
    If you are running Oracle on a dedicated server, it's wasteful NOT to allocate all of the RAM to Oracle, less 20% for the OS . . .
    Lastly, remember that Oracle has an insatiable appetite for RAM, but there is a point of diminishing marginal return . . .
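    As a worked example of that rule of thumb (the RAM size and the SGA/PGA split below are illustrative assumptions; validate them against the AWR/STATSPACK advisories):

    server_ram_mb = 48 * 1024    # assumed physical RAM on a dedicated DB host
    os_reserve = 0.20            # the "less 20% for the OS" rule of thumb
    sga_share = 0.75             # assumed SGA vs. PGA split for a mixed workload

    oracle_mb = server_ram_mb * (1 - os_reserve)
    print(f"sga_target ~ {oracle_mb * sga_share:,.0f} M")
    print(f"PGA budget ~ {oracle_mb * (1 - sga_share):,.0f} M")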
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • BI70 upgrade and landscape sizing

    We are currently part of the BI 7.0 ramp-up. Our whole BW landscape (21 systems) is currently running on BW 3.1 SPS 22. For the upgrade of the BW ABAP stack to BI 7.0, I would like an estimate of how to size the different systems.
    The Quick Sizer only allows us to enter data for a new installation, not to get the delta between different versions.
    So could anybody advise on how we can plan the new sizing for the currently used processes, and how the additional or specific BI 7.0 processes could impact the sizing in terms of memory, CPU, and disk capacity?
    Thanks
    Message was edited by: Alexis Lefebvre

    Hi Vladimir,
    In LMDB, if the landscape pattern is Hub (the technical system is used in several product systems), then during MOPZ stack generation it will show the upgrade details for the respective landscape.
    If the landscape pattern is Sidecar (the technical system is used in a single product system),
    MOPZ will show only that particular system's upgrade details.
    So if ECC is a sidecar, MOPZ will show only the respective stack details; if ECC is added as a hub, like ERP 6.0, SAP NetWeaver 7.0 and SAP SRM, then it will show the upgrade details for all the product systems.
    Check the below link also for more details:
    https://websmp101.sap-ag.de/~sapidb/011000358700000044972013E/SpecificsInstUpgrade.pdf
    New in SP05: Product System Editor in the Landscape Management Database of SAP Solution Manager 7.1
    Rg,
    Karthik

  • BSI TaxFactory 8.0 server sizing guide?

    My company is installing BSI TaxFactory 8.0 for the first time on an AIX / Oracle platform.
    Is there a server sizing guide for how much CPU and memory usage I need to plan for when payroll is running? I realize it will vary based on the number of employees.
    The only thing I can find on SAP Service Marketplace and BSI's web site is how much database space it requires (approximately 2 GB).
    I've read all the notes, including 1064089 - Installing TaxFactory 8.0, but not as an upgrade.
    Our Basis Team Lead doesn't want it installed on the SAP Oracle db server.
    Thanks in advance,
    Mark Perrey

    Mark :
    If you're talking about the BSI executable (i.e. tf80server.ksh for an AIX/UNIX environment), this should be on a drive accessible by all SAP application/DB servers so it can be executed independently of which server the user is logged on to (due to load balancing).
    If you're talking about the BSI database, I don't see any issue with having it on the same SAP Oracle database server (whether in the same instance or not). The BSI database is relatively small (around 70 tables), and I would imagine its database resource usage is minimal, as most of the tax calculations are probably done at the BSI application level.
    Rgds.

  • HARDWARE SIZING REQUIRED

    Hi,
    I am seeking the approximate RAM size and CPU requirements for my production server with the following details:
    1) 500-700 Active Users
    2) Modules to be implemented are:
    Oracle Apps modules
    · Oracle General Ledger
    · Oracle Receivables
    · Oracle Payables
    · Oracle Fixed Assets
    · Oracle Cash Management
    · Oracle E-Business Tax
    · India Localization
    · Oracle Purchasing
    · Oracle Inventory
    · Oracle Enterprise Asset Management
    · Oracle CRM ( Marketing, Services, Sales incl Install Base)
    · Oracle Business Intelligence and Oracle Discoverer
    3) Current database size is 240 GB, growing at a rate of 25 GB per month.
    Architecture of EBS is as :
    Database on Server X
    CM+ADMIN+FORMS+WEB on Server Y
    Please suggest.
    Thanks

    Have a look at the following thread:
    Hardware sizing for Oracle applications 12i
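    Beyond CPU and RAM, you can already project storage from the growth rate you gave. A minimal sketch, where the planning horizon and headroom factor are assumptions to adjust:

    current_gb = 240            # current database size from the post
    growth_gb_per_month = 25    # stated growth rate
    months = 36                 # assumed planning horizon
    headroom = 1.3              # assumed allowance for indexes/temp/archive

    projected_gb = (current_gb + growth_gb_per_month * months) * headroom
    print(f"Plan for ~{projected_gb:,.0f} GB of storage over {months} months")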

  • Sizing the database for Manufacturing

    Hi All:
    Does anyone have a spreadsheet that will help me do some sizing for the database that is using Oracle Apps,
    Manufacturing module. I heard that there is one floating around.
    Thanks
    Eddie Lufker

    The installation instructions for each Oracle application contain sizing guidelines and minimum system requirements. These are accessible through Oracle Metalink or from the Oracle store at www.oracle.com. In addition, your Oracle sales rep or consultant can help you with sizing based on hardware vendor recommendations.

  • Sizing for Explorer

    I am trying to determine sizing for Explorer. I did not find any option for Explorer InfoSpaces in the sizing estimator. Is there a sizing estimator for Explorer only?

    Hi Jawahar,
    The sizing estimator doesn't have the feature to determine sizing based on InfoSpaces yet!
    Are you looking for sizing in terms of user access/activity for InfoSpaces? Or anything in specific like hardware requirements, etc.?
    I found a couple of resources below which might help based on what you're searching for.
    SAP BusinessObjects Explorer sizing and configuration - Business Intelligence (BusinessObjects) - SCN Wiki
    Also, there is a sizing GUIDE for Explorer (find the attached PDF):
    1742488 - SPOP Explorer 4.0 SP3 Sizing and Performance information
    Hope this helps.
    Regards,
    Sid

  • How can I increase the pixel count (tolerance) that will invoke sizing handles to make them easier to select and use?

    When working with Word tables, resizing a column's width requires placing the mouse exactly on the line, waiting for the mouse pointer to change to the sizing control, and then clicking and dragging. The problem is that the tolerance is so unforgiving (a few pixels one way or the other) that it is rather difficult to get and keep the mouse in exactly the right spot to invoke the sizing handle. Because the tolerance is so narrow, by the time you click to drag, the handle control often reverts to a regular mouse pointer because you moved the mouse a pixel, and you find yourself highlighting cell content instead of dragging the column width. You have to keep repeating this process over and over, trial-and-error fashion, until you finally get the sizing handle to display long enough to actually invoke it when clicking. It is rather frustrating. My question is this: is there a way to increase the tolerance that invokes a sizing handle? In other words, to increase the pixel count slightly, either side of the line, that will invoke the control for the sizing handle - instead of a few pixels, something much more realistic/functional, like maybe 5 to 7 pixels either side of the line. This is also a problem when dealing with columns in Windows Explorer - you find yourself dragging a column instead of resizing it because by the time you click the mouse, the sizing control has reverted to a regular mouse pointer - this has long been a source of wasted time and frustration to me. I'm hoping there might be a way to change this in the Windows registry. Thank you.

    Cool article, but not relevant.
    I did not import from iPhoto nor Aperture.
    I have my photos as JPEG files in a folder on my hard drive.
    Photos, the app, did not make any duplicates. Rather, it made a giant Resources folder, almost as big as my folder of image files.
    Finder Info confirms the size increase and lost capacity on my hard drive.
    But I appreciate the link.  That could certainly give someone the same impression.

  • Lot-sizing procedures

    Hi Experts,
    Can anybody please explain the difference between the following lot-sizing procedures?
    static lot-sizing procedures
    period lot-sizing procedures
    optimum lot-sizing procedures
    It will be very helpful if you explain with an example. I will appreciate it and award deserving points.

    Dear Raja,
    Please find details about the lot-sizing procedures below:
    Static Lot-Sizing Procedures
    Use
    In static lot-sizing procedures, the procurement quantity is calculated exclusively by means of the quantity specifications entered in the material master.
    Features
    The following static lot-sizing procedures are available (a minimal sketch comparing the first two follows the list):
    Lot-for-lot order quantity
    Fixed lot size
    Fixed lot size with splitting and overlapping
    Replenishment up to maximum stock level
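    As an illustration of the difference, here is a hypothetical sketch of lot-for-lot ordering versus a fixed lot size; the shortage and the fixed lot quantity are made-up values standing in for the material master settings:

    import math

    shortage = 130    # net requirement from MRP (made up)
    fixed_lot = 50    # assumed fixed lot size from the material master

    lot_for_lot = shortage
    fixed = math.ceil(shortage / fixed_lot) * fixed_lot

    print(f"Lot-for-lot:    order {lot_for_lot}")
    print(f"Fixed lot size: order {fixed} ({fixed - shortage} extra into stock)")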
    Period Lot-Sizing Procedures
    Use
    In period lot-sizing procedures, the system groups several requirements within a time interval together to form a lot.
    Features
    You can define the following periods:
    days
    weeks
    months
    periods of flexible length equal to posting periods
    freely definable periods according to a planning calendar
    The system can interpret the period start of the planning calendar as the availability date or as the delivery date.
    Splitting and overlapping are also possible for all period lot-sizing procedures.
    The system sets the availability date for period lot-sizing procedures to the first requirements date of the period. However, you can also define that the availability date is at the beginning or end of the period.
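    As an illustration of the grouping idea, here is a hypothetical sketch that buckets requirements into weekly lots; the dates and quantities are made up:

    from datetime import date
    from collections import defaultdict

    requirements = [              # (requirement date, quantity) - made-up data
        (date(2024, 1, 2), 40),
        (date(2024, 1, 4), 25),
        (date(2024, 1, 9), 60),
        (date(2024, 1, 11), 15),
    ]

    lots = defaultdict(int)
    for day, qty in requirements:
        lots[day.isocalendar()[1]] += qty    # bucket by ISO week number

    for week, qty in sorted(lots.items()):
        print(f"Week {week}: lot of {qty} units")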
    Optimum Lot-Sizing Procedures
    Use
    In static and period lot-sizing procedures, the costs resulting from stockkeeping, from the setup procedures or from purchasing are not taken into consideration. The aim of optimum lot-sizing procedures, on the other hand, is to group shortages together in such a way that costs are minimized. These costs include lot size independent costs (setup or order costs) and storage costs.
    Taking purchasing as an example, the following problem arises:
    If you order often, you will have low storage costs but high order costs due to the high number of orders. If you only seldom place orders then you will find that your order costs remain very low, but your storage costs will be very high since warehouse stock must be large enough to cover requirements for a much longer period.
    Features
    The starting point for lot sizing is the first material shortage date that is determined during the net requirements calculation. The shortage quantity determined here represents the minimum order quantity. The system then adds successive shortage quantities to this lot size until, by means of the particular cost criterion, optimum costs have been established.
    The only differences between the various optimum lot-sizing procedures are the cost criteria. The following procedures are available (a minimal sketch of the grouping logic follows the list):
    Part Period Balancing
    Least Unit Cost Procedure
    Dynamic Lot Size Creation
    Groff Reorder Procedure
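    As a sketch of the grouping logic described above, here is a hypothetical implementation of a part-period-balancing-style criterion: keep adding future shortages to the lot while the accumulated carrying cost stays below the fixed order cost. All costs and quantities are made-up illustration values:

    order_cost = 100.0           # lot-size-independent (setup/order) cost
    carry_cost_per_unit = 0.5    # storage cost per unit per period carried

    shortages = [30, 20, 40, 10, 50]    # shortage qty per period, period 0 first

    lot = shortages[0]           # the first shortage is the minimum lot size
    carrying = 0.0
    for periods_ahead, qty in enumerate(shortages[1:], start=1):
        added = qty * carry_cost_per_unit * periods_ahead
        if carrying + added > order_cost:
            break                # stop: ordering again later is now cheaper
        carrying += added
        lot += qty

    print(f"First lot covers {lot} units; carrying cost {carrying:.2f} "
          f"vs order cost {order_cost:.2f}")

    Per the description above, the other optimum procedures differ only in the cost criterion tested inside the loop.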
    Hope this helps.
    Regards,
    Tejas
