Database Capacity Planning

When procuring new hardware, how do I measure the CPU and memory consumption of 100 users logging into the database?
Are there any guidelines for measuring this?
What is the minimum memory a single user would consume?

Hi,
It depends on what those 100 users will do. For example, if there is high concurrency (all 100 connect at the same time and keep executing SQL statements), the CPU and memory requirements will be high.
If you are sizing a new system, consider looking at statistics from test systems; otherwise you can collect statistics from the existing database (workload metrics from the AWR/Statspack schema tables, plus CPU and memory utilization) and do predictive modelling to forecast the additional hardware requirements.
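As a rough sketch of the kind of extrapolation described above (all per-user figures below are hypothetical; substitute numbers measured on your own test system):

```python
def project_requirements(users, cpu_pct_per_user, mem_mb_per_user,
                         baseline_cpu_pct=5.0, baseline_mem_mb=2048):
    """Naive linear projection: a fixed background load plus a
    per-connected-user increment measured on a test system."""
    cpu = baseline_cpu_pct + users * cpu_pct_per_user
    mem = baseline_mem_mb + users * mem_mb_per_user
    return cpu, mem

# e.g. each active session measured at ~0.6% CPU and ~10 MB of memory
cpu, mem = project_requirements(100, 0.6, 10)
print(f"projected: {cpu:.0f}% CPU, {mem:.0f} MB memory")
```

Real workloads rarely scale perfectly linearly (concurrency effects, caching), so treat this as a first-cut estimate to refine against measured AWR/Statspack data.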
Cheers,
Neeraj

Similar Messages

  • Can anybody give me the formula for Database capacity planning for 10gR2?

    Hi ,
    I want to learn how to do database capacity planning for production on 10gR2. Is there a formula I can use to plan capacity?
    Can anybody help me?
    Regards
    Rajesh

    hi,
    There is no perfect world. There is no perfect application. If I am permitted to say, there is no perfect CAPACITY PLANNING. In this world, we strive hard to achieve near PERFECTION.
    Disk space is estimated for the database (RDBMS) only.
    Estimates are based on assumptions, sampling, and statistics.
    Estimates cannot quantify disk space requirements in real time.
    Actuals always vary from estimates.
    So there is no particular formula for capacity planning.
    Alternatively, post your requirements for the DB and wait for replies.
    regards,
    Deepak

  • Oracle Database capacity planning

    Hello Team:
    Does anyone have a capacity planning spreadsheet for sizing the server requirements (CPU, memory, etc.)?
    Regards,
    Bala

    Thanks Justin. How do I size the database server hardware? Third-party apps in most cases come with a
    hardware recommendation. I am talking about a situation where we have to build a database based
    on certain basic inputs such as concurrent users, load, response time, etc. Craig Shallahamer's OraPub site would be a good place to have a look.
    HTH,
    Paul...

  • We need to do a capacity planning/sizing for DB

    Can you please provide any template/suggestion for DB capacity planning

    Dear dba,
    Database capacity planning is based on the volume of data that you want to load into the database.
    For instance, suppose you are planning to track performance on fixed-line circuits. Take some sample performance files from the vendors and look at the file sizes. From those file sizes, estimate the average load, and check the SLA for the period the files will be stored in the database.
    I have never heard of any template or suggested methodology for database capacity planning other than simple calculations (sums, multiplications, and so on).
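The file-size approach above really is just simple multiplication. A sketch (the file size, arrival rate, retention period, and overhead factor are all placeholder assumptions):

```python
def estimate_storage_gb(avg_file_mb, files_per_day, retention_days,
                        overhead_factor=1.3):
    """Raw data volume over the retention window, padded by a rough
    factor for indexes, undo, and temp space."""
    raw_mb = avg_file_mb * files_per_day * retention_days
    return raw_mb * overhead_factor / 1024

# e.g. 50 MB performance files, 200 arriving per day, 90-day SLA
print(round(estimate_storage_gb(50, 200, 90), 1))  # storage needed, in GB
```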
    Can you please elaborate more and we may have some additional information on your concerns.
    Regards.
    Ogan

  • Capacity planning of database objects

    Oracle 10.2.0.4
    Windows platform
    I was reading the Oracle Admin Guide's "Capacity planning of database objects" section, but this and most of the other sections are very theoretical.
    Kindly suggest how to approach these topics in a better way.

    I think Reddy has it right: watch the trend in space usage primarily at the tablespace level, but you would also want to look at the individual tables and indexes within a tablespace to spot the fast growers, to pick out tables that might need parameter adjustment (PCTFREE) to avoid row migration, to identify indexes that may not reach a reasonable steady-state size, etc.
    The DBA_OUTSTANDING_ALERTS and related views might be of interest.
    On occasion, just taking a count(*) of how many application tables, indexes, and users exist on your system and storing this with space usage figures can also be useful information when it comes time to request more disk, CPU, or memory, since you can show past growth.
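A minimal sketch of that trend-watching, assuming roughly linear growth between two space-usage snapshots (the sample figures are invented):

```python
from datetime import date

def days_until_full(samples, capacity_mb):
    """Linear growth estimate from the first and last (date, used_mb)
    snapshots of a tablespace."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (d1 - d0).days   # MB per day
    return (capacity_mb - u1) / rate

# invented snapshots: 40 GB used on Jan 1, 52 GB used on Mar 1
samples = [(date(2011, 1, 1), 40_000), (date(2011, 3, 1), 52_000)]
print(round(days_until_full(samples, 100_000)), "days of headroom left")
```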
    HTH -- Mark D Powell --

  • Info related to Capacity Planning

    Could somebody help me with the information regarding the following:
    I am doing some capacity planning exercise for my project work. I need to know the Hard disk required, RAM required and the CPU resource required for the following Oracle products running on Solaris Server.
    1. ORACLE 9iAS
    2. ORACLE 9i Database
    3. Oracle Interconnect
    4. Oracle 9i Discoverer
    regards,
    Pranab Mukherjee

    Not sure but you can look at this:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3c2b9c90-0201-0010-ab86-a574c7881607

  • Capacity planning for a 2 million hits / day site

    Hi,
    I am doing a feasibility study for a 2 million hits per day
    e-commerce site.
    We are looking at various vendors but would prefer an
    Oracle solution.
    One option I am looking
    at is a three-tier server architecture with instances of
    Oracle Application server running on one set
    of machines and instances of Oracle 8/8i running on a second
    set of machines.
    Macromedia generated HTML would be auto-translated to PL/SQL
    cartridges to generate the relevant WRBXs.
    Has anyone any experience of the performance issues /
    robustness of this type of large-scale
    development.
    In particular, I am looking for some hard numbers / capacity
    planning model on:
    Number of instances of Application Server;
    Number / type of Application Server boxes - SpecInt / Flt and
    memory;
    Number of instances of RDBMS / parallel Oracle;
    Number / type of RDBMS boxes - SpecInt / Flt and memory; and
    External communications bandwidth.
    Any advice, personal anecdotes, recommended sites, or
    literature references much appreciated.
    Regards,
    Ajit Jaokar
    null

    Hi,
    1. Does cross-forest authentication go through the Global Catalog?
    Yes, cross-forest authentication relies on the Global Catalogs of both forests.
    2. Does the Global Catalog cache AD object info if the objects are in a different forest? If not, how does the authentication request flow across the forest?
    The GC doesn't cache information for objects from another forest. However, once a user has been authenticated and authorized, its service ticket remains (on the local machine) for a while before the user logs off. The authentication request
    is first received by the local DC, then the GC; when the GC cannot find a match in its own forest, it checks its database for trust information, and if there is a matching name suffix, the authentication request is passed to the corresponding forest.
    3. Is this calculation still the same when considering cross-forest trusts?
    Yes, establishing a forest trust doesn't consume much more space, because the GC doesn't store information from another forest.
    4. Do we need to consider any other memory requirements in a cross-forest trust environment?
    Not really, as I mentioned above.
    More information for you:
    Accessing resources across forests
    http://technet.microsoft.com/en-us/library/cc772808(v=WS.10).aspx
    Best Regards,
    Amy

  • Capacity planning for DB, pctfree pctused

    What are segments, PCTFREE, and PCTUSED? I want to clarify my concepts...
    Secondly, how can we do capacity planning for database growth?

    The simplest and most accurate approach is
    - Create the schema
    - Load it with a representative amount of data (i.e. 1-2% of the expected data volume)
    - Measure the amount of disk required
    - Multiply by an appropriate factor to determine the eventual size
    You can also do back of the envelope calculations, taking the average row size of your larger tables, multiplying by the expected number of rows, then adding an appropriate multiplier. Probably in the neighborhood of 2 to account for indexes, empty space in the table, etc., but this is very application specific.
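The sample-and-extrapolate steps above reduce to simple arithmetic (the sample fraction and the ~2x multiplier below are illustrative; as noted, the multiplier is very application specific):

```python
def extrapolate_size_gb(sample_gb, sample_pct, multiplier=2.0):
    """Scale the disk used by a representative sample load up to 100%
    of the expected volume, then multiply for indexes and free space."""
    return sample_gb * (100.0 / sample_pct) * multiplier

# e.g. a 1.5% sample of the data occupies 3 GB on disk
print(round(extrapolate_size_gb(3.0, 1.5)), "GB estimated at full volume")
```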
    Justin

  • Seeking a capacity planning consultant

    Xpede is a financial services application service provider based in
    Oakland, CA, seeking a capacity planning consultant for a short-term
    (under one week) initial engagement, with possible ongoing consulting work.
    Please send resume, qualifications, hourly rate or a fixed price bid to
    [email protected]
    Capacity Requirements Plan (2-5 days)
    The Plan concentrates on the responsiveness, scalability, and throughput of
    our middleware infrastructure through investigation, analysis, and a formal
    report. This service is conducted as a pre-deployment evaluation. The
    consultant will work with our staff to identify or establish our capacity
    requirements in quantitative terms, which express the requirements for an
    application and architecture, and will encompass both current and future
    timeframes. Performance metrics may take the form of expected transaction
    rate (workload), response time, growth factor, user capacity, database size,
    and others. The time invested in this phase can be reduced when such
    information already exists and can be provided to BEA.
    - Performance Assessment Management Reports
    - Benchmarking
    - Performance bottleneck identification and resolution
    - Performance tools (such as workload generators) to be developed
    - How to interpret metrics
    - How to gather metrics
    - What metrics to gather
    - Reference architecture on Solaris platform (build-out)
    Qualifications
    3-4 years' experience on the WebLogic platform, with over 20 projects. In-depth
    understanding of BEA technology, able to plan and design the architecture of
    middleware projects. Able to design architectures using associated
    technologies (Java, Object-Oriented Design and Analysis, Solaris, Veritas,
    Netscape/iPlanet, Oracle). Experience with large-scale, full life-cycle
    projects. Able to share technical and business knowledge with others.

    Thanks for the reply, Alex.
    In the application tier, I would like to have 2 boxes:
    1) service applications only, like MMS and UPSA, in one box
    2) can I include the Search service application in the second box, which is also a search query server? Or
    do I need to create/configure the Search service application in the first box itself?
    I have created a farm topology as shown above and
    would like to know whether it is correct or not.

  • Seeking a Weblogic capacity planning consultant


    Hi,
    What the heap size should be cannot be determined from the RAM size (32GB or 64GB) alone. It depends on the application: its functionality, how many long-lived objects it creates, how many JNI calls it makes, how many JARs/classes it loads, how many users hit the application at peak load, what kind of caching you are using, etc.
    The points above are what determine the required heap size and the JVM tuning options.
    There is no standard calculation or formula available. The only way to determine the required tuning parameters and JVM settings is load testing and performance testing.
    Thanks
    Jay SenSharma
    http://middlewaremagic.com/weblogic/?page_id=2261  (Middleware Magic Is Here)

  • Capacity planning and hardware sizing

    Hi,
    I would like to know in detail about capacity planning and hardware sizing when deploying Oracle database products. I need to know about properly planning the infrastructure and configuring and deploying the products. Moreover, when should we use RAC (and with how many nodes), Exadata, DR, etc.? How do we size hardware (RAM, disk) for these kinds of implementations?
    I know it depends on budget and environment. Still if anyone kindly give some links/docs I will be able to have a decent start in this domain.
    Regards,
    Saikat

    See, these all depend on your company policy and the size the database is going to be.
    1. Size: I have worked on databases ranging from 80 GB to 2000 GB, all depending on the client or the industry you are catering to.
    2. Hardware and RAM: the server team should know this better. You can have a simple small server if the DB and application are small and usage will be light. If it is a full-fledged system with high load, then you need to get in touch with your server team and raise an Oracle SR for exact specifications for that database, and also plan for some future growth.
    3. RAC: depends on whether the project requires such a setup.
    4. DR: is for disaster recovery. Most companies implement this by adopting Data Guard (physical standby).
    5. For a good start, read the books and refer to the Oracle documentation. The more you read, the more knowledge you will gain.
    Start with the basics, make your basics strong, and then move ahead with complex setups.
    Regards
    kk

  • Capacity Plan with Oracle 10g

    Good Day to all,
    I have only been working in environments with Oracle databases for a little while, and I have been asked to produce a capacity plan with Oracle Database 10g for a data warehouse project that the company I work for is leading. The plan must specify, among other things: the size and number of tablespaces and datafiles, and a growth projection taking into account the initial load and the weekly (incremental) load. The truth is this is a bit complicated given my inexperience with this kind of sizing, so I ask for your valuable cooperation. Are there mathematical formulas that let me make those projections, taking into account the data types and their lengths? Is there a standard for creating tablespaces and datafiles?
    In advance thank you for your contributions.

    The first thing you need to get management to do is give you a few things:
    1. The cost to the organization of downtime, in dollars/hour.
    2. The service level agreement for the system's customers.
    3. The amount of data to be loaded into the system and the retention time.
    4. What version of RAID or what ASM redundancy is planned.
    With that you can start at the grossest level, which is planning for database + archived redo logs + online backup files.
    I generally figure the database itself at about 25% of required storage, because I like to have at least two full backups, a bunch of incremental backups, plus all of the archived redo logs that support them. And all on separate shelves.
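That 25% figure turns into a quick storage budget (the database size and the fraction here are illustrative assumptions, not a recommendation):

```python
def storage_budget_gb(db_gb, db_fraction=0.25):
    """If the database proper is ~25% of total storage, the remainder
    covers full backups, incrementals, and archived redo logs."""
    total = db_gb / db_fraction
    return {"database": db_gb,
            "backups_and_archivelogs": total - db_gb,
            "total": total}

print(storage_budget_gb(500))  # a hypothetical 500 GB database
```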
    The number of tablespaces and data files is really just a question of maintenance. Ease of transport. Ease of movement. Ease of backing up.
    If you want to get down to the actual sizes of tables and indexes the best place to go is the DBMS_SPACE built-in package. Play with these two procedures:
    CREATE_TABLE_COST and CREATE_INDEX_COST which do the job far more efficiently and accurately than any formulas you will receive. You can find demos
    of this capability here: http://www.morganslibrary.org/reference/dbms_space.html.

  • Workflow Issues -  Capacity planning

    Hi All,
    Can any one suggest some good practises and solution for this problem
    Capacity planning:
    There are no standards currently available with the current team to determine the capacity based on number of instances, instance size, etc.
    Thanks in Advance

    It is a trial and error process, depending on the number of instances;
    the instance size should be restricted to 32KB only.
    Look at tuning the engine in the Oracle 10gR3 BPM Admin manual for all possible best practices.
    Especially look at the JVM heap size, instance threads, and execution threads for automatic and manual activities, as well as the number of database pools and database connections for the engine DB and any external DB.
    Use Fuego.Fdi.DirHumanParticipant or DirOrganizationalRole or DirOrganizationalGroup instead of Fuego.Lib.Role for getting participants in groups etc..
    This reduces the number of participants that should be in the cache and is helpful when an org has a lot of employees that are being pulled from the AD..
    Look at some of the formulas for sizing etc..
    One such formula in the manual is:-
    Maximum Database Connections >= (# Interactive Exec Threads + # Automatic Exec Threads)
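That constraint from the manual can be written as a trivial check (the thread and pool counts below are made-up values):

```python
def min_db_connections(interactive_threads, automatic_threads):
    """Lower bound from the manual: Maximum Database Connections >=
    interactive + automatic execution threads."""
    return interactive_threads + automatic_threads

def pool_is_sufficient(max_connections, interactive, automatic):
    return max_connections >= min_db_connections(interactive, automatic)

print(pool_is_sufficient(50, 30, 15))  # 50 >= 45, so True
print(pool_is_sufficient(40, 30, 15))  # 40 < 45, so False
```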
    Make sure most of your instance variables are of the category SEPARATED so that they consume less space and the instance size is reduced.
    I think the number of project variables should not be more than 256 and the number of business parameters should not be more than 64, but please check on this.
    Depending on the number of instances and the size of the projects, make sure you run only a certain number of projects on one engine.

  • Capacity Planning Tool needed

    Hi,
    As a novice user, I have few questions. I would really appreciate any direction or help:
    Q#1: Is there a capacity planning tool in Oracle Database that can tell us when we need more memory or CPU?
    Q#2: Is there a capacity planning tool in Oracle Database that can be used for sizing a new system?
    Q#3: How do we find out the status/usage of CPU, disk, etc.?
    This is the version in use: Oracle Database 11g Release 11.2.0.1.0 – 64bit Production
    With the Real Application Clusters and Automatic Storage Management options.

    >Q#1: Is there a capacity planning tool in Oracle Database that can tell us when we need more memory or CPU?
    SQL SELECT
    >Q#2: Is there a capacity planning tool in Oracle Database that can be used for sizing a new system?
    SQL SELECT
    >Q#3: How do we find out the status/usage of CPU, disk, etc.?
    SQL SELECT
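The terse reply above means the "tool" is querying the database's own performance views. For Q#3, as a host-level complement outside the database, the Python standard library can at least report CPU count and disk usage (memory and CPU utilization over time still come from OS tools or the AWR/V$ views):

```python
import os
import shutil

def host_snapshot(path="/"):
    """Host-level view from outside the database: CPU count and disk
    usage for one filesystem."""
    usage = shutil.disk_usage(path)
    return {
        "cpus": os.cpu_count(),
        "disk_used_pct": 100.0 * usage.used / usage.total,
    }

print(host_snapshot())
```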

  • Capacity Planning Metrics

    Hi,
    We need to do capacity planning for our database. We are planning to increase the number of users from 250 to 750. Please help us with the metrics we need to look at.
    Database size: 4 TB.
    Thanks,
    Balaji. S

    It depends.
    Assuming that the number of users is a good proxy for the amount of workload on the database (that may not be the case particularly in a data warehouse environment where certain users are likely to be associated with vastly more expensive queries than other users), what is your database bottleneck? For most data warehouses (I'm guessing based on the 4 TB size), your bottleneck is I/O. So tripling the amount of workload is likely to require additional I/O bandwidth and/or more spindles and/or more RAM to cache more blocks. The next most prominent bottleneck tends to be CPU. So looking at your current CPU utilization and figuring out how many CPUs you need to add if your workload triples would seem reasonable.
    Justin
