System Sizing

All,
Is there a document or guidelines for system sizing
available anywhere? I'm at the point of trying
to design/deploy WebLogic for the first time, but
the first step is sizing the proposed production
environment.
If we know how many hits/minute, how do we
determine things like:
number of instances
# of CPU's
Amount of memory
The deployment environment will be Sun/Solaris
and will employ clustering for HA.
Any info will be helpful.
Thanks!
-jeff dutkofski
Fort Worth, TX

There is a capacity planning guide. Talk to your BEA sales representative
and ask for a copy.
Cheers - Wei
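While you wait for the guide, a very rough first pass can be made with Little's Law (concurrent requests = arrival rate x average response time). The sketch below is purely illustrative; the hits/minute, response time, threads-per-CPU and heap figures are assumptions to replace with your own measurements, not numbers from the BEA guide.

# Back-of-envelope WebLogic sizing sketch (illustrative only; all inputs are assumptions).
# Little's Law: concurrent requests = arrival rate * average response time.
hits_per_minute = 6000          # assumed peak load
avg_response_sec = 0.5          # assumed average response time per hit
threads_per_cpu = 15            # assumed execute threads one CPU can drive well
heap_mb_per_instance = 512      # assumed JVM heap per WebLogic instance
min_instances_for_ha = 2        # minimum cluster size for failover

arrival_rate = hits_per_minute / 60.0                  # requests per second
concurrent_requests = arrival_rate * avg_response_sec  # in-flight requests at peak

cpus = max(1, round(concurrent_requests / threads_per_cpu))
instances = max(min_instances_for_ha, cpus // 2)       # e.g. one instance per two CPUs
memory_mb = instances * heap_mb_per_instance

print(f"~{concurrent_requests:.0f} concurrent requests at peak")
print(f"rough starting point: {cpus} CPUs, {instances} instances, {memory_mb} MB total heap")

Stress testing against the real application is still the only way to pin down the per-request response time and memory figures.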

Similar Messages

  • System sizing for BPC 7.0M MS version

    Hi, Sirs
    In the Scalability Guidelines (Section 2.3) of the Master guide for BPC 7.0M
    the following 3 elements are described in the matrix as key factors for system sizing. 
    For example, Medium BPC install
    1. Concurrent users = More than 75 concurrent users
    2. Cube size < 1.5GB
    3. Largest Fact table < 50000 rows
    We plan to install BPC 7.0M SP05, and the following is our estimate for the system sizing:
    1. Concurrent users = 76 concurrent users
    2. Cube size = 800MB
    3. Largest Fact table  =144000 rows 
    The 'Largest Fact table' value exceeds the threshold for the Medium BPC install,
    but we'd like to use the servers described for the Medium BPC install.
    Could you please tell me the impact of this excess on system sizing?
    Thank you for your support in advance.
    Best Regards
    Hiro

    It will not be any problem if the sizing exceeds the recommendation. Actually, it is good practice, because it allows you to increase the number of users later without any changes to the landscape.
    From the performance point of view, the system will have the same performance as if you used the sizing recommended by SAP.
    On the other hand, if you do not follow the sizing guide and use a smaller server than recommended, then it is possible to have bad performance in the system.
    I hope this helps.
    Kind Regards
    Sorin Radulescu
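    To make the comparison concrete, the small check below puts the estimates next to the Medium-install matrix quoted above; only the fact-table row count falls outside the band (concurrent users is a lower bound for Medium, the other two are upper bounds). This is just an illustration of the thresholds already listed, not an official sizing rule.

    # Compare the estimates against the Medium-install matrix quoted above.
    estimate = {"concurrent_users": 76, "cube_size_gb": 0.8, "fact_table_rows": 144000}
    medium_band = {
        "concurrent_users": ("min", 75),     # Medium starts above 75 concurrent users
        "cube_size_gb": ("max", 1.5),        # cube size should stay below 1.5 GB
        "fact_table_rows": ("max", 50000),   # largest fact table should stay below 50,000 rows
    }
    for key, (kind, limit) in medium_band.items():
        value = estimate[key]
        fits = value > limit if kind == "min" else value < limit
        print(f"{key}: {value} -> {'fits' if fits else 'outside'} the Medium band ({kind} {limit})")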

  • WebLogic 6.0 System sizing

    Given the challenges and variance from application to application, I
    understand that it is difficult to present generalized sizing information.
    However, as I endeavor to create a long-term budgetary sizing for our
    managed application services, I would like to gather some basic sizing for
    dynamic Web delivery (J2EE-style) using WebLogic 6.0 as both the web server
    and the application server on the same system (talking to an Oracle 8i
    back-end on a separate machine). We will be running stress tests on our
    existing development systems, but generic sizing information would be very
    useful.
    I would like to gather information for both Red Hat and Solaris on
    multi-processor systems.
    Can anyone point me at benchmarks or other system sizing information that
    will help me in this process?
    Thanks,
    ssh
    Steve Hultquist
    VP of Engineering and Operations
    Accumedia, Boulder, Colorado

    I'm pretty sure it's supported - check the website for details. We have a
    client app that sends XML transactions over HTTP 1.0 to 6.0.
    Thanks
    "divya" <> wrote in message news:[email protected]..
    > Does WebLogic 6.0 support HTTP 1.0? The POST method does not seem to work with
    > WebLogic 6.0 and HTTP 1.0. Any help is appreciated. Thanks.
    > Divya

  • Query on Decentralised Warehouse System Sizing

    Hi Folks,
    I have to measure the disk space requirement for our client's production decentralized warehouse system (please note this is not the Extended Warehouse Management system that is part of SCM; it comes along with a pure ERP/ECC installation).
    As per our Warehouse Management functional team, we are not sure how much data we will pull from R/3 into the WM system.
    In this case, how do we do the sizing to determine the disk space for the WM system? Please note our R/3 system DB size is 3.75 TB.
    Kindly provide your inputs on what basis we should perform sizing for the scenario explained above.
    Regards,
    Vinod

    Hi,
    Have you checked the alias /sizing on service marketplace? I hope it can be of help
    Br,
    Javier

  • SAP system sizing

    Hello,
    Using the Quick Sizer tool, the result is obtained in SAPS.
    Could you please provide a formula or approximation for how to calculate the system resources based on SAPS?
    Thanks.

    Hi Gautam,
    Just start the Quick Sizer and it will open a new window; there select the option
    "for beginners", where you will get a detailed understanding of the Quick Sizer tool.
    Basically it is helpful when you are going to do user-based or throughput sizing.
    Please have a look at the documentation at service.sap.com/quicksizing ->
    Quick Sizer tool -> Using the Quick Sizer -> Beginners.
    The buttons inside the project are self-explanatory.
    Let me know if anything is not clear.
    Thank you,
    Tilak
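    There is no exact formula, because the number of SAPS a CPU core delivers depends on the hardware (vendors publish SAPS figures in their SD benchmark certifications). A rough translation can still be sketched as below; the SAPS-per-core, headroom and memory ratios are assumptions for illustration, not SAP-published values.

    # Very rough translation of a Quick Sizer SAPS result into hardware terms.
    required_saps = 8000       # assumed Quick Sizer output
    saps_per_core = 1500       # assumed rating of the target CPU (from a vendor SD benchmark)
    cpu_headroom = 0.65        # plan to run the CPUs at about 65% utilisation
    memory_mb_per_saps = 8     # assumed memory ratio; varies by workload

    cores = required_saps / (saps_per_core * cpu_headroom)
    memory_gb = required_saps * memory_mb_per_saps / 1024
    print(f"~{cores:.1f} cores and ~{memory_gb:.0f} GB RAM for {required_saps} SAPS")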

  • DMS Server storage sizing

    Hi All,
    We configured a DMS server 2 years back for test purposes, and it works fine.
    The free hard disk space on this server is only 4 GB and it is a windows based system.
    The RAM is 9 GB.
    We are planning to go for full-fledged DMS usage, and I need your recommendation on the
    system sizing.
    Please share your experiences, as I am not too sure about the data growth.
    Regards,

    Hi Ashutosh,
    Download the Content Server Installation Guide and read the section
    "Points to Consider Before Installation".
    You will get an idea of the required DB space.
    Regards,
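    The installation guide gives the real method; for a first budgeting figure, a growth estimate along the lines below may help. Every input is an assumption to replace with your actual document volumes, and with only 4 GB free on the test box even conservative numbers point to new storage.

    # Rough content-server storage estimate for a full DMS rollout (all inputs assumed).
    documents_per_month = 2000     # assumed new originals checked in per month
    avg_file_size_mb = 2.5         # assumed average original size (drawings, PDFs, ...)
    versions_per_document = 2      # assumed average number of stored versions
    retention_years = 5

    monthly_growth_gb = documents_per_month * avg_file_size_mb * versions_per_document / 1024
    total_gb = monthly_growth_gb * 12 * retention_years
    print(f"~{monthly_growth_gb:.1f} GB per month, ~{total_gb:.0f} GB over {retention_years} years")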

  • Best practice data source from ECC 6.0 for legal consolidation in BPC NW7.5

    Hi there,
    after scanning every message in this forum for "data source" I wonder if there isn't any standard approach from SAP to extract consolidation data from ECC 6.0. I have to say that my customer is not using New g/l so far and therefore the great guide "how to get balances from ECC 6.0 ..." does not fully work for us.
    Coming from the old world of EC-CS, there is the first option of going via the GLT3 table. This option requires clever customization and the need to keep both GLT0 and GLT3 in line. Who has experience with maintaining these tables in a production environment?
    We therefore plan to use data source 0FI_GL_4 which contains all line items to the financial documents posted. Does this approach make sense or will it fail because of performance issues?
    Any help is appreciated!
    Kind regards,
    Dierk

    Hi Dierk,
    Do you have a need for the level of detail provided by the 0FI_GL_4 extractor? Normally I would recommend going for the basic 0FI_GL_6 extractor, which provides a much more manageable data volume since it only gives the periodic activity and balances as well as a much smaller selection of characteristics. Link: [http://help.sap.com/saphelp_nw70/helpdata/en/0a/558cabb2e19a4db3097b81bba4fd0e/frameset.htm]
    Despite this initial recommendation, every client I've had has eventually had a need for the level of detail provided by the 0FI_GL_4 extractor (or the New G/L equivalent - 0FI_GL_14). Most BW systems can handle the output of the line-item extractors without much issue, but you should test using production data and make sure your system sizing takes into account the load.
    The major problem you can run into with the line-item extractors is that if your delta somehow gets compromised it can take a very long time (days, sometimes weeks) to reinitialize and this can cause a large load in your ECC and BW system. Also, for the first transport to production, it is important to plan time to initialize the delta.
    Ethan

  • Result Set Too Large : Data Retrieval restricted by configuration

    Hi Guys,
    I get the above error when running a large dataset, with a hierarchy on - but when I run without a hierarchy I am able to show all data.
    The Basis guys increased the ESM buffer (rsdb/esm/buffersize_kb) but it still causes an issue.
    Anyone any ideas when it comes to reporting large volumes with a hierarchy?
    Much appreciated,
    Scott

    Hi there
    I logged a message on the Service Marketplace and got this reply from SAP:
    "You might have to increase the value for parameters
    BICS_DA_RESULT_SET_LIMIT_DEF and BICS_DA_RESULT_SET_LIMIT_MAX as it
    seems that the result set is still too large. Please check your
    parameters as to how many data cells you should expect and set the
    parameter accordingly.
    The cells are the number of data points that would be sent from ABAP
    to Java. The zero suppression or parts of the result suppression are
    done afterwards. As a consequence of this, the number of displayed
    data cells might differ from the threshold that is effective.
    Starting with SPS 14 you get information on how many data cells were
    rejected. That gives you a better way to determine the right setting.
    Currently you need to raise the number to e.g. 2.000.000 to get all
    data.
    If BICS_DA_RESULT_SET_LIMIT_MAX is set to a lower value than
    BICS_DA_RESULT_SET_LIMIT_DEF, it would automatically cut the value of
    BICS_DA_RESULT_SET_LIMIT_DEF down to its own value.
    Please note that although this parameter can be increased via
    configuration, you should do a proper system sizing according to note
    927530 to ensure that the system can handle the number of users and
    resultset sizes you are expecting."
    Our Basis team have subsequently applied these changes, and I will be testing today.
    Thx
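    As a side note on picking the value: the reply says to set the parameter according to how many data cells you expect, and the cells are roughly rows times key-figure columns before suppression. A small estimate might look like this; the row and column counts are assumptions for illustration.

    # Estimate the data cells sent from ABAP to Java, to choose
    # BICS_DA_RESULT_SET_LIMIT_DEF / BICS_DA_RESULT_SET_LIMIT_MAX.
    drilldown_rows = 120000    # assumed rows incl. hierarchy nodes, before zero suppression
    key_figure_columns = 12    # assumed number of key-figure/structure columns

    expected_cells = drilldown_rows * key_figure_columns
    suggested_limit = int(expected_cells * 1.2)   # about 20% headroom
    print(f"expected cells: {expected_cells}, suggested limit: {suggested_limit}")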

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek for advise on how I should partition a 300 GB disk on Solaris 8.x, what would be the optimal size for each of the partition.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export file system was mounted on a root-level directory. The assumption was, "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files created in that directory went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM and assign that partition as the dump device for system crashes.
    * /usr, or whatever file system it's on, must be big enough to allow for being loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.
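    Purely as an illustration of the rules above (swap/dump at least as big as RAM, a roomy /var, 5-6 GB or more for /usr, the rest left for data), a first cut at the 300 GB disk could look like the sketch below. The sizes are assumptions to adjust for the real RAM size and application mix, not a recommended standard.

    # First-cut slice plan for a 300 GB Solaris disk, following the rules above.
    disk_gb = 300
    ram_gb = 8                       # assumed physical memory

    layout_gb = {
        "swap / dump": ram_gb,       # at least as big as RAM so a crash dump fits
        "/var": 10,                  # patches, packages and logs; keep it roomy
        "/usr + /opt": 12,           # OS plus FOSS/Sunfreeware tools
        "/": 4,                      # root kept modest but not tiny
    }
    layout_gb["application/database data"] = disk_gb - sum(layout_gb.values())

    for mount, size in layout_gb.items():
        print(f"{mount:28s} {size:5.0f} GB")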

  • Suggested data file size for Oracle 11

    Hi all,
    Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    I have 4 logical volumes for data, sized at 100 GB each.  During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4 GB each? I know the max is 32 GB per data file, but that seems a bit large to me.  Just wanted to know if there is a standard best practice for this, or a formula to use based on system sizing.
    I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    Any help would be greatly appreciated.
    Thanks!

    Hi Ben,
    Check SAP Note 129439 - Maximum file sizes with Oracle
    Best regards,
    Orkun Gedik
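    For a feel of what the choice means in practice, the main difference between 2 GB, 4 GB and 32 GB data files across the four 100 GB volumes is simply how many files the database has to manage; a quick, purely illustrative calculation:

    # Number of data files needed for the ~400 GB of data volumes at different file sizes.
    total_data_gb = 4 * 100     # four 100 GB logical volumes, as described above

    for file_size_gb in (2, 4, 32):
        files = -(-total_data_gb // file_size_gb)   # ceiling division
        print(f"{file_size_gb:>2} GB files -> {files} data files")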

  • Local Client Copy taking too much time

    Hi All,
    I recently did a system refresh of my quality server (restored a PRD backup to the QUA server).
    After that I created a new client and initiated a local client copy after the test run
    completed successfully.
    Then, after reading approximately 4,500 tables, it got stuck, and I saw in SM50 that table ACCTCR
    had been running for approximately 56,000 seconds.
    Then I ran the program BTCTRNS1 to clean up the work processes.
    After that I restarted the local client copy in restart mode.
    For the last 3 hours I can see in SM50 that table ACCTCR is sequentially read for approximately 500 seconds (under Actions) and then deleted for 15 seconds (under Actions).
    I also found that the table size is not increasing; the same table keeps being sequentially read for 500 seconds and then deleted for 15 seconds (under Actions in SM50), and the program is the same.
    This has been happening for the last 3 hours.
    In SCC3 I can see the status as "processing reset".
    Has anyone ever faced this issue? Can anyone tell me whether everything is running OK or there is
    some problem?
    Your suggestions will be highly appreciated.
    Regards,
    PG

    Hello,
    Did you recalculate full DB statistics after the last system copy from PRD to QA ?
    This is very often forgotten and leads to long elapsed times in select statements.
    Check via DB20 on table ACCTCR when the latest DB statistics date from.
    If necessary, rebuild all indexes of ACCTCR; this can be done online via the ABAP report RSANAORA if you are running Oracle.
    You can also perform an SQL trace on the current running process to see if the select from the client copy is running correctly.
    I also know that ACCTCR is a table containing transactional data and most of the time it contains a lot of data.
    I suppose you are running a client copy with profile SAP_ALL on QA, based on a copy of PRD?
    This means that you will end up with 2 production clients in QA ?
    Is your QA system sized for this ?
    Another question that you can ask your client: is the data from ACCTCR really needed in both clients on QA?
    If not it is maybe better to start an archiving based on SAP Note :
    Note 83076 - MM_ACCTIT: Archiving programs for ACCTIT, ACCTHD, ACCTCR
    Followed by a new client copy.
    Success.
    Wim

  • SPM data extraction question: invoice data

    The documentation on data extraction (Master Data for Spend Performance Management) specifies that invoice transactions are extracted from the table BSEG (General Ledger). On the project I'm currently working on, the SAP ERP team is quite worried about running queries on BSEG as it is massive.
    However, the extract files are called BSIK and BSAK, which seems to suggest that the invoices are in reality extracted from those accounts payable tables.
    Can someone clarify which tables are really used, and if it is the BSIK/BSAK tables, which fields are mapped?

    Hi Jan,
    A few additional mitigation thoughts which may help along the way, as the same concerns came up during our project.
    1) Sandbox Stress testing
    If available, take advantage of an ECC sandbox environment for extractor prototyping and performance impact analysis. BSEG can be huge (it contains all financial movements), so e.g. BI folks typically do not fancy a re-init load for the reasons outlined above. Ask Basis to copy a full year of all relevant transactional data (normally FI & PO data) onto the sandbox and then run the SPM extractors for a full-year extraction to get an idea about the extraction's impact on the system.
    Even though system sizing and parameters may differ compared to your P-box, you should still get a reasonable idea and direction about the system impact.
    2) In a second step you may then consider breaking down the data extraction (Init/Full mode for your project) into 12 monthly extracts for the full year (this gives you 12 files from which you init your SPM system), with significantly less system impact and more control (e.g. it can be scheduled overnight); a small sketch of the monthly windows follows at the end of this reply.
    3) Business Scenario
    You may consider using the vendor-related movements in BSAK/BSIK as the starting tables for the extraction process instead of the massive BSEG cluster table (and fetch/look up BSEG details only on a need basis); the index advantages were outlined above already.
    Considering this we managed to extract Invoice data with reasonable source system impact.
    Rgrds,
    Markus
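    For step 2, a small helper to generate the 12 monthly extraction windows (e.g. to drive 12 separate init/full loads) might look like this; the year is an assumption.

    # Generate the 12 monthly extraction windows suggested in step 2.
    import calendar
    from datetime import date

    year = 2010   # assumed extraction year
    for month in range(1, 13):
        last_day = calendar.monthrange(year, month)[1]
        start, end = date(year, month, 1), date(year, month, last_day)
        print(f"{start:%Y%m%d} - {end:%Y%m%d}")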

  • Can we change log_buffer parameter online e.g. when all users are working..

    Hi,
    Can we change log_buffer parameter online e.g. when all users are working.. ???
    What is relation between redo buffer in sga and log_buffer parameter ?
    SQL> show sga
    Total System Global Area 3758096384 bytes
    Fixed Size 1983152 bytes
    Variable Size 553655632 bytes
    Database Buffers 3187671040 bytes
    Redo Buffers 14786560 bytes
    SQL>
    SQL> show parameter log_buffer
    NAME TYPE VALUE
    log_buffer integer 14338048
    SQL>
    SQL>
    If the log_buffer parameter is 14 MB, then why is the redo buffer not the same size, i.e.
    14 MB?
    SSM

    I am hoping your Oracle consultant knows what he's talking about.
    According to Oracle Performance Guide,
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref654
    "On most systems, sizing the log buffer larger than 1M does not provide any performance benefit. Increasing the log buffer size does not have any negative implications on performance or recoverability. It merely uses extra memory.
    The best method to determine if bigger redo buffer is needed is
    SELECT NAME, VALUE
    FROM V$SYSSTAT
    WHERE NAME = 'redo buffer allocation retries';
    The value of redo buffer allocation retries should be near zero over an interval. If this value increments consistently, then processes have had to wait for space in the redo log buffer. The wait can be caused by the log buffer being too small or by checkpointing. Increase the size of the redo log buffer, if necessary, by changing the value of the initialization parameter LOG_BUFFER. The value of this parameter is expressed in bytes. Alternatively, improve the checkpointing or archiving process.
    Other than that, high 'log file sync' wait events indicate you need faster disks for your redo logfiles. However, please also post your commits per second and redo per second from your AWR report.
    100M redo logs might be small for your environment.
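    To follow the "over an interval" advice from the guide, something along these lines can sample the statistic twice and report the delta. The connection details are placeholders and the cx_Oracle driver is assumed; this is a sketch, not a finished monitoring script.

    # Sample 'redo buffer allocation retries' twice and report the delta over the interval.
    import time
    import cx_Oracle   # assumed Oracle driver

    SQL = "SELECT value FROM v$sysstat WHERE name = 'redo buffer allocation retries'"

    conn = cx_Oracle.connect("perfstat", "secret", "dbhost/ORCL")   # placeholder credentials
    cur = conn.cursor()

    cur.execute(SQL)
    first = cur.fetchone()[0]
    time.sleep(300)                 # watch a 5-minute interval
    cur.execute(SQL)
    second = cur.fetchone()[0]

    print(f"retries in interval: {second - first}")
    if second - first > 0:
        print("processes waited for redo log buffer space; consider a larger LOG_BUFFER")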

  • Installing all MDM servers on same physical host/server

    Dear All
    For production MDM environment we have installed all mdm servers- MDIS, MDSS, MDS and MDLS on the same physical server.
    Will this have any impact on the performance. We are using MDM 7.1 and on HP Unix.
    Another question is- If we have multiple repositories Vs single repository with multiple main tables( Vendor, Customer and Materials) why the choice should have performance impacts on the server level.
    Thanks-Ravi

    Ravi,
    I don't mean to muddy the waters here, but performance tuning and system sizing is a slightly more complex exercise than simply determining how many records your repository will contain and how many fields it will have.  There are many, many considerations that need to be taken into account.
    For example, the MDM server's performance can be dramatically improved or worsened depending on things like sort indices, accelerator files, data types and fill rates, validation logic, remote key generation, number of users, number of Java connections, web service connections, etc.  The list goes on and on, and that's not even taking into consideration the hardware (multi-processor, RAM, the physical disk configuration, etc.)
    With regards to the Import Server and Syndication Server you have to take into account things like map complexity: are you doing free-form searching to filter records in maps?  Are your maps designed for main tables, qualified lookup tables, or reference data?  How often do imports / syndications occur?  What keys are you matching on when importing?  Do you plan on importing by chunks?  What is the import volume, etc?
    Once again, I don't want to scare you, but I also wanted to bring up a few topics for you to think about.  There is a reason why SAP and other vendors charge a fee for doing system sizing.
    These are just a few small examples, but the list goes on and on.  I hope this helps to get you thinking in the right direction when designing your architecture and system landscape.  Good luck!
    Edited by: Harrison Holland on Dec 10, 2010 2:34 AM

  • Maximum number of posted documents for FI module

    Hello,
    In SAP FI, we want to post about 150,000 customer invoices (received via an interface from an external purchasing system) monthly. Is that number OK (in terms of system stability, speed, flexibility) for use in the FI module, or would it be better to use FI-CA?
    Also, where could I get some information about system sizing?
    Thanks a lot,
    Zbynek

    Hi:
    Normally, it will not affect other postings in FI. Have a discussion with your Basis administrator and let him know how much data is being updated in the system.
    All doubts will be laid to rest if complete information is available to you.
    Please let me know if you need more information.
    Assign points if useful.
    Regards
    MSReddy
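    To put rough numbers on the table for that discussion with Basis, a back-of-envelope estimate like the one below may help; the line-item count and record size are assumptions, not measured values.

    # Rough monthly volume figures for 150,000 interfaced customer invoices.
    invoices_per_month = 150000
    line_items_per_invoice = 4     # assumed average (customer, revenue, tax, ...)
    bytes_per_line_item = 800      # assumed average line-item footprint incl. indexes

    postings_per_hour = invoices_per_month / (22 * 8)   # assumed 22 working days, 8h interface window
    table_growth_gb = invoices_per_month * line_items_per_invoice * bytes_per_line_item / 1024**3

    print(f"~{postings_per_hour:.0f} postings per hour through the interface")
    print(f"~{table_growth_gb:.2f} GB of line-item growth per month")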
