Sizing of database.

Hi Experts,
We are in the SRM server sizing phase.
Our current ECC server database size is 1TB, and we are now implementing SRM (one ABAP instance and one Java instance, which is EP) and PI (dual stack) with ECC as the back-end system.
So how much database space should I assign in the filesystem for the SRM and PI servers?
What is the database growth relation between ECC and SRM?
Please advise.
Regards..
Amit..

Hi Deepak,
Thanks for the comment.
Sure, I will read the guide.
But our ECC server DB size is 1TB, so does it relate to the database size of SRM and PI? (We are installing SRM ABAP and EP on one database only.)
I mean, considering the size of the ECC DB, do we have to configure the DB size of SRM and PI accordingly? Is that so?
Regards,,
Amit..

Similar Messages

  • Sizing the database for Manufacturing

    Hi All:
    Does anyone have a spreadsheet that will help me do some sizing for a database running Oracle Apps, Manufacturing module? I heard that there is one floating around.
    Thanks
    Eddie Lufker

    The installation instructions for each Oracle application contain sizing guidelines and minimum system requirements. These are accessible through Oracle Metalink or from the Oracle store at www.oracle.com. In addition, your Oracle sales rep or consultant can help you with sizing based on hardware vendor recommendations.

  • Database Sizing for Oracle Applications 11i

    Hi,
    I was wondering if someone could guide me on how to size an Oracle Applications database; we'll be using the following modules: GL, AP, CE, FA.
    The operating system, might be windows 2000.
    What I have in my mind right now regarding the information i need to collect is as follows:
    1. No. of users
    2. Estimated transaction activity for the above modules, and whether month ends are particularly transaction intensive.
    And that's as far as I've gotten, hence the help needed.
    About the transaction activity: how do I really quantify it and then translate it into something meaningful that will help me in sizing the database? Could I perhaps get information on how many transactions an average user enters in a day?
    Well basically any sort of input would be really helpful, thanks in advance.
    NM

    We need to know the number of users, as we can guess at the transaction level.
    Some baseline assumptions:
    1. You need 1 GB of memory before you add any users.
    2. You need 10 GB of disk to hold the SGA, UNIX, swap space, application, etc.
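    As a rough illustration of how those baseline numbers might be combined with a user count (the per-user increments below are assumptions for illustration only, not Oracle-published figures):

```python
# Rough capacity estimate built on the baseline figures above:
# 1 GB of memory before any users, 10 GB of disk for SGA/OS/swap/app.
# The per-user increments are illustrative assumptions -- replace
# them with measured values from your own workload.

BASE_MEMORY_GB = 1.0     # baseline before any users (from the post)
BASE_DISK_GB = 10.0      # SGA, UNIX, swap space, application (from the post)
MEM_PER_USER_GB = 0.01   # assumed ~10 MB per concurrent user
DISK_PER_USER_GB = 0.5   # assumed data + index growth per user

def estimate(concurrent_users: int) -> tuple[float, float]:
    """Return (memory_gb, disk_gb) for a given concurrent user count."""
    memory = BASE_MEMORY_GB + concurrent_users * MEM_PER_USER_GB
    disk = BASE_DISK_GB + concurrent_users * DISK_PER_USER_GB
    return memory, disk

mem, disk = estimate(80)
print(f"80 users: {mem:.1f} GB memory, {disk:.0f} GB disk")
# 80 users: 1.8 GB memory, 50 GB disk
```

    The point of a sketch like this is only to make the assumptions explicit so they can be challenged, which is exactly what the transaction-activity questions above are trying to do.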

  • Which database table design is better?

    Hi Experts,
    I have a dilemma; my situation is like this: I'm planning to write lottery analysis software in Java. Basically it is just 10,000 numbers, namely 4D (digits) from 0000 to 9999. The database would store historical data from 1990 to the present, around a few thousand records. From there, I need to analyse things like which numbers sum to a certain total (for example, 1234 would be 10), and so on.
    My real problem is this: do I store the analysis data together with the historical data? I tried, and I would need many, many columns, around hundreds of them (just the total-sum analysis above would need 36 columns, not to mention tons of other analysis data). This design does not need any programming on the client side, just retrieval, but I reckon it would pose a lot of problems in future in terms of scalability.
    The second solution, which I googled and read in this forum, is to store the historical data in one table and use another table to store just the analysis name. Heavy programming and algorithms would be needed on the client side, but I'm worried about the processing speed since I'm not well versed in many algorithms. I know this design is good since redundant data is eliminated, but I simply have no idea how to link the historical and analysis tables together.
    Could any industry experts on database structure and algorithms guide me? I've been scratching my head for a few days.
    Many thanks in advance.

    Sorry for the late reply. I have also thought of putting all the analysis data into the database, but the data is simply too much, and moreover requests from visitors could have different combinations of analysis data, which would make it almost impossible to store all the different combinations in the database.
    I am not sure I understand why this is a problem.
    Say you had a lottery drawing every single day for the last 20 years. That means you would have 7300 records.
    That is a trivial number for even a small database.
    Doing a query for sums to a single value would be trivial. It would require a table scan, but there are only 7300 records.
    But if you determined that a lot of queries like that were going to be run every hour (or second), then you should indeed create an analysis table which does the sum for the result.
    At some point you are going to run into indexing problems with that approach. One solution is to simply duplicate the data. Again, there just isn't enough data for duplicating it to be a problem. Each "group" would have duplicated data. If you use the same primary key for the duplicated data, it would be trivial (and fast) to do cross-group queries as well (like a query for the sums of 20 that also occurred on a Tuesday).
    Note that it doesn't really matter if you need to create several hundred analysis tables. Again, the size is so small that it isn't even meaningful to discuss sizing the database even then. Your only real concern with that number of tables is ensuring that associations are kept low (no explicit cross-group associations).
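    As a minimal sketch of the analysis-table approach described above (table and column names are invented for illustration, and SQLite stands in for whatever database is actually used):

```python
# Historical table plus a separate analysis table sharing the same
# primary key, as the reply suggests. With only a few thousand draws,
# precomputing the digit sum once is cheap, and the shared key makes
# cross-table queries trivial.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE draws (draw_date TEXT PRIMARY KEY, number TEXT)")
conn.execute("CREATE TABLE digit_sums (draw_date TEXT PRIMARY KEY, total INTEGER)")

draws = [("1990-01-01", "1234"), ("1990-01-08", "0550"), ("1990-01-15", "9999")]
conn.executemany("INSERT INTO draws VALUES (?, ?)", draws)

# Precompute the analysis column once per draw.
for draw_date, number in draws:
    total = sum(int(d) for d in number)   # e.g. "1234" -> 10
    conn.execute("INSERT INTO digit_sums VALUES (?, ?)", (draw_date, total))

# Join on the shared primary key: which draws sum to 10?
rows = conn.execute(
    "SELECT d.draw_date, d.number FROM draws d "
    "JOIN digit_sums s ON s.draw_date = d.draw_date WHERE s.total = 10"
).fetchall()
print(sorted(rows))  # [('1990-01-01', '1234'), ('1990-01-08', '0550')]
```

    Each additional analysis (odd/even pattern, last-digit frequency, and so on) would simply be another small table keyed on the same draw date.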

  • Methods for managing large content databases (SharePoint 2013)

    I have a SharePoint 2013 web application with a content database of over 800 GB. It's becoming difficult to manage backups (backup time takes forever). It was also very difficult to migrate from 2010 to 2013. I'm getting a warning from SharePoint indicating that the content database is very large.
    What are the methods (SQL or SharePoint) for managing this? I was told I could split the content database into smaller DBs....

    RBS isn't a factor in reducing database sizing (RBS content must still be accounted for when sizing a database). The latter half of your statement is absolutely correct. Microsoft supports you based on those requirements: appropriate disk performance, HA, DR, and so on (because restoring a 4TB content database would take quite some time).
    But keep in mind what supported means here -- if you opened a PSS case with Microsoft and you did not meet these requirements, they would 'support' you up until they found that the issue may be stemming from your lack of having these things in place.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    So if RBS doesn't shrink DB size, what's the purpose of it? I don't have RBS implemented, but I have an 800 GB DB. I have a disaster recovery plan, mirrored SQL, and the disk utilization is within range.

  • Handling query errors when using ADO connection in MSAccess

    Hello,
    I'm working on enhancing a data pull from a terabyte-sized Oracle database for use within an MS Access front end. The current plan is to append and update tables within Oracle using ODBC pass-through queries or ADO connection objects, then copy the resulting much smaller data set to Access for further processing.
    Oracle client: 11g
    Access DB: 2010
    Tnsnames.ora in place
    Connection via ADO connection, or DAO/ODBC pass thru queries
    connect string:
    "ODBC;DSN=dbname;UID=username;PWD=password;DBQ=dbname;"
    i have also tried:
    "ODBC;driver={Oracle};DSN=dbname;UID=username;PWD=password;DBQ=dbname;"
    There are two issues I'm facing:
    1) Comparing performance with SQL Developer: for some runs Access is significantly slower; other times it's fine.
    2) Trapping errors: it appears that all I get is the query timeout error, rather than a more informative error, such as a key violation.
    In the first instance, creating about 6000 rows in the Oracle table takes about 2 seconds using SQL Developer, and sometimes about 6 minutes with either the ADO or DAO method of querying, but then sometimes it's nearly as quick (?). Is there any way to make the performance equivalent or consistent? This is probably not the forum, but maybe someone could post a link to where people are doing this more often (my Google searches are returning spotty results).
    In addition, it seems that if there is an error in the query, such as a key violation, the query will in many cases wait all the way until the timeout value before returning just the timeout error, which tells me nothing. I need to keep that value pretty high, as sometimes the client will pull a lot more than 6000 records. It's also inconsistent: sometimes I get the key violation in 5-6 minutes, other times it runs all the way to 10 minutes before the timeout error happens, rather than almost immediately as in SQL Developer. Is there any way to return error messages more quickly?
    thanks much for the help - I'm going round in circles here.
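    One way to get something more informative than the bare timeout is to inspect the SQLSTATE that the ODBC driver attaches to each error; in VBA this is available via the SQLState property of the items in the ADO Connection.Errors collection. The sketch below is a Python illustration of the idea only (the function name is ours); the mapping covers a few standard ODBC SQLSTATE values.

```python
# The ODBC layer reports a five-character SQLSTATE with each error.
# Checking that code lets the client distinguish a genuine constraint
# violation from a mere timeout instead of relying on the generic
# timeout message. These four codes are standard ODBC SQLSTATEs.

ODBC_SQLSTATE_MEANINGS = {
    "23000": "integrity constraint violation (e.g. key violation)",
    "HYT00": "query timeout expired",
    "HYT01": "connection timeout expired",
    "08001": "client unable to establish connection",
}

def classify_odbc_error(sqlstate: str) -> str:
    """Map an ODBC SQLSTATE to a human-readable category."""
    return ODBC_SQLSTATE_MEANINGS.get(sqlstate, f"unrecognized SQLSTATE {sqlstate!r}")

print(classify_odbc_error("23000"))  # integrity constraint violation (e.g. key violation)
print(classify_odbc_error("HYT00"))  # query timeout expired
```

    In the Access scenario above, logging the SQLSTATE alongside the driver's message text would at least show whether the long wait ended in HYT00 (a real timeout) or something the server rejected outright.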

    Hi,
    I am working on an OLAP catalog. I created one cube and 6 dimensions, and the OEM Console is showing it as a valid cube. When I try to create a Presentation, after selecting my measure it gives this error:
    oracle.dss.dataSource.common.QueryRuntimeException: BIB-9009 Oracle OLAP could not create cursor.
    oracle.express.ExpressServerExceptionError class: OLAPI
    Server error descriptions:
    DPR: Unable to execute the query, Generic at TxsOqCursorManager::fetchInitialBlocks
    SEL: Unexpected error occurred. Contact Oracle Support!, Generic at null
    java.lang.CloneNotSupportedException: BIB-9009 Oracle OLAP could not create cursor.
    oracle.express.ExpressServerExceptionError class: OLAPI
    Server error descriptions:
    DPR: Unable to execute the query, Generic at TxsOqCursorManager::fetchInitialBlocks
    SEL: Unexpected error occurred. Contact Oracle Support!, Generic at null
         void oracle.dss.dataSource.common.QueryDataDirector.addDataDirectorListener(oracle.dss.util.DataDirectorListener)
              QueryDataDirector.java:687
         void oracle.dss.dataView.ModelAdapter.setDataDirector(oracle.dss.util.DataDirector)
              ModelAdapter.java:145
         void oracle.dss.crosstab.CrosstabModelAdapter.setDataSource(oracle.dss.util.DataSource)
              CrosstabModelAdapter.java:49
         void oracle.dss.dataView.Dataview.setDataSource(oracle.dss.util.DataSource)
              Dataview.java:386
         void oracle.dss.addins.wizard.presentation.PresentationWizardState.applyQuery()
              PresentationWizardState.java:106
         void oracle.dss.addins.wizard.presentation.PresentationWizardDialog.wizardFinished(oracle.bali.ewt.wizard.WizardEvent)
    It is a little urgent.
    JDev version is 9.0.3.3 (Build 1205)
    Business Comp version 9.0.3.11.50
    OS: Win 2000 Prof
    Downloaded BIBean9032 and the bibeans90321 patch

  • The Sky is Falling! ORA-01652: unable to extend temp segment by 128

    So we currently have a production problem, and I'm not so in the know, being a lowly Java developer and not an Oracle expert.
    We keep getting this error (below) when a certain heavy query hits the DB.
    Our DBA claims that the tablespace for 'TABLE_SPACE_NAME_HERE' is 20 GB and that the problem is the query.
    The query has been running fine for many, many months but all of a sudden is presenting a problem, and we have to do something quick.
    We tried bouncing the application server, but the error came right back when the big select query gets hit.
    Any thoughts? Help! : )
    java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in tablespace TABLE_SPACE_NAME_HERE
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:754)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:219)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:972)
         at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1074)
         at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1156)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3415)
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3460)
         at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:296)

    LosLobo wrote:
    > So, the next question... what is our lesson learned in this case?
    It depends on the root cause.
    > Our DBA thinks 30GB is an unreasonable size for the tablespace and is fingering the select query that was causing the error to occur. Their solution is to move the query to a view and then reduce the tablespace back to 20GB.
    > My thoughts are: shouldn't the DB be able to handle a query that has been running fine for the last couple of years? Also, if we do what is suggested, what would prevent another query from coming along and causing the same issue all over again?
    Has the DBA identified the source of the issue? Did the query plan change? It's possible that something with statistics (or with some configuration change) causes Oracle to believe that two different query plans are roughly equally efficient. One plan might take substantially more TEMP space than another. It's possible that Oracle had been choosing the plan that involved less TEMP space and recently changed to preferring the plan that takes more TEMP space. If that's the issue, you may want to force Oracle to use the plan that involves less TEMP usage.
    Regardless of your TEMP tablespace size, another query may come along that causes TEMP to run out of space. Or data growth may cause TEMP to run out of space. Or an increase in the number of users may cause TEMP to run out of space. Ideally, the DBAs would be tracking how much TEMP space is used over the course of the day so that if things are growing steadily, additional space can be added as necessary. If TEMP space increases dramatically because a query plan changes, however, even the best monitoring is unlikely to predict that level of growth.
    Whether 30 GB is unreasonable (or whether 20 GB is unreasonable) will depend heavily on your application. We don't know enough to be able to comment. A TB-sized OLTP database serving millions of customers will have very different TEMP requirements than a multi-TB data warehouse, which will have very different TEMP requirements than a small department-level application.
    > My surmising is we must have just crossed a watermark threshold, and the simplest, most reasonable solution is to just leave the larger tablespace size.
    Why do you believe this is the case? It is entirely possible that you need more TEMP because your TEMP usage has been growing slowly over time. It is entirely possible that the query in question has always been using more TEMP space than it really should and that you finally have enough usage to cause the problem to bubble to the surface. It is entirely possible that the query used a reasonable amount of TEMP for the past couple of years and suddenly started using far more because of a query plan change. Once you identify the source of the problem, we can figure out the appropriate solution. Without knowing the source of the problem, we're all just guessing.
    Justin

  • Applescript to batch change "title" in Photos

    I'm trying to use the new Photos, but without being able to batch change the title, it's not going to work for me. I saw a script in a previous post that shows how to do this, but I couldn't get it to work - apparently the script only works on small photo libraries? Mine has about 20,000 photos and 1,700 videos.
    Has anyone come up with a solution for this?  Photos is pretty much unusable without being able to batch change things like title (primarily) and photo name.
    Eric

    There are several AppleScripts to do what you want. They are listed in the Photos for Mac User Tips.

  • Oracle db / schema installation for BO

    Dear Experts,
    I am going to do the BO setup in our landscape.
    Can anyone suggest how to do the following?
    1. How do I do the Oracle installation for BO on AIX (UNIX)? Is it a normal installation, or different? BOE is on Windows.
    2. How do I size the database?
    3. How do I create a database schema in Oracle with the proper parameters?
    4. What parameters do we have to maintain in the Oracle init.ora file for BO?

    Dear,
    Please check the installation guides on SAP Service Marketplace:
    [http://service.sap.com/installationguides]
    There are some specific points that depend on which BO you will install, but those are mentioned in the installation guide for BO per database (Oracle, MySQL and so on).
    For Oracle you have to perform a database installation as mentioned in the installation guide for Oracle on SAP Service Marketplace.
    The general parameter recommendations for Oracle (SAP note) should also be valid for BO, I assume, as they are Oracle release dependent and not SAP software dependent; the only point is whether OLTP or OLAP is used.
    Kind regards,
    Tom

  • Reason to size liveCache filesystems 2x RAM?

    Hi,
    I am sizing liveCache database for future rollouts of existing system.  (liveCache 7.7.04, SCM 5.1, Unix platform)
    In the liveCache installation guide, the recommendation is to size the unix sapdata filesystems to be 2x the size of RAM.
    SAP SCM 5.1 Standalone Engine SAP liveCache Technology 7.7: UNIX  Document version: 1.0 ‒ 08/31/2007
    Page 11, section 3.3
    File System Name                 Description    Recommendation
    /sapdb/<LC_NAME>/sapdata[1-n]    Data Volumes   2 x RAM, minimum 3 GB
    /sapdb/<LC_NAME>/saplog          Log Volume     2 GB
    What is the reason for this recommendation? I assume it has something to do with the history data in the data cache during system shutdown, and not with the 1x RAM for the KernelDumpFileName path or the >1x RAM for the backup directory, because only the liveCache database, not the KernelDumpFileName or backup directories, is in the sapdata* filesystems.
    In these frugal economic times, we are requested to justify hardware expenditures. 2x RAM seems excessive for the sapdata filesystems when RAM is 200-350 GB, in addition to the 1x RAM for KernelDumpFileName and the >1x RAM of backup space needed.
    Also, I assume it means the total space of all sapdata filesystems is requested to be 2x RAM, and not each individual filesystem.
    Is there more explanation, clarification or justification on the disk sizing for liveCache?
    Thanks in advance,
    Margaret

    Hello Margaret
    > What is reason for this recommendation?
    Plain and simple: it's necessary to have that amount of space to put data to.
    >I assume it has something to do with the history data in datacache during system shutdown
    ?? Not totally sure what you mean by that.
    However, liveCache implements what is called a "consistent view" of its objects.
    This means that it needs to keep several versions of the same object in the data area at the same time.
    Let's assume that your planning scenarios really use up the whole RAM for the liveCache; this alone does not leave room to keep multiple versions.
    Thus there needs to be more space available in the data area.
    Doubling the size of the RAM for the data area (not for each single filesystem) has proven to be a good starting estimate.
    > and not also the 1x RAM for KernelDumpFileName path, and > 1x RAM for backup directory because only the liveCache database, not the KernelDumpFileName or backup directories are in the sapdata* filesystems. 
    ?? did not get that...
    > In these frugal economic times, we are requested to justify hardware expenditures. 2x RAM seems excessive for sapdata filesystems when the RAM is 200-350GB in addition to the 1x RAM KernelDumpFileName and >1x RAM backup space needed.
    No, it isn't.
    And, really, what does a TB of hard disk space cost these days?
    > Also, I assume it means the total space of all sapdata filesystems is requested to be 2x RAM, and not each individual filesystem.
    You assume correctly.
    > Is there more explanation, clarification or justification on the disk sizing for liveCache?
    That's pretty much it.
    You may of course start with less than the recommended disk space, but you might run into problems with it later on.
    regards,
    Lars
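    Lars's clarification can be captured in a tiny worked example. The helper function and the 8-filesystem split are our own illustration; the 2 x RAM figure and the 3 GB floor come from the guide quoted above.

```python
# The 2 x RAM recommendation applies to the *total* of all sapdata
# filesystems (with a 3 GB minimum), not to each filesystem
# individually, so the per-filesystem share shrinks as you add volumes.

def sapdata_sizing_gb(ram_gb: float, num_filesystems: int) -> tuple[float, float]:
    """Return (total_gb, per_filesystem_gb) for the sapdata data area."""
    total = max(2 * ram_gb, 3.0)          # 2 x RAM, minimum 3 GB
    return total, total / num_filesystems

total, each = sapdata_sizing_gb(ram_gb=300, num_filesystems=8)
print(f"total sapdata: {total:.0f} GB, per filesystem: {each:.1f} GB")
# total sapdata: 600 GB, per filesystem: 75.0 GB
```

    So for the 200-350 GB RAM sizes Margaret mentions, the data area works out to 400-700 GB in total, however many sapdata volumes it is spread across.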

  • Oracle Database Sizing

    I'm looking for a source of information regarding Oracle database sizing. Can anyone help me?
    Thank you.

    There are some good papers written on this.
    I think I have one of the papers from IOUG.
    Let me email you... If that doesn't help, let me know.
    Good Luck
    Shah

  • ThumbRule for Database server sizing

    Hi all,
    Please help me in sizing the Oracle database server.
    Is there any rule of thumb for sizing depending on concurrent users?
    Any links??

    > Please help me in sizing the Oracle database server. Is there any rule of thumb for sizing depending on concurrent users? Any links??
    Sizing what?
    1. Disk
    2. Memory
    3. No. of CPUs
    4. SGA
    5. All of these.
    hare krishna
    Alok

  • Resource estimation/Sizing (i.e CPU and Memory) for Oracle database servers

    Hi,
    I have come across a requirement for Oracle database server sizing in terms of CPU and memory. Does anybody have Metalink notes or a white paper with basic estimates or calculations for resources (i.e. CPU and RAM) based on database size, number of concurrent connections/sessions, and/or number of transactions?
    I have searched a lot on Metalink but failed to find such a thing; it will be a great help if anybody has ideas on this. I'm sure it must exist, because to start an IT infrastructure implementation one has to estimate resources in line with the IT budget.
    Thanks in advance.
    Mehul.

    You could start the other way around: if you already have a server, is it sufficient for the database you want to run on it? Is there sufficient memory? Is it solely a database server (not shared)? How fast are the disks: SAN/RAID/local disk? Does it have the networking capacity (100 Mbps, gigabit)? How many CPUs, and will there be intensive SQL? How does Oracle licensing fit into it? What type of application will run on the database: OLTP or OLAP?
    If you don't know whether there is sufficient memory/CPU, then profile the application based on what everyone expects. Again, start with OLTP or OLAP and work your way down to the types of queries/jobs that will be run, the number of concurrent users, and what performance you expect/require. For an OLAP application you may want the fastest disks possible, multiple CPUs, and a large SGA and PGA (2-4 GB PGA?), and you may pay a little extra for parallel server and partitioning in license fees.
    This is just the start of an investigation, then you can work out what fits into your budget.
    Edited by: Stellios on Sep 26, 2008 4:53 PM

  • Server Sizing For Oracle Database

    Hi All,
    I need a server sizing for the architecture mentioned below:
    This application is basically for a logistics company, which we are planning to host centrally with two servers: one for the application and one for the Oracle database, along with a DR site (other location). There are four locations, and each location will have 20 users who will access this application (20 x 4 = 80 users). We are using an MPLS network with 35 Mbps of bandwidth.
    1. Application server: Windows Server 2008 R2
    2. Database server: Windows Server 2008 R2, Oracle 11g R2
    I need a server sizing document.
    Thanks........

    EdStevens wrote:
    Justin Mungal wrote:
    EdStevens wrote:
    user1970505 wrote:
    Hi All,
    I need a server sizing for the below mentioned architecture:
    This application is basically for logistics company which we are planing to host it centrally with two server's one server for application and one for oracle database along with DR site (Other Location). There are four locations and each location will have 20 users who are going to access this application (20 x 4= 80 Users). We are using MPLS network of 35 mbps bandwidth.
    1. Application server: Windows server 2008 R2
    2. Database Server: Windows server 2008 R2, Oracle 11g r2
    I need a server sizing documents.
    Thanks........
    I'd seriously reconsider hosting Oracle DB on Windows. Obviously there are many, many shops that do, and obviously it is often a case of not having (and choosing not to acquire) expertise in Linux. But I've been in IT for 30+ years and have worked on IBM S-370 and its variants and descendants, Windows since v3, DEC VMS, IBM OS/2, Solaris, AIX, HP-UX, and Oracle Linux. The first Oracle database I ever created was on Windows 3.11, and at that point I had never seen *nix. Now I am in a position to state that Windows is the worst excuse for an operating system of any I have ever used. I am constantly amazed/amused by how often (at least once a month on schedule, plus unplanned times) our Windows SA has to send out a notice that he is rebooting his servers. I can't remember the last time we had to reboot a Linux server (I have 4 of them).
    Yes, I'm biased away from Windows, but that bias comes from experience. Hardly a day goes by that I don't see something that causes me to say to whoever is in earshot, "Have I told you how much I hate Windows?"
    I was going to refrain from commenting on that, as I assumed they're a Windows shop and aren't open to any other OS (but my assumption could be incorrect).
    I haven't been working in IT for as long as many of the folks around here, only about 10 years. I'm a former system admin that maintained both Linux and Windows servers, but my focus was on Windows. In the right hands, Windows can be rock solid. If a system admin has to reboot Windows servers often, he is most likely doing something wrong, or is rebooting for security updates. It's never as simple as "Windows Sucks," or "Linux Sucks;" it all depends on who's running the system (again, in my opinion).
    I have seen some Windows servers run uninterrupted for so long that no one could remember the admin password. But more often, memory leaks and the "weekly update" (replacing last week's bugs with this week's) are the culprit.
    Yes, it really is sad how often you have to reboot for updates if you want to keep your system current. Mind you, it's better to have the fixes than not to have them (maybe). I rebooted my servers about once a month at my old place... which is not that bad.
    With that said, in my experience, Oracle on Windows is a major pain. It takes me much longer to do anything. Once you get proficient with a CLI like the bash shell, the Windows GUI can't compare.
    Agreed. One of my many complaints about Windows is the poor excuse for a shell processor. I'm pretty proficient in command-line scripting, but still cringe when I have to do it. Practically every line of code I write for a command script is accompanied by the remark "this is so lame compared to what I could do with a shell script". Same for vi vs. Notepad. But my real problem is the memory leaks and the registry. I'm fairly comfortable hacking certain areas of the registry, but the need to do so, and the arcane linkages between different areas of the registry and how they influence the 'process environment', remain a mystery to all but a tiny minority of admins. Compare to *nix, where everything is well documented and "knowable".
    One (of many) anecdotal experiences, this with my personal Win7 laptop. One time it crashed and refused to reboot. A bit of a Google search turned up some arcane keystroke sequence to put it into some sort of recovery mode on bootup, similar to getting into the BIOS, but the keystroke sequence was much more complex... it may have involved standing on one foot while entering the sequence. Anyway, it entered a recovery process I've never seen before or since and repaired everything. My first thought was "hey, that was pretty cool." Then my second thought was "but only Windows would need such a facility."
    Bottom line? To paraphrase a famous Tom Hanks character, "My momma always said Windows was like a box of chocolates. You never know just what you'll get."
    Haha... I like that one. Yes, the registry is definitely horrible. It's amazing to me that a single point of failure was Microsoft's answer to INI files.
    I think Windows and *nix both have their places. Server work definitely seems more productive to me in a *nix environment, but I think I'd jump off a cliff if I had to use it as my desktop environment day in, day out. The other problem is application lock-down; I can't blame the OS for that, but it's a reality... and using virtualization to run those applications seems to defeat the point to me.

  • CRM TPM Database Sizing for CRM and BW

    All,
    I am currently sizing a TPM implementation and have a couple of questions concerning storage capacity for CRM and BW. I have reviewed and created an Excel spreadsheet based on the SAP Sizing Guide for CRM-TPM, but I am coming up short in a couple of areas.
    Here is the document Link: [https://websmp105.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000711312004E]
    1. Is there a storage sizing guide for BW, or what has worked for the community to estimate it?
    2. Is the sizing guide for CRM/TPM correct (see below example)?
    3. What has worked for CRM/TPM database sizing in the community?
    I have a question about section 3.3.3, Disk Sizing in CRM. If the disk sizing is based on per promotion (for the condition generation process), why is there a multiplication factor for PARTNERS? I don't believe we would have more than 1 or 2 partners per promotion.
    I did some quick math with some example numbers and came up with about 2.9TB for the CRM database.  See below for additional info based on the equation in section 3.3.3.
    Part 1
    20,000 Promotions
    10 Products                                         
    1000 Partners                                     
    .87TB                                                   
    Part 2
    10,000 Promotions
    47 Products
    1000 Partners
    2.04TB
    Is this accurate for sizing the condition generation process for the CRM database?  I am failing to understand why, for example, the 20,000 promotions would have 1,200 partners included in the base equation for each promotion.
    I appreciate any time you could spend in responding to my question.
    Thanks in advance,
    Steve
    Edited by: Steve Rocha on Jan 7, 2010 5:07 PM
    Edited by: Steve Rocha on Jan 7, 2010 5:09 PM
    Edited by: Steve Rocha on Jan 7, 2010 5:09 PM
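    Steve's two example calculations can be reproduced with a back-of-the-envelope sketch. Note the bytes_per_record constant below is NOT from the SAP guide: it is a value fitted so that both of his example figures come out right, so treat this only as an illustration of the promotions x products x partners multiplication, not as the official formula.

```python
# Reverse-engineered check of the figures above. BYTES_PER_RECORD is a
# fitted assumption (~4.8 KB per promotion/product/partner combination),
# not an SAP-published constant.

TIB = 2**40
BYTES_PER_RECORD = 4780   # assumed condition-record footprint (fitted)

def condition_volume_tib(promotions: int, products: int, partners: int) -> float:
    records = promotions * products * partners
    return records * BYTES_PER_RECORD / TIB

part1 = condition_volume_tib(20_000, 10, 1000)
part2 = condition_volume_tib(10_000, 47, 1000)
print(f"Part 1: {part1:.2f} TB, Part 2: {part2:.2f} TB, total: {part1 + part2:.2f} TB")
# Part 1: 0.87 TB, Part 2: 2.04 TB, total: 2.91 TB
```

    Written this way, the multiplication also makes Steve's question concrete: with 1 or 2 partners per promotion instead of 1000, the same arithmetic would shrink the estimate by two to three orders of magnitude, which is why the PARTNERS factor matters so much.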

    Thanks, Steve, for your reply.
    I am looking for a sizing sheet from an SAP TPM perspective. It would be great if you could share your Excel spreadsheet based on the SAP Sizing Guide for CRM-TPM.
    regards
    AK
