Performance impact due to IO r/w on the SYSTEM tablespace

I see a lot of I/O on our SYSTEM tablespace datafiles at all times, even though there is negligible user activity on the database. What could be the possible cause for this?
Because of this, the server is experiencing a lot of I/O waits.
Any suggestions please? Thanks.

There are two things here. First, AWR is available only from 10g onward, so on your release the most suitable monitoring tool you have is the classic Statspack.
Second, if you don't have a temporary tablespace there will be no error, but your SYSTEM tablespace will be used for temporary segments, which is a very bad practice, as you have seen. I recommend that, once you have confirmed this issue, you create a temporary tablespace and make sure all users are set to place their temporary segments in that temp tablespace.
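A minimal sketch of that fix (the file path, tablespace name, and user name are illustrative):
-- Create a proper temporary tablespace so SYSTEM is no longer used for sorts
CREATE TEMPORARY TABLESPACE temp01
  TEMPFILE '/u01/oradata/ORCL/temp01.dbf' SIZE 500M AUTOEXTEND ON;
-- Find users whose temporary tablespace is still SYSTEM
SELECT username, temporary_tablespace
  FROM dba_users
 WHERE temporary_tablespace = 'SYSTEM';
-- Reassign each affected user
ALTER USER scott TEMPORARY TABLESPACE temp01;
-- From 9i onward you can also make it the database-wide default
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;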
~ Madrid.

Similar Messages

  • Performance impact due to varchar2 length

    hi,
    We know VARCHAR2 stores data dynamically, i.e. it takes storage space only for the data it actually holds. Then what is the need for defining a VARCHAR2 length at all? It could simply default to 4000. Please treat data consistency/security/maintenance concerns as a separate issue.
    Take an example:
    SQL>Create table T1
    (sex varchar2(1));
    SQL>Create table T2
    (sex varchar2(4000));
    SQL>Insert into T1 values('M');
    SQL>Insert into T2 values('M');
    SQL>select * from t1;
    s
    M
    SQL>select * from t2;
    sex
    M
    Is there any other performance/storage/etc. impact of these two table designs?
    Any comments?
    Rgds,
    Rup

    Well, please check this explain plan.
    SQL> desc test1;
    Name Null? Type
    A1 VARCHAR2(1)
    A2 VARCHAR2(1)
    A3 VARCHAR2(1)
    A4 VARCHAR2(1)
    A5 VARCHAR2(1)
    A6 VARCHAR2(1)
    A7 VARCHAR2(1)
    A8 VARCHAR2(1)
    A9 VARCHAR2(1)
    A10 VARCHAR2(1)
    A11 VARCHAR2(1)
    A12 VARCHAR2(1)
    A14 VARCHAR2(1)
    A15 VARCHAR2(1)
    SQL> desc test2;
    Name Null? Type
    A1 VARCHAR2(4000)
    A2 VARCHAR2(4000)
    A3 VARCHAR2(4000)
    A4 VARCHAR2(4000)
    A5 VARCHAR2(4000)
    A6 VARCHAR2(4000)
    A7 VARCHAR2(4000)
    A8 VARCHAR2(4000)
    A9 VARCHAR2(4000)
    A10 VARCHAR2(4000)
    A11 VARCHAR2(4000)
    A12 VARCHAR2(4000)
    A13 VARCHAR2(4000)
    A14 VARCHAR2(4000)
    SQL> set autotrace on
    SQL> select a1 from test1 where rownum<=1;
    A
    a
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=2)
    1 0 COUNT (STOPKEY)
    2 1 TABLE ACCESS (FULL) OF 'TEST1' (TABLE) (Cost=2 Card=80 Bytes=160)
    SQL> select a1 from test2 where rownum<=1;
    A1
    a
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=2002)
    1 0 COUNT (STOPKEY)
    2 1 TABLE ACCESS (FULL) OF 'TEST2' (TABLE) (Cost=2 Card=80 Bytes=160160)
    What is this? The explain plan shows a major difference in the Bytes field.
    Is this a major performance issue?
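    A likely explanation (hedged, since it depends on whether column statistics have been gathered): the optimizer derives its Bytes figure from the column width it believes, and without statistics it assumes roughly half the declared length for a VARCHAR2 column, so a VARCHAR2(4000) is costed at about 2000 bytes per row even when it holds a single character. You can compare declared and actual widths yourself:
    -- Declared length vs. average actual length per column
    SELECT table_name, column_name, data_length, avg_col_len
      FROM user_tab_columns
     WHERE table_name IN ('TEST1', 'TEST2')
     ORDER BY table_name, column_name;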
    Rgds,

  • Avoiding performance issue due to loop within loop on internal tables

    Hi Experts,
                I have a requirement where I want to check whether each of the programs stored in one internal table is called from any of the programs stored in another internal table. For this I am looping over two internal tables (a loop within a loop), which is causing a major performance issue: the program runs very, very slowly.
    Can any one advise how to resolve this performance issue so that program runs faster.
    Thanks in advance.
    Regards,
    Chetan.

    Forget the parallel cursor stuff; it is much too complicated for general usage and helps almost nothing. I will publish a blog in the next few days where this is shown in detail.
    A loop within a loop is no problem if the inner table is a hashed or sorted table.
    If it must be a standard table, then you must make a bit more effort and facilitate a binary search (READ TABLE ... BINARY SEARCH to find the first match, then LOOP AT ... FROM that index and EXIT when the key changes).
    see here the exact coding Measurements on internal tables: Reads and Loops:
    /people/siegfried.boes/blog/2007/09/12/runtimes-of-reads-and-loops-on-internal-tables
    And don't forget: the outer table does not need to be sorted, since the loop reaches every line anyway. The parallel cursor requires both tables to be sorted, and the additional sort consumes nearly the whole advantage of the parallel cursor compared to the simple but good loop-in-loop solution.
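    A minimal sketch of the sorted-inner-table variant (all names here are hypothetical):
    TYPES: BEGIN OF ty_prog,
             name TYPE progname,
           END OF ty_prog.
    DATA: lt_callers TYPE STANDARD TABLE OF ty_prog,
          lt_called  TYPE SORTED TABLE OF ty_prog WITH UNIQUE KEY name.
    FIELD-SYMBOLS: <ls_caller> TYPE ty_prog.
    LOOP AT lt_callers ASSIGNING <ls_caller>.
      " keyed read instead of an inner loop over a standard table
      READ TABLE lt_called WITH TABLE KEY name = <ls_caller>-name
           TRANSPORTING NO FIELDS.
      IF sy-subrc = 0.
        " the program is called somewhere: handle the match here
      ENDIF.
    ENDLOOP.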
    Siegfried

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables based on object types, which are passed between multiple functions in the package.
    Will this have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact as the data grows?
    Regards,
    Oracle User

    user9080289 wrote:
    While creating a package, I am using a lot of nested tables based on object types, which are passed between multiple functions in the package.
    Not the best of ideas in general in PL/SQL. This is not client code that can lay sole claim to most of the memory; it is server code, one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
    Will it have any performance impact since all the data is stored in memory?
    Interestingly, yes. Usually crunching data in memory is better; in this case it may not be so. The memory used is the most expensive memory Oracle can use: the PGA, private process memory. This means each process running that code will need lots of memory.
    If you are not passing the data structures by reference, the demands on memory are even bigger, as each data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on keeping it in physical memory, that it thrashes memory management: the swap daemons are unable to keep up with the demand for swapping virtual memory pages into and out of memory, and most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    How can I measure the performance impact when the data grows?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is the factor, just how much private process memory your code needs in order to execute.
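    A simple way to watch that, sketched here with the standard v$ views (run as a privileged user while the package executes):
    -- PGA consumption per session, largest consumers first
    SELECT s.sid, s.username,
           ROUND(p.pga_used_mem  / 1024 / 1024, 1) AS pga_used_mb,
           ROUND(p.pga_alloc_mem / 1024 / 1024, 1) AS pga_alloc_mb,
           ROUND(p.pga_max_mem   / 1024 / 1024, 1) AS pga_max_mb
      FROM v$session s
      JOIN v$process p ON p.addr = s.paddr
     WHERE s.username IS NOT NULL
     ORDER BY p.pga_max_mem DESC;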

  • EBS performance impact using it as a Data Source

    I have a quick question on EBS performance. If I set up the EBS database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS due to SSRS accessing EBS data for report generation? I know there will always be a hit depending on the volume of data being accessed, but my question is: will it be significantly higher with an external reporting tool over an ODBC connection than with the native XML Publisher?

    Hi,
    Tough to answer without looking at data; my suggestion would be to set up a test EBS environment, get permission from the vendors to run performance tests without buying a license, compare AWR reports from both scenarios, and then decide.
    Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
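    If you do run that comparison, here is a sketch of how each test could be bracketed with AWR snapshots (this assumes the Diagnostics Pack license that AWR requires):
    -- take a snapshot before and after each report workload
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- ... run the SSRS (or XML Publisher) workload here ...
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- then generate the interval report with @?/rdbms/admin/awrrpt.sql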
    Hope this helps.
    Regards,

  • Change Tracker Performance Impact

    MDM: 7.1
    CE: 7.2
    ERP: 6.0 EHP4
    Hi,
    We are currently using CE/BPM based central master data management. A custom application is being developed for collaborative master data authoring.
    As part of the MDM configuration, the client wishes to track changes on MDM records for audit purposes. We are looking at the MDM Change Tracking facility provided by SAP but are not sure about the performance impact it will have on the MDM server.
    We have over 300 attributes for the object we wish to track for changes. Not all attributes will change all the time but it is expected that the overall number of changes every month will be over 1000, each change including approx 20-30 fields. The number of users is expected to be approx 15 initially but will increase over time.
    I have seen people on the SCN forums talking about potential performance degradation from enabling change tracking. Has anyone actually experienced performance degradation due to enabling change tracking for MDM records? If so, have you tried any means of keeping the impact low, e.g. by allocating more resources to the MDM server?
    Thanks and regards,
    Shehryar

    Hi,
    Change history does have an impact on system performance. This can be controlled via regular archiving of the change history database; for example, change history data older than 3 months can be stored in another repository.
    A second option could be to export all change history data older than 1-2 days to a BI system (using a regularly scheduled job) and run the change history reports there. That will bring a drastic improvement in MDM system performance, and the last 1-2 days of data can still be viewed from the MDM change history database itself.
    Regards,
    Prashant

  • Does mapping to JNDI for ejb-refs have a performance impact?

    Does mapping to JNDI names for ejb-refs have a performance impact for
    WLS6.1?
    In the ejb-jar.xml deployment descriptor, you can define an EJB reference as follows:
    <ejb-ref>
      <ejb-ref-name>ejb/AccountHomeRemote</ejb-ref-name>
      <ejb-ref-type>Entity</ejb-ref-type>
      <home>com.packagestructure.AccountHomeRemote</home>
      <remote>com.packagestructure.AccountRemote</remote>
      <!-- <ejb-link>AccountBean</ejb-link> -->
    </ejb-ref>
    Now, optionally, you can use the ejb-link element, or you can specify, in the WebLogic deployment descriptor, a JNDI name that the ejb-ref maps to.
    Using a JNDI name buys you some flexibility: if, for some reason, the jar is divided, you don't have to convert ejb-links to JNDI names. The question is:
    is there a performance cost in doing this?
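    For reference, a sketch of the WebLogic-side mapping in weblogic-ejb-jar.xml (the bean and JNDI names here are illustrative):
    <weblogic-enterprise-bean>
      <ejb-name>OrderBean</ejb-name>
      <reference-descriptor>
        <ejb-reference-description>
          <ejb-ref-name>ejb/AccountHomeRemote</ejb-ref-name>
          <jndi-name>AccountHomeRemote</jndi-name>
        </ejb-reference-description>
      </reference-descriptor>
    </weblogic-enterprise-bean>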
    Cheers

    Hi Aashish,
    Thanks for your quick reply. It was helpful, but I am not using RFCs. Correct me if I am wrong, but I have explained the scenarios in detail below.
    Scenario 1. Synchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) The abap proxy code executes a function module and sends the response as an internal table back to PI.
    5) PI receives the response and places it in a text/csv file and places it back to another folder.
    I assume that the above would be possible only using BPM. What I understand is that in order for an interface to both receive and send data, abstract mappings are to be used, and for this BPM is required. We do not have any conversions etc.; it is just a simple matter of receiving an internal table from ECC and creating a file to place in the folder.
    I also understand that BPM can have bottlenecks due to queue and cache issues; messages might be pending, lost, etc.
    Scenario 2. Asynchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) The ABAP proxy code executes the same function module, calls a separate outbound interface, and passes the values to it. This is used for sending the response back.
    5) PI receives the response from the second interface, places it in a text/csv file, and writes the file to another folder.
    I would like to know which would be the better approach. Documentation/references to support your claims would be much appreciated.
    Cheers,
    Mz

  • DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?

    Hi All,
    To free up space I have been advised to use the command below on the primary database in log shipping, and I just want
    to clarify whether it has any performance impact on the primary database in log shipping, and also whether it is a recommended practice to run it
    at regular intervals when the log is using much of the drive's space. Please suggest. Thank you.
    DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)
    Regards,
    Kalyan
    ----Learners Curiosity Never Ends----

    Hi Kalyan \ Shanky,
    I was not clear in the linked conversation, so I am adding something.
    As per http://msdn.microsoft.com/en-us//library/ms189493.aspx,
    TRUNCATEONLY is applicable only to data files.
    BUT
    as per http://technet.microsoft.com/en-us/library/ms190488.aspx,
    TRUNCATEONLY affects the log file.
    And I also tried it; it does work.
    Now, TRUNCATEONLY releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent, and target_percent is ignored if specified with TRUNCATEONLY.
    So:
    1. If you remove unused space, it will not affect log shipping and no log chain will be broken.
    2. Clearing unused space does not touch existing data, so there is no performance issue.
    3. If you clear space and the log file then auto-grows again because of other operations, it puts unnecessary pressure on the database to allocate disk each time. So once you find the maximum size the log file grows to, leave it at that size; it will only grow back to it anyway.
    4. Shrinking the log file is not recommended if it keeps growing back to the same size, unless you have a space crunch.
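    A short sketch of both variants (the logical file name comes from the post above; sizes are in MB):
    -- check how full each database's log currently is
    DBCC SQLPERF(LOGSPACE);
    -- shrink toward a 2048 MB target, keeping freed pages inside the file
    DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE);
    -- or release only the trailing free space to the operating system
    DBCC SHRINKFILE ('CommonDB_LoadTest_log', TRUNCATEONLY);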
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support
    to their enterprise software, what is your plan for
    quantifying the performance impact and the functional
    correctness of using web services before going live with the
    final application?
    Empirix is hosting a free one hour web event discussion on
    web services testing and automated web services testing
    solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web
    events being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promise of
    integrating multiple software applications across
    heterogeneous networks and of exchanging information
    vendor-to-vendor or vendor-to-consumer in a
    standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be
    critical that web services undergo performance testing.
    As with any enterprise software project, the adoption of
    proper test methodologies and use of testing tools will
    play a key part in the overall success or failure of
    projects utilizing web services. In a compressed
    software project schedule, an organization must
    quickly determine if its web services will operate
    successfully under a variety of load conditions. Like other
    web-based technologies, successful web services will need
    to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing
    challenges created by this emerging technology, along with
    the variety of testing solutions available. Automated
    web service testing will be discussed and demonstrated
    using FirstACT, the first web services performance testing solution available
    on the market. Using a sample web
    service, automatic test case creation, scalability testing,
    and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT


  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application?
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promise of integrating
    multiple software applications across heterogeneous networks and of
    exchanging information vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • Index creation online - performance impact on database

    hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table with more than 255 columns, about 400GB in size, which is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of a table 'TBL' while applications are active against the same table? So basically: does index creation on an object during DML operations on the same object have a performance impact on the database, and is there a major difference between creating the index online and not online?
    2. I tried to build an index on a column which has NULL values, on this same table 'TBL' which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested that the applications be shut down, but the index creation, even with a parallel degree of 4, took more than 6 hours to complete.
    We have a pre-prod database which holds an exported and imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
    When I created the same index on the same column there, it took only 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented prod copy, compared to only 15 minutes on the highly defragmented pre-prod copy.
    Any thoughts would be helpful.
    Thanks.
    Phil.
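    For reference, a minimal sketch of the kind of online build being described (table, column, and index names are illustrative):
    -- online, parallel build; reset the degree afterwards so the index
    -- does not permanently advertise PARALLEL 4 to the optimizer
    CREATE INDEX tbl_col1_idx ON tbl (col1) ONLINE PARALLEL 4;
    ALTER INDEX tbl_col1_idx NOPARALLEL;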

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single-pass or multi-pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
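    One way to answer the single-pass/multi-pass question, sketched with the standard v$ views:
    -- cumulative counts of optimal / one-pass / multi-pass workarea executions
    SELECT name, value
      FROM v$sysstat
     WHERE name LIKE 'workarea executions%';
    -- V$SQL_WORKAREA_ACTIVE shows the same per operation while the build runs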
    Regards
    Jonathan Lewis

  • Performance impact related to workbook

    Hi All,
    Please let me know the performance impact on workbooks if I insert 13 worksheets:
    how much of a performance impact will there be?
    If I instead put all the reports in 2 worksheets by splitting them,
    how much of a performance impact will there be?
    Please let me know which is the better approach.
    Urgent!
    Thanks,
    Sarau

    Hello Sarau,
    The performance is based purely on the query selection, since that decides the number of records to be fetched.
    The more you filter the query, the better the performance will be.
    Since you have many queries in one workbook, make sure that all of them have a common variable screen.
    Thanks
    Chandran

  • Performance impact on the size of the CHM file

    Is there any impact on performance depending on the size of a
    CHM file?

    The main issues people have with help file performance
    (regardless of whether it is a CHM file) are related to the number
    of images, DHTML hotspots, bookmarks and links they have in a
    topic. The number of topics in a CHM should not be an issue. What
    exactly are you trying to assess the performance impact of?

  • Regarding performance impact if I do DB access coding in the component controller

    Hi,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. The tool generates the Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these classes use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server.
    The issue:
    Performance: the first time it is OK, but on consecutive calls the application degrades very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue caused by my having written the data-access code in the component controller rather than in beans?
    My question regarding performance:
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code should be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled at the COM+ components in VB.
    Since I am using the COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool; all the objects which retrieve the data are closed and nullified.
    My question is:
    if I write the DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran

  • Performance impact in Oracle 8i - BLOB vs BFILE

    Hi guys,
    We are evaluating interMedia to store multimedia objects.
    Does anyone know if storing and retrieving documents in the Oracle database has an impact on standard data stored in the database?
    Is it worth having a separate database instance for storing tables with interMedia objects?
    Pal

    Part 2:
    Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips comprising a total size of 250MB (average size 512K bytes). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes or 250 k bytes for the index and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
    create table video_items (
      video_id   number,
      video_clip ordsys.ordvideo
    )
    -- storage parameters for the table in general
    tablespace video1 storage (initial 1M next 10M)
    -- special storage parameters for the video content
    lob (video_clip.source.localdata) store as
      (tablespace video2 storage (initial 260k next 270M)
       disable storage in row nocache nologging chunk 32768);
    Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56K bytes. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space efficient to choose a smaller chunk size, say 8K, to store the data in the lob. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
    Estimating retrieval costs
    Performance testing has shown that Oracle can achieve equivalent and even higher throughput performance for media content retrieval than a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. For the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQLNet performance. These tests were performed on Windows NT 4 SP5.
    Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million bytes/sec to 7 million bytes/sec as the number of clients ranged from 1 to 5) One reason for the very high relative CPU cost at the higher end of performance is that as the 100 Mbs network approaches saturation, the system used more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
    NOTE WELL: The extra CPU cost factors pertain only to media retrieval aspect of the workload. They do not apply to the entire system workload. See example.
    Example: A file based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.

Maybe you are looking for

  • Latest Address and the Previous Address in the last 30 days

  • SQL developer 1.5.1

  • ActiveX problem again!

  • Information broadcasting in BW

  • Navigational problem